i've been seeing stories on here today from teachers about how their students are using chatgpt as a "search engine" and getting outright combative if you challenge the strange, wrong shit they're learning from it
so it's a lil bad in more than one way
Distilled and fine-tuned models that can do language tasks roughly as well as a single human can be run quite efficiently. But they are not that useful, because unlike humans they cannot be continuously trained.
That's kind of a good thing. Continual fine-tuning on its own ideations and on the results of its decisions would yield a unique internal language of thought and the development of a sense of self. In other words, if we don't want to accidentally make a person, we should stay in charge of the fine-tuning process.