At root, Large Language Models are just autocomplete. They do it fast, and in far more complex and lengthy ways than the autocomplete on your phone, but the basic function is the same. That so many people are using them for entirely different purposes (they're not Google!) is both amusing and horrifying.
Google's AI can pull together nouns that frequently appear in proximity, but it can't determine the relationship between them. "Abraham Lincoln" and "13th Amendment" often appear together, and Amendments are often said to be "crafted," so the algorithm assumes that history works like math.
LLMs essentially place bets on which letters and words ("tokens") are most likely to follow or be connected with the input tokens, based upon the enormous amounts of text/tokens they're "trained" on. They are often correct, but many times are spectacularly incorrect.
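To make the "placing bets" idea concrete, here's a toy sketch of the same objective at miniature scale: a bigram model that, given one token, guesses the token that most often followed it in its training text. This is an illustration I'm adding, not how any real LLM is built (those use neural networks over long contexts), but the underlying task of predicting the next token is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often in training, or None."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

# "cat" followed "the" twice; "mat" and "fish" only once each,
# so the model bets on "cat" -- regardless of what's actually true.
print(predict_next("the"))  # -> cat
```

Note that the model has no idea what a cat is; it only knows which token won the frequency bet. Scale that up by trillions of tokens and you get fluent output that is still, at bottom, the same wager.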
The biggest mistake we've made is using the word "intelligence" to describe what are basically large-scale pattern-recognition and pattern-reproduction tools. All the anthropomorphic language describing them grows from there, and it suggests they do more than what they're designed to do.
We're told the future is AI, that the use of LLMs in every context from education to health care to business is inevitable, and that the extant problems will be fixed in the next release (always coming, never arriving). It's gaslighting.
They're selling the simulacra, assuming that most of us won't care. I'm afraid they may be right. In our vibes-based culture, just "feeling human" may well be enough to satisfy.
You might like The AI Mirror. I heard an interesting interview with the author. Her point: AI is just reflecting the work that humans have always done. It isn't creating anything new, just shuffling things around. It's also a huge violation of copyright.
global.oup.com/academic/pro...
I’ve only had the tiniest secondhand exposure to edtech, but it seems very hype-driven, and the players in the space seem to put in as little effort as possible without losing all their customers.
I will never not repost this piece when this discussion hits my feed: "Anthropomorphising concepts such as using 'hallucination' as a term help dismiss the fact that statistical responses are completely disconnected from meaning and facts." softwarecrisis.dev/letters/llme...
U Iowa's Belin-Blank Center for gifted education used to use the ACT. I know a 3rd grader who scored in the 95th percentile on that college-bound test. Let the LLMs take the ACT and SAT and report the results.