Not totally surprising, though. This might sound weird, but LLMs don't really know what a "letter" is. They process text as essentially whole words, or at least subword-sized chunks called tokens.
They pick up some spelling through training, but it's still sort of an alien concept to the architecture.
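A rough illustration of the idea: real models use learned BPE merges over a vocabulary of tens of thousands of entries, but a toy greedy longest-match tokenizer with a hand-picked vocab (purely hypothetical, not any actual model's tokenizer) shows why letters disappear from the model's view:

```python
# Toy greedy longest-match subword tokenizer.
# NOTE: illustrative only -- real LLM tokenizers use learned BPE merge
# rules, not this tiny hand-picked vocabulary.
VOCAB = {"straw", "berry", "str", "aw", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(word: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(word):
        # Take the longest vocabulary entry that matches at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

The model never sees the ten letters, just two opaque token IDs, so a question like "how many r's are in strawberry" asks about structure the input representation has already thrown away.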
This is what people don't understand. Adding references doesn't matter, because the program is just stringing together word tokens that it statistically associates with each other, that's it. There is no cognition. It's suggested text on steroids.