oh my god now there's a "future of prison" startup that says it will use AI and brain implants to plant fake memories into the brains of criminals, "rehabilitating" them in minutes instead of years and I am going to spend the rest of the day screaming now
sciencetimes.com/articles/509...
This whole company is obviously making shit up and selling snake oil, but also it is important for everyone to understand that it is morally good to set fire to this company's headquarters
Yeah, like - it would be terrifying if it worked, but mostly I just wonder how these idiots get people to uncritically publish shit about a technology that does not exist and doesn't work.
There's "doesn't work" like ChatGPT doesn't work and Rabbit doesn't work or whatever.
But this is more like how the Millennium Falcon doesn't exist and doesn't work, even though someone made a YouTube short about building a Lego model of it.
yeah like if this "worked" the way ChatGPT worked, we'd at least have tons of stories of people who cured PTSD or whatever by having new memories injected
we'd have a queue of desperate people trying to figure out how to build the tech on their own even if it was shit
etc
Meanwhile, we don’t even know what memory is, how (or if!!) it is stored in the brain, how to reliably access it, etc. Talking about implanting false memories and feelings is just…we would have no idea even how to START doing this. This isn’t a legitimate business plan
Right. You'd need to invent like three or four large technologies before you could even start building the space prison with time-slowing holodecks. Tho also, to reiterate, any company that tries this should literally be set on fire.
These fucky gits should pay me to write their fictitious business plans since they think their shit “AI” is going to take my job by writing all the books within 3 years
Just here from Twitter, having read a bunch of these same magical thinkers actually arguing with *Grady Booch,* who has said repeatedly that LLMs are architecturally incapable of reasoning. Oh, but “Turing Complete” & “neural net” & “symbolic transformers!!!”
None of them has a clue how LLMs work.
At the most superficial levels it’s fine. Information goes in, processing occurs, an output is produced. But yeah, any deeper than that is trouble. “Human memory is stored in the-“ no it’s not. “Computers think about-“ no they don’t.