Kind of funny how Clarke's HAL, Asimov's robots, etc. assumed hyper-rational AIs strictly obeying their design principles and rules, and explored how that could play out problematically. Our reality is the exact opposite: they can't even comprehend when they're lying, being gross, or contradicting themselves.
Classic sci-fi authors: what if somebody told an AI to lie but that breaks its fundamental rules and so the untenable contradiction drives it crazy?
Reality: lol 3 + 5 is 27, see here's an incoherent quote from a nonexistent expert. Also that's how the Supreme Court ruled in a case I just made up.
I will expropriate this image; it will become my sole reaction to any and all AI developments.
But I will wait and give thee the honor of posting such a reaction when AI starts taking over higher-up administrative jobs.