And yet, when I ask a question with what appears to be good context (a properly engineered prompt, if you will), I get apparently useful results.
For example:
This question was about what damp leasing means in the context of airline operations. I had no idea. Now I do.
I've subscribed to Aviation Week for over two decades.
My experience is that one of the biggest barriers to understanding the nuances of a specialty is not knowing what the various technical terms mean, i.e., the jargon.
I had never heard the terms damp leasing or wet leasing before. ChatGPT explained them well, given a sufficient prompt.
I argue that makes ChatGPT useful.
Dear Browser Developers: Please develop an on/off function so we can decide for ourselves whether we want to view AI images. I don't want to spend half of my browsing time figuring out what's real and what isn't. Congress: Required labeling would be nice as well.
Artificial Dumbs already drive our cars!
That's why you have to be ready to take over from them at any instant, a type of job that we know for sure humans suck at.
If I let go of the wheel of my Jeep (while driving) the results will be double-plus suboptimal.
My insurance carrier will be angry.
The police may have questions.