When I first started programming many years ago, there was a BASIC program known as “Eliza” that employed early natural language processing: you typed sentences on the keyboard, and Eliza printed her responses on the monitor. The program’s premise was that Eliza was your psychotherapist; if you made certain statements or asked certain questions, it would respond, sometimes eerily, much as you would expect during an actual therapy session. Eliza had limitations, however. If you interacted with the program for more than a brief period, you would soon get a nonsensical answer. Eliza could emulate intelligent behavior but could not learn or adapt. Given those restrictions, the potential for Eliza to cause humans any actual harm was extremely limited, and in hindsight Eliza would have been better categorized as simulated, rather than artificial, intelligence. If a programmer had maliciously modified Eliza’s code to somehow poison the simulated therapy session, causing the “patient” emotional distress, it would have been a rather simple exercise to hold the programmer who entered the malicious code at fault for any resulting harm.
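The mechanism behind programs like Eliza was simple keyword matching: scan the input for a known pattern and fill a canned response template with fragments of what the user typed. The sketch below is a minimal, hypothetical illustration of that technique; the rules and wording are invented for this example and are not Weizenbaum’s original script.

```python
import random
import re

# Illustrative Eliza-style rules (hypothetical, not the original script):
# each pairs a keyword pattern with canned response templates that may
# reflect part of the user's input back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bmother\b", re.IGNORECASE),
     ["Tell me more about your family."]),
]

# Generic fallbacks used when no rule matches -- the source of the
# nonsensical answers that appear after a brief conversation.
DEFAULT = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Return a canned 'therapist' reply via simple pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)
```

Because every reply is chosen from a fixed table, the program never learns from the conversation; once the input drifts outside the rule set, it can only repeat its generic fallbacks.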
Fast forward to the present day and the release of “chatbots.” According to IBM, “Modern AI chatbots now use natural language understanding (NLU) to discern the meaning of open-ended user input, overcoming anything from typos to translation issues. Advanced AI tools then map that meaning to the specific ‘intent’ the user wants the chatbot to act upon and use conversational AI to formulate an appropriate response. These AI technologies leverage both machine learning and deep learning—different elements of AI, with some nuanced differences—to develop an increasingly granular knowledge base of questions and responses informed by user interactions.” (https://www.ibm.com/topics/chatbots)
When I asked the chatbot ChatGPT about AI, it responded that: