Do AIs “think”? The challenge of AI anthropomorphization

There has been an acceleration of artificial intelligence (AI) in the past year, especially in chatbot AIs. OpenAI’s ChatGPT became the fastest app to reach 100 million monthly active users, doing so in just two months. For reference, the runner-up TikTok took nine months, more than four times as long, to reach those numbers. ChatGPT’s release has sparked an AI race, pushing tech giants Google and Alibaba to release their own AI chatbots, Bard and Tongyi Qianwen respectively. ChatGPT marks a big change in the way we interface with machines: the use of human language. As chatbots become increasingly sophisticated, they will begin to exhibit more “agentic” behavior. In the technical report released alongside GPT-4, OpenAI defines “agentic” as the ability of AI to “accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning.” The combination of human language as an interface and increasingly “agentic” capabilities will make it very challenging for humans not to anthropomorphize chatbots, and AI in general. The anthropomorphization of AI may lead society to become more accepting of different use cases for AI, which could become problematic.

In a podcast interview with Kara Swisher, Sam Altman, the CEO of OpenAI, talked about naming their large language model (LLM) GPT-4 using a combination of “letters plus a number” to discourage people from anthropomorphizing the AI. This has not stopped other AI companies from giving their creations human names. Naming aside, it is almost impossible to avoid using human terms to describe AI. The use of the word “agentic”, with quotation marks, points to how the development of AI is butting up against our current vocabulary. We use words that are conventionally reserved for human minds. When chatbots take time to respond to prompts, it is difficult not to label that processing of information as some form of “thinking”. When a chatbot processes our prompt in the way we intended, it feels like it “understands” what we are communicating. The leading issues around AI similarly use human terminology. “Hallucination” occurs when a chatbot confidently provides a response that is completely made up. A huge area of AI research is dedicated to the “alignment” problem, which, according to Wikipedia, “aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles.” To the uninformed, this sounds very much like civic and moral education for students.

Humans tend toward anthropomorphism. We explain things in human terms, and anthropomorphism often helps communicate abstract ideas. Nature documentary hosts give names to every individual in a pride of lions, describe their fights as familial or tribal feuds, and dramatize the animals’ lives from a human perspective. The 18th-century Scottish philosopher Adam Smith used the term “invisible hand” to describe how self-interest can lead to beneficial social outcomes. Researchers have found that anthropomorphic language can help us learn and remember what we have learned. As AIs exhibit increasingly human-like capabilities, it will be a challenge for people not to anthropomorphize them, because we will use human-analogous words to describe them.

If we are not careful in delineating AI, which is ultimately a set of mathematical operations, from its human-like characteristics, we may become more accepting of using it for other purposes. One particularly tricky area is the use of AI as relational agents. The former U.S. Surgeon General Vivek Murthy called loneliness a public health “epidemic”, a view echoed by many. A 2019 survey by Cigna, a health insurer, found that 61 percent of Americans report feeling lonely. It is not hard to imagine people turning to conversational AI to relieve loneliness, which the US CDC reports is linked to serious health conditions in older adults. If there is demand for such services and money to be made, businesses will meet that demand, especially since most cutting-edge AI research is conducted by commercial enterprises. In fact, similar situations are already occurring. In Japan, owners of the Sony Aibo robot dog have been known to hold funerals for their robot companions. While the robot dogs are certainly not alive, they have touched the lives of their owners in a real way. An article in the San Francisco Chronicle reported on how a Canadian man created a chatbot modeled after his late fiancée to help with his grief. If chatbots make it easier for people to feel less lonely, would that lower the effort people put into forging real relationships with actual human beings, who may not be as acquiescent as their artificial companions? How would human society evolve in those circumstances? Technology has often been used as a wedge to divide society; would AI drive us further apart?

Besides the more overt issues that come with anthropomorphizing AI, there may also be less perceptible changes occurring right under our noses. Machines are tools that humans use to multiply and extend our own physical and mental efforts. Until now, the user interface between humans and machines was distinct from human communication. We turn dials and knobs, flick switches, and push buttons to operate physical machines. We drag a mouse, type on a keyboard, and use programming languages to get computers to do our bidding. Now, we use natural language to communicate with chatbots. For the first time in history, the medium in which we interact with a machine is the same as that of cultural communication. We may eventually come to a point where most natural language communication takes place not between humans, but with machines. How might that change language over time? How would that change the way humans interact with one another? In a TED talk, Greg Brockman, President of OpenAI, joked about saying “please” to ChatGPT, adding that it is “always good to be polite.” However, machines do not have feelings. Do we dispense with courtesies in our communication with AI? And if we continue to say “please” and “thank you”, are we unwittingly and subconsciously anthropomorphizing AI?

Perhaps we need to expand our vocabulary to distinguish between human and AI behavior. Instead of using quotation marks, perhaps we could add a prefix that signals the simulated nature of the observed behavior: sim-thinking, sim-understanding, sim-intentions. It does not quite roll off the tongue, but it may help us be more intentional in our descriptions. When an interviewer suggested that LLMs are “just predicting the next word”, Geoffrey Hinton, a pioneer of AI research, replied, “What do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict that next word, so you’re just autocomplete too.” Hinton got into AI research through cognitive science and wanted to understand the human mind. His response goes to show how little we understand about what happens in our own heads. Hopefully, AI can someday help us with this. The tables might turn, and we may come to see AI as our reflection. Perhaps we will find that sim-thinking and thinking are not that different after all, if we survive the AI upheaval, that is.
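To make Hinton’s point concrete, here is a toy sketch of next-word prediction, the operation at the heart of an LLM. The bigram model and miniature corpus below are invented for this illustration; real LLMs use neural networks trained on vast amounts of text, not frequency tables. But the generation loop has the same shape: predict one word, append it, and repeat.

```python
import random
from collections import Counter, defaultdict

# A made-up miniature corpus; real models train on trillions of words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For each word, count how often every other word follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Autocomplete", repeated: predict one word, append it, predict again.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "the dog sat on the mat . the cat"
```

Whether scaling this predict-and-append loop up to billions of parameters amounts to “understanding” is exactly the question Hinton’s reply raises.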
