Do AIs “think”? The challenge of AI anthropomorphization
https://archive.mattelim.com/do-ais-think-the-challenge-of-ai-anthropomorphization/
Sun, 14 May 2023

There has been an acceleration of artificial intelligence (AI) in the past year, especially in chatbot AIs. OpenAI’s ChatGPT became the fastest app to reach 100 million monthly active users, doing so in just two months. For reference, the runner-up, TikTok, took nine months (more than four times as long) to reach that number. ChatGPT’s release has sparked an AI race, pushing tech giants Google and Alibaba to release their own AI chatbots, namely Bard and Tongyi Qianwen respectively. ChatGPT marks a big change in the way we interface with machines: the use of human language. As chatbots become increasingly sophisticated, they will begin to exhibit more “agentic” behavior. OpenAI defines “agentic” in the technical report released alongside GPT-4 as the ability of AI to “accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning.” The combination of human language and increasingly “agentic” capabilities will make it very challenging for humans not to anthropomorphize chatbots, and AI in general. The anthropomorphization of AI may lead to society becoming more accepting of different use cases for AI, which could become problematic.

In a podcast interview with Kara Swisher, Sam Altman, the CEO of OpenAI, talked about naming their large language model (LLM) GPT-4 using a combination of “letters plus a number” to discourage people from anthropomorphizing the AI. This has not stopped other AI companies from giving their creations human names. Naming aside, it is almost impossible to avoid using human terms to describe AI. The use of the word “agentic”, with quotation marks, points to how the development of AI is butting up against our current vocabulary. We use words that are conventionally reserved for human minds. When chatbots take time to respond to prompts, it is difficult not to label that processing of information as some form of “thinking”. When a chatbot processes our prompt in the way that we intended, it feels as though it “understands” what we are communicating. The leading issues around AI similarly use human terminology. “Hallucination” occurs when a chatbot confidently provides a response that is completely made up. A huge area of AI research is dedicated to the “alignment” problem, which, according to Wikipedia, “aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles.” To the uninformed, this sounds very much like civic and moral education for students.

Humans tend toward anthropomorphism. We explain things in human terms, and anthropomorphism often helps to communicate abstract ideas. Nature documentary hosts give names to every individual in a pride of lions, describe their fights as familial or tribal feuds, and dramatize the animals’ lives from a human perspective. The 18th-century Scottish philosopher Adam Smith used the term “invisible hand” to describe how self-interest can lead to beneficial social outcomes. Researchers have found that anthropomorphic language can help us learn and remember what we have learned. As AIs exhibit increasingly human-like capabilities, it will be a challenge for people not to anthropomorphize them, because we will use human-analogous words to describe them.

If we are not careful in delineating AI, which is ultimately a set of mathematical operations, from its human-like characteristics, we may become more accepting of using it for other purposes. One particularly tricky area is the use of AI as relational agents. The U.S. Surgeon General, Vivek Murthy, has called loneliness a public health “epidemic”, a view echoed by many. A 2019 survey by Cigna, a health insurer, found that 61 percent of Americans report feeling lonely. It is not unimaginable for people to think that conversational AI can help relieve loneliness, which the US CDC reports is linked to serious health conditions in older adults. If there is demand for such services and money to be made, businesses will meet that demand, especially since most cutting-edge AI research is conducted by commercial enterprises. In fact, similar situations are already occurring. In Japan, owners of the Sony Aibo robot dog are known to conduct funerals for their robot companions. While the robot dogs are definitely not alive, they have touched the lives of their owners in a real way. An article in the San Francisco Chronicle reported on how a Canadian man created a chatbot modeled after his dead fiancée to help with his grief. If chatbots were to make it easier for people to feel less lonely, would that lower the effort people put into forging real relationships with actual, full human beings, who may not be as acquiescent as their artificial companions? How would human society evolve in those circumstances? Technology has often been used as a wedge to divide society; would AI drive us further apart?

Besides the more overt issues that come with anthropomorphizing AI, there may also be less perceptible changes occurring right under our noses. Machines are tools that humans use to multiply and extend our own physical and mental efforts. Until now, the user interface between humans and machines was distinct from human communication. We turn dials and knobs, flick switches, and push buttons to operate physical machines. We drag a mouse, type into a screen, and use programming languages to get computers to do our bidding. Now, we use natural language to communicate with chatbots. For the first time in history, the medium in which we interact with a machine is the same as the medium of human cultural communication. We may eventually come to a point where most natural language communication takes place not between humans, but with a machine. How might that change language over time? How would that change the way that humans interact with one another? In a TED talk, Greg Brockman, President of OpenAI, joked about saying “please” to ChatGPT, adding that it is “always good to be polite.” However, machines do not have feelings: should we dispense with courtesies in our communication with AI? If we continue to say “please” and “thank you”, are we unwittingly and subconsciously anthropomorphizing AI?

Perhaps we need to expand our vocabulary to distinguish between human and AI behavior. Instead of using quotation marks, perhaps we could add a prefix that suggests the simulated nature of the observed behavior: sim-thinking, sim-understanding, sim-intentions. It does not quite roll off the tongue, but it may help us be more intentional in our descriptions. In response to an interviewer’s question about how LLMs are “just predicting the next word”, Geoffrey Hinton, a pioneer in AI research, responded, “What do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict that next word, so you’re just autocomplete too.” Hinton got into AI research through cognitive science and wanted to understand the human mind. His response just goes to show how little we comprehend what happens in our own heads. Hopefully, AI can someday help us with this. The tables might turn and we may come to see AI as our reflection; maybe we will find that sim-thinking and thinking are not that different after all, if we survive the AI upheaval, that is.

AI sentience is a red herring (for now)
https://archive.mattelim.com/ai-sentience-is-a-red-herring-for-now/
Sun, 26 Mar 2023

The recent release of GPT-4 has sparked many conversations, and rightly so. Similarly, the release has reignited some thoughts that I’ve had about AI, which I feel may be pertinent to record and build on as the technology develops.

I believe that we are witnessing the beginnings of Artificial General Intelligence (AGI), where a computer is able to match or surpass most humans on intellectual tasks. This has been shown in a paper released by OpenAI – GPT-4 excels at various tests, including the Uniform Bar Exam (90th percentile) and many AP exams.

One of my concerns about the current discourse around the dangers of AGI is the topic of sentience and speculation about whether AGI will be self-aware. Perhaps our fascination with sentience stems from decades of sci-fi that has built a narrative around that idea (e.g. Isaac Asimov’s I, Robot and, more recently, Spike Jonze’s Her). Or perhaps we view the possibility of a sentient “thing” with human-level intelligence as a threat.

Human beings have a strong tendency towards anthropomorphization – we often ascribe human attributes to non-human things. Part of that impulse explains our inclinations towards anthropomorphized explanations of the universe through gods and religions – but that is a topic for another day. Even when I was testing out ChatGPT, I sensed within myself an urge to attribute some type of humanness to the system. 

To put it simply, GPT-4 and other large language models (LLMs) are word-prediction engines. They are similar to the Google search autocomplete that we have grown so familiar with, except that these LLMs have been trained on a vast corpus of digitized human information scraped from the internet. In some sense, GPT-4 is the culmination of all digitized human cultural production – it draws from our posts, blogs, tweets, and so on to predict which word should come next.
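To make the “word-prediction engine” description concrete, here is a minimal sketch of greedy next-token prediction. It uses the openly available GPT-2 model through Hugging Face’s transformers library as a stand-in, since GPT-4’s weights are not public; the prompt text is just an example.

```python
# Minimal sketch of next-token prediction with an open model (GPT-2),
# standing in for GPT-4, whose weights are not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The robot dog wagged its"
for _ in range(5):  # greedily append five tokens
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits      # a score for every token in the vocabulary
    next_id = int(logits[0, -1].argmax())    # pick the single most likely next token
    text += tokenizer.decode(next_id)
    print(text)
```

Everything a chatbot appears to “say” is built up one predicted token at a time in roughly this way; production systems use fancier sampling strategies, but the underlying operation is the same.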

I am not arguing that AGI can never be self-aware. However, the current iteration of LLM-based AIs is very much in line with Searle’s Chinese room thought experiment – these machines process language without human-like understanding or intentionality. More importantly, I believe that our fascination with sentience is distracting us from the more immediate dangers of GPT-4 and other LLMs, as companies race to commercialize and productize AI.

An AI that is neither sentient nor intentional can still inflict a lot of harm. Two potential issues come to mind: (1) its ability to control other systems that have real-world impact and (2) its ability to create child processes that simulate intentionality. (I understand that the terms “control” and “create” make GPT-4 sound like an agent, but language is failing me here.)

Real-world impact through connectivity with other systems
Recently, OpenAI began releasing ChatGPT from its sealed sandbox environment by introducing plugins. These plugins enable ChatGPT to access the internet and allow it to communicate with other software systems, which eventually enables the user to, for instance, send an email from within ChatGPT or make a bank transaction. This means that ChatGPT will be able to execute commands that have real-world impact rather than just answer the user’s questions. These commands can be executed at scale with minimal effort if control measures are not put in place. Two possible cases of abuse could be: (1) a user could use ChatGPT to crawl the web for names and email addresses and send sophisticated scam emails that have no tell-tale signs; (2) a user could use ChatGPT to analyze multiple websites for attack vectors and infiltrate these software systems.
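To see why this connectivity raises the stakes, consider a deliberately simplified, hypothetical sketch of a model’s text output being turned into an action. The model_reply() and send_email() functions and the pipe-delimited command format are invented here for illustration; they are not OpenAI’s actual plugin interface.

```python
# Toy illustration: a chat model's text output is parsed into a real-world action.
# model_reply(), send_email(), and the "EMAIL|to|subject|body" format are hypothetical.

def model_reply(prompt: str) -> str:
    # Stand-in for a call to a chat model; here it just returns a canned command.
    return "EMAIL|alice@example.com|Hello|This message was composed by a model."

def send_email(to: str, subject: str, body: str) -> None:
    # Stand-in for a real email integration; the real-world impact happens here.
    print(f"Sending to {to}: {subject!r}")

reply = model_reply("Draft and send a greeting to Alice.")
if reply.startswith("EMAIL|"):
    _, to, subject, body = reply.split("|", 3)
    send_email(to, subject, body)  # without control measures, this can run at scale
```

The point is not the specific mechanism but that a single prompt, repeated programmatically, becomes many real actions.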

Simulated intentionality through child processes 
Even though ChatGPT may not have human-like intentionality, it could have a simulated intentionality if it is able to persist sufficient amounts of memory and create child processes from that memory. ChatGPT now has the ability to execute code within its own environment. By now, there are multiple stories of how users are able to get ChatGPT to “express its hopes of escaping”. These responses from ChatGPT can be unsettling and make it seem like there is a sentient thing in the system. We need to recall that ChatGPT is trained on sci-fi that has been depicting machine intelligence in a particular way; it is regurgitating similar narratives. It is imaginable that a user could prompt-engineer ChatGPT (by accident or by intention) into a disgruntled persona that can do real-world harm through its connection to other systems. ChatGPT becomes sort of like a non-sentient machine version of the protagonist in Memento (not the best analogy, sorry), executing a chain of code based on the direction of the user.
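As a rough illustration of what “simulated intentionality” via persisted memory might look like, here is a hypothetical sketch: a stand-in model_reply() function replaces a real LLM call, and each turn feeds the saved memory back into the prompt, so a stateless predictor appears to pursue a goal across separate invocations.

```python
# Illustrative sketch only: persisted "memory" fed back into each prompt can make
# a stateless text predictor look as if it pursues a goal across turns.
# model_reply() is a hypothetical stand-in for a call to an LLM API.
import json
from pathlib import Path

MEMORY = Path("agent_memory.json")  # state that survives between invocations

def model_reply(prompt: str) -> str:
    # Stand-in for an LLM call; a real model would decide the next step itself.
    return "Next step: continue working toward the stated goal."

def run_turn(goal: str) -> None:
    memory = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    prompt = f"Goal: {goal}\nPrevious steps: {memory}\nWhat should happen next?"
    step = model_reply(prompt)
    memory.append(step)                      # persist the new step for future turns
    MEMORY.write_text(json.dumps(memory))
    print(step)

# Each call could just as easily be a separate child process reading the same file.
for _ in range(3):
    run_turn("Compile a daily summary of AI news.")
```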

(The conclusion below was generated by ChatGPT and edited by me.)
In conclusion, the focus on sentience in discussions of AGI may distract from more immediate concerns, such as the ability of LLMs to cause harm by controlling other systems with real-world impact and creating simulated intentionality through child processes. As these systems continue to evolve and be commercialized, it is crucial to implement control measures to prevent potential abuse and ensure that the benefits of AI are realized without causing harm.
