Do AIs “think”? The challenge of AI anthropomorphization
https://archive.mattelim.com/do-ais-think-the-challenge-of-ai-anthropomorphization/
Sun, 14 May 2023

There has been an acceleration of artificial intelligence (AI) development in the past year, especially in chatbot AIs. OpenAI’s ChatGPT became the fastest app to reach 100 million monthly active users, doing so within a short span of two months. For reference, the runner-up TikTok took nine months — more than four times as long — to reach that number. ChatGPT’s release has sparked an AI race, pushing tech giants Google and Alibaba to release their own AI chatbots, Bard and Tongyi Qianwen respectively. ChatGPT marks a big change in the way we interface with machines — the use of human language. As chatbots become increasingly sophisticated, they will begin to exhibit more “agentic” behavior. OpenAI defines “agentic” in the technical report released alongside GPT-4 as the ability of AI to “accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning.” The combination of human language as an interface and increasingly “agentic” capabilities will make it very challenging for humans not to anthropomorphize chatbots and AI in general. The anthropomorphization of AI may lead society to become more accepting of different use cases for AI, which could become problematic.

In a podcast interview with Kara Swisher, Sam Altman, the CEO of OpenAI, talked about naming their large language model (LLM) GPT-4 using a combination of “letters plus a number” to discourage people from anthropomorphizing the AI. This has not stopped other AI companies from giving their creations human names. Naming aside, it is almost impossible to avoid using human terms to describe AI. The use of the word “agentic”, with quotation marks, points to how the development of AI is butting up against our current vocabulary. We use words that are conventionally reserved for human minds. When chatbots take time to respond to prompts, it is difficult not to label that processing of information as some form of “thinking”. When a chatbot is able to process our prompt in the way that we intended, it feels like it “understands” what we are communicating. The leading issues around AI similarly use human terminology. “Hallucination” occurs when a chatbot confidently provides a response that is completely made up. A huge area of AI research is dedicated to the “alignment” problem, which, according to Wikipedia, “aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles.” To the uninformed, this sounds very much like civic and moral education for students.

Humans tend toward anthropomorphism. We explain things in human terms, and anthropomorphism often helps to communicate abstract ideas. Nature documentary hosts give names to every individual in a pride of lions, describe their fights as familial or tribal feuds, and dramatize the animals’ lives from a human perspective. The 18th-century Scottish philosopher Adam Smith used the term “invisible hand” to describe how self-interest can lead to beneficial social outcomes. Researchers have found that anthropomorphic language can help us learn and remember what we have learned. As AIs exhibit increasingly human-like capabilities, it will be a challenge for people not to anthropomorphize them because we will use human-analogous words to describe them.

If we are not careful in delineating AI, which is ultimately a set of mathematical operations, from its human-like characteristics, we may become more accepting of using it for other purposes. One particularly tricky area is the use of AI as relational agents. The U.S. Surgeon General, Vivek Murthy, has called loneliness a public health “epidemic”, a view echoed by many. A 2019 survey by Cigna, a health insurer, found that 61 percent of Americans report feeling lonely. It is not unimaginable for people to think that conversational AI can help relieve loneliness, which the US CDC reports is linked to serious health conditions in older adults. If there is demand for such services and money to be made, businesses will meet that demand, especially since most cutting-edge AI research is conducted by commercial enterprises. Similar situations are already occurring. In Japan, owners of the Sony Aibo robot dog are known to conduct funerals for their robot companions. While the robot dogs are definitely not alive, they have touched the lives of their owners in a real way. An article in the San Francisco Chronicle reported on how a Canadian man created a chatbot modeled after his dead fiancée to help with his grief. If chatbots make it easier for people to feel less lonely, would that lower the effort that people put into forging real relationships with actual human beings, who may not be as acquiescent as their artificial companions? How would human society evolve in those circumstances? As technology has often been used as a wedge to divide society, would AI drive us further apart?

Besides the more overt issues that come with anthropomorphizing AI, there may also be less perceptible changes occurring beneath our noses. Machines are tools that humans use to multiply and extend our own physical and mental efforts. Until now, the user interface between humans and machines was distinct from human communication. We turn dials and knobs, flick switches, and push buttons to operate physical machines. We drag a mouse, type into a screen, and use programming languages to get computers to do our bidding. Now, we use natural language to communicate with chatbots. For the first time in history, the medium in which we interact with a machine is the same as that of cultural communication. We may eventually come to a point where most natural language communication takes place not between humans, but with a machine. How might that change language over time? How would that change the way that humans interact with one another? In a TED talk, Greg Brockman, President of OpenAI, joked about saying “please” to ChatGPT, adding that it is “always good to be polite.” However, the fact is that machines do not have feelings — do we dispense with courtesies in our communication with AI? If we continue to say “please” and “thank you”, are we unwittingly and subconsciously anthropomorphizing AI?

Perhaps we need to expand our vocabulary to distinguish between human and AI behavior. Instead of using quotation marks, perhaps we could add a prefix that suggests the simulated nature of the observed behavior: sim-thinking, sim-understanding, sim-intentions. It does not quite roll off the tongue, but it may help us be more intentional in our descriptions. In response to an interviewer’s question about how LLMs are “just predicting the next word”, Geoffrey Hinton, a pioneer in AI research, responded, “What do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict that next word, so you’re just autocomplete too.” Hinton got into AI research through cognitive science and wanted to understand the human mind. His response goes to show how little we comprehend what happens in our heads. Hopefully, AI can someday help us with this. The tables might turn and we may come to see AI as our reflection — maybe we will find out that sim-thinking and thinking are not so different after all — if we survive the AI upheaval, that is.

Limitations to understanding (pt. 2): Mind
https://archive.mattelim.com/limitations-to-understanding-pt-2-mind/
Sun, 06 Jun 2021

Writer’s note: this is part two of a three-part essay. Click here for part one.

For the second part of this essay, I will be looking at the limitations of the mind in facilitating the processes of knowing and understanding. To narrow the scope of this part, I will limit the discussion to mental processes at the individual level and how our minds process and extend information. That said, this essay can only visit these topics in a cursory manner, and some of them will be explored in greater detail in future essays. Aspects of the mind that will be considered are its relationship to the senses, conscious mental phenomena (like rationality and, more broadly, cognition), and less conscious ones (like subjectivity).

As mentioned in part one of this essay, the senses are the connection between the outer world and inner experience. Without such inputs, there are no stimuli for our minds to process. If the mind were a food processor, the senses would be akin to the opening at the top of the machine, allowing food to be put into the processing chamber, where the magic happens. Without the opening, the food processor is as good as a collection of metal and plastic in a sealed vitrine, stripped of its functional purpose, almost like the objects in works of art by Joseph Beuys or Jeff Koons. Similarly, the mind will not be able to work its magic without information provided by the senses. Consequently, the mind’s ability to process and create mental representations is limited by the modality of our sensory experiences. If we were to try to imagine a rainforest in our mind, we would likely visualize trees and animals or perhaps recall the sounds of insects and streams. However, we will not be able to mentally recreate it in terms of its magnetic field, which some other animals are able to sense.

Rationality

One possible escape path from the limitations of sensory experience is rationality. To be rational is to make inferences and come to conclusions through reason, which is mainly an abstract process (as opposed to concrete sensory experience). One definition of reason is to “think, understand, and form judgments logically”. Through reason, humans can identify causal relationships through observation and formulate theories to extrapolate new knowledge; this process is also known as inductive reasoning. Theories of causality are the basis of science, which has enabled us to build the modern world. However, we often make mistakes with causation. One type of error is the confusion between correlation and causation. An often-used example is the correlation between ice cream sales and the homicide rate. Ice cream does not cause homicides, nor do homicides cause increased interest in the dessert. The correlation is likely explained by hot weather instead. The Latin technical term for such causal fallacies is non causa pro causa (literally, “non-cause for cause”). Our thinking is riddled with fallacies — so many that I cannot cover even a fraction of them in this essay.

The notion of causality itself has been called into question by the Scottish Enlightenment philosopher David Hume. He pointed out that causality is not something that can be observed like the yellow color of a lemon or the barking sound made by a dog. When a moving billiard ball hits a stationary billiard ball, we may conclude that the first caused the second to move. If we examine our experience more closely, we realize that we have made two observations: the first ball moving, followed by the second. The causal relationship connecting them, however, is an inference imposed by our mind. Our senses can easily be fooled by magnetically controlled billiard balls that sufficiently replicate our prior experiences, in which case our inference would be completely wrong. Hume points out that what we usually regard as causal truths are often just conventions (also referred to as customs or habits) that have hitherto worked well. We are creatures of habit — we do not reason through every single situation we are faced with — most of us would very much prefer to get on with life by relying on a set of useful assumptions. However, we have to be aware of these shortcuts that we are taking.

Most definitions of the word “reason” include the term “logic”. The most rigorous type of logic known to humans is formal logic, which is the foundation of many fields, such as mathematics, computer science, linguistics, and philosophy. Logic provides practitioners across these fields with watertight deductive systems with which true statements can be properly inferred from prior ones. A popular form of logical argument is the syllogism (although it is antiquated and no longer used in academic logic). Here is an example: All cats are animals. Jasper is a cat. Thus, Jasper is an animal. Research has shown that people are generally more accurate at deducing logical conclusions when the problems are presented as Venn and Euler diagrams instead of words and symbols. This suggests that even for such seemingly abstract and symbolic mental tasks, our minds find visual representations more intuitive and comprehensible. It is for the same reason that humans find it so difficult to understand any dataset that has more than three variables. We are bounded by three dimensions not only physically but also mentally — at most, we can create a chart with three axes (x, y, z), but we are simply unable to envision four or more dimensions. This is also why we can know about a tesseract (or any higher-dimensional hypercube) but can never picture it and therefore never fully understand it. While we are on the subject of diagrams and logic — did you know that a four-circle Venn diagram cannot show all possible intersections of four sets? The closest complete representations require spheres (3D) or ellipses (2D). Even more astonishing are the Venn diagrams for higher numbers of sets.
Perhaps the comprehension of abstract logic does not require these concrete diagrams, but without them such ideas are far less understandable, especially for people who are not logicians. Reason has enabled us to create machine learning models and scientific theories that utilize high-dimensional space, but we are ultimately only able to grasp them through low-dimensional analogs, which to me suggests that complete understanding is impossible.

A fascinating development has occurred in logic in the past century — we now know through logic that there are things that cannot be known through logic. In the early 20th century, David Hilbert, a mathematician who championed a philosophy of mathematics known as formalism, proposed what became known as Hilbert’s program, which sought to address the foundational crisis of mathematics. Simply put, the program held that mathematics can be wholly defined by itself without any internal contradictions. More generally, Hilbert was pushing back against the notion that there will always be limits to scientific knowledge, epitomized by the Latin maxim “ignoramus et ignorabimus” (“we do not know and will not know”). Hilbert famously proclaimed in 1930, “Wir müssen wissen – wir werden wissen” (“We must know — we will know”). Unfortunately for Hilbert, just a day before he said that, Kurt Gödel, then a young logician, presented the first of his now-famous incompleteness theorems. At the risk of oversimplifying, the theorems essentially proved that Hilbert’s program (as originally stated) is impossible — mathematics can neither be completely proven nor be proven free of contradictions. In 1936, Alan Turing (the polymath who later helped break the Enigma cipher) proved that the halting problem cannot be solved, which paved the way for the discovery of other undecidable problems in mathematics and computation. (Veritasium/Derek Muller made a great explanatory video on this topic.)

Logic (especially the formal variant) is a specific mental tool. It has limited use in our everyday lives, where we are often faced not only with incomplete information but also questions that cannot be answered by logic alone. Most of us are not logical positivists — we believe that there are meaningful questions beyond the scope of science and logic. That is why we turn to other mental tools in an attempt to figure out the world around us.

You may have noticed that I have used metaphors to describe the relationship between the senses, the mind, and culture twice in this essay. I first compared it to a computer and later invoked the somewhat absurd analogy of a food processor. Metaphors work by drawing specific similarities between something incomprehensible and something that is generally better understood. Language is not only used literally; it is often used figuratively through figures of speech. Metaphors belong to a subcategory of figures of speech called tropes, which are “words or phrases whose contextual meaning differs from the manner or sense in which they are ordinarily used”. While tropes like analogies, metaphors, and similes are used to make certain aspects of an object or idea more relatable, they can ironically also cause us to misunderstand or overconstrue the very thing that we are trying to explain. If I were to take my earlier metaphor out of context — that the senses are to the mind as the opening is to a food processor — what am I really saying? That the mind reduces sense perceptions into smaller bits? Or that the senses are just passive openings to the outside world? Metaphors can easily break down when overextended beyond their intended use. This finicky aspect of metaphors was discussed by the poet Robert Frost in a 1931 talk at Amherst College, where he brought up the metaphor of comparing the universe with a machine. Later in the talk, he states that “All metaphor breaks down somewhere. That is the beauty of it.” While a metaphor can clarify a thought at a specific moment, it can never explain the idea in totality.

This substitutive or comparative approach to thinking extends beyond metaphors and related rhetorical devices. We often approximate understanding by substituting an immeasurable or directly unobservable phenomenon with an observable one that we deem close enough. An example of this is proxies, which I explored in a previous essay. Another close cousin is mental models, which attempt to approximate the complex real world using a simplified set of measurable data connected through theory. General examples are statistical models and scientific models; more specifically applied ones are atmospheric models, used to make meteorological predictions; economic models, which have been criticized time and again for their unreliability; and political forecasting models, which delivered two historic upsets in the UK and USA in 2016. The statistician George Box said that “All models are wrong, but some are useful”, a view that has been widely echoed since. Models may get us close to understanding our world but are unlikely to ever fully encompass the complexity of reality. A visual model that we use every day without a second thought is the map. As maps are 2D projections of 3D space, they will never accurately represent the earth. The Mercator projection that we are most familiar with (used on Google Maps) is egregiously inaccurate in representing the relative sizes of geographical areas. This topic has been explored by many (National Geographic, Vox, and TED). In particular, some have pointed out how such misrepresentations can undermine global equity.

Another way that the mind approximates the understanding of complexity is through heuristics. The American Psychological Association (APA) defines heuristics as “rules-of-thumb that can be applied to guide decision-making based on a more limited subset of the available information.” The study of heuristics in human decision-making was developed by the psychologists Amos Tversky and Daniel Kahneman. Kahneman discusses many of their findings in his bestseller Thinking, Fast and Slow, including various mental shortcuts that the mind takes to arrive at a satisfactory decision. One example is the availability heuristic, which “relies on immediate examples that come to a given person’s mind when evaluating a specific topic.” Are there more words that start with the letter “t” or that have “t” as the third letter? We may be inclined to pick the former since it is difficult to recall the latter. However, a quick Google search will show that there are many more words with “t” as the third letter (19,711) than as the starting letter (13,919). This example shows that our understanding is limited by how our mind recalls ideas and objects by specific attributes — in this case, how we remember words by their starting letters. Tversky and Kahneman’s work was inspired by earlier research done by the economist and cognitive psychologist Herbert A. Simon. Simon coined the term “bounded rationality”, the notion that under time and mental-capability constraints, humans seek a satisfactory solution rather than an optimal one that takes into account all known factors that may affect the decision.
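The letter-counting comparison above is easy to express mechanically. Here is a minimal sketch; the word list is a tiny made-up sample for illustration, not a real dictionary (with a full dictionary, the “t”-as-third-letter count comes out ahead, as noted above):

```python
# Count words starting with "t" versus words with "t" as the third letter.
# The word list is a small illustrative sample, not a real dictionary.
words = ["cat", "mat", "water", "bottle", "turtle",
         "tree", "time", "sit", "note", "title"]

starts_with_t = sum(w.startswith("t") for w in words)
t_as_third = sum(len(w) >= 3 and w[2] == "t" for w in words)

print(starts_with_t, t_as_third)  # 4 7 — even in this sample, "t"-third wins
```

The “t”-initial words are the easy ones to recall, yet the third-letter condition quietly matches more of the list — which is exactly the recall asymmetry the availability heuristic exploits.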

When faced with a complex world, our minds simplify phenomena into elements that we can understand. Kahneman states that “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” He calls this process attribute substitution and believes that it causes people to use heuristics and other mental shortcuts. More broadly, simplification is a key pillar in the way that we currently approach our world; this attitude is known as reductionism. It is defined as an “intellectual and philosophical position that interprets a complex system as the sum of its parts.” However, in this process of reduction, holistic aspects and emergent properties are overlooked. Critics have therefore referred to this approach with the pejorative nickname “fragmentalism”. The components that comprise our understanding have gone through a translation from complexity to simplicity. It is not ridiculous to suggest that things are lost in that translation, thus impairing our understanding.

Non-rational processes

Thus far, we have mostly discussed rational (both formal and informal) and conscious mental processes that may inhibit understanding. We will now take a look at how the contrary can do the same. There are two non-rational phenomena that we can explore: intuition and emotion. 

Intuition refers to “the ability to understand something instinctively, without the need for conscious reasoning.” A more colloquial definition is a “gut feeling based on experience.” While intuition is useful, and has been celebrated most notably by the writer Malcolm Gladwell in his book Blink, it has also been shown to create flawed understanding. Herbert Simon once stated that intuition is “nothing more and nothing less than recognition” of similar prior experience. In his research, Daniel Kahneman found that the development of intuition has three prerequisites: (1) regularity, (2) a lot of practice, and (3) immediate feedback. Based on these requirements, Kahneman believes that the intuitions of some “experts” are suspect. This was shown by research done by the psychologist James Shanteau, who identified several occupations in which experienced professionals do not perform better than novices, such as stockbrokers, clinical psychologists, and court judges. In scenarios where intuition cannot be developed, it becomes merely a mental illusion. Kahneman cites the now well-known finding in investing that index funds regularly outperform funds managed by highly paid specialists. Intuition can also lead us away from correctly understanding the world. This can be demonstrated by the field of probability, which can be very counterintuitive. The Monty Hall problem is a classic example of how our intuition, no matter how apparent it seems, can fool us. To me, the term “intuitive understanding” may be an oxymoron or a misnomer because our intuitions are not understood even by ourselves. One way to demonstrate understanding is through explanation. A gut feeling may compel us to act in a certain way, but crucially we are not able to explain why. If we are, then it is no longer intuition and resembles rationality instead. Looked at this way, intuition is good for taking action and making quick judgments but at best provides a starting hypothesis for actual understanding.
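Where intuition fails, simulation can stand in for it. Here is a minimal Monte Carlo sketch of the Monty Hall game (the function name and structure are my own, for illustration):

```python
import random

def monty_hall(switch, trials=100_000):
    """Play the Monty Hall game `trials` times; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial choice
        # The host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d not in (pick, car))
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials
```

Running `monty_hall(True)` lands near 2/3 while `monty_hall(False)` stays near 1/3 — against the intuitive 50/50 answer. The simulation settles in seconds what intuition gets confidently wrong.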

Some may argue that emotion does not belong in a discussion about the mind as we tend to associate the mind with thoughts and not feelings. However, we cannot deny that emotion shapes our thoughts and vice versa. Emotion can move us to seek knowledge and understanding but can similarly deter us from them. When we are anxious, we may rush to conclusions without complete understanding. Fear can cause us to accept superstitions that undermine factual understanding. Sometimes, we may refuse to understand something if it can cause us to have a fundamental shift in the way we approach the world (I touched on this in a previous essay). This attitude is summed up by the saying, “ignorance is bliss.” The relationship between emotion and understanding often extends into wider society and will be revisited in the next part of this essay when I discuss culture.

Less conscious phenomena

There are less conscious parts of our mind that impede understanding. There seems to be an inherent structure to our mind and consciousness, which could limit our ability to understand. Historically, there have been two methods of approaching this: the more philosophically inclined phenomenology and the more empirical study of cognitive science. One idea from phenomenology is the notion of intentionality, which “refers to the notion that consciousness is always the consciousness of something.” This suggests that we cannot study consciousness directly, but only through how it conceives other things. An analogy for this is the light coming out of a headlamp — I am not able to see it directly since it is strapped on my head, but I can understand its qualities (e.g. color and brightness) through the objects that it illuminates. Therefore, we may never be able to fully understand our consciousness. From cognitive science, there are concepts like biases and pattern recognition. Cognitive biases refer to “systematic pattern[s] of deviation from norm or rationality in judgment”; there is a long list of them. Biases can lead us to seek only information that confirms our prior knowledge, which may be wrong in the first place. The same information, when framed differently, can also appear to us as fundamentally distinct, which seems to reveal a glitch in our understanding. Our mind is also constantly recognizing patterns in daily life; it is a way in which the mind incorporates new experiences into prior ones. Pattern recognition is also fundamental to essential aspects of being human, like recognizing faces, using language, and appreciating music. However, our minds can erroneously notice patterns where there are none, a condition known as apophenia. We experience an everyday variant of this, called pareidolia, whenever we perceive a face in an otherwise faceless object. This can cause us to misunderstand reality, a dangerous example being conspiracy theories that lead people to believe in absolute nonsense.

The mind is always positioned from a subjective perspective. We will never be able to think outside of ourselves. Our personal experiences and temperament can lead us to very different understandings of the world. The sociologist Max Weber pointed out that “All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view.” How do we determine the accuracy of our understanding when there are multiple perspectives? Given the unfeasibility of capturing every unique perspective, can we claim to understand subjective experiences? Subjectivity also suggests that there is a limit to the understanding of psychological phenomena. Many subtopics that we have discussed in this essay — senses, rationality, intuition, emotions — are ultimately internal experiences that cannot be confirmed by third-person objective observation. When someone says that they feel happy and another person says that they feel the same, would we ever know if they are experiencing identical feelings?

Similar to how our senses are limited, our minds likely have constraints — we will never know what we cannot know. Our minds are the result of hundreds of millions of years of evolution for survival; from this perspective, the ability to know and understand seems to be a fortunate side effect. As far as we can tell, human beings represent the universe’s best attempt at understanding itself, but it is not difficult to fathom our mental capacity as being just a point on a long continuum. While we will continue to know and understand more, we should never let hubris deceive us into thinking that our minds will be able to understand all that there is.

Writer’s note: this is part two of a three-part essay. Click here for part three.

Limitations to understanding (pt. 1): Senses
https://archive.mattelim.com/limitations-to-understanding-pt-1-senses/
Sun, 28 Mar 2021

In 1758, the father of modern taxonomy, Carl Linnaeus, gave the name “Homo sapiens” to our species. The term means “wise man” in Latin. We have mostly stuck with the name, although competing ones have been offered by various people in the years since. Linnaeus purportedly christened us “wise” because of our ability to know ourselves. For him, this quality of self-awareness and speech distinguished us from other primates. Therefore, our immediate understanding of ourselves based on this name is that we are capable of acquiring experience, knowledge, and good judgment. Our intelligence and capacity to understand the world around us seem to be among the defining characteristics that set us apart from our animal cousins. Albert Einstein once said that “The eternal mystery of the world is its comprehensibility… The fact that it is comprehensible is a miracle.” This seeming “comprehensibility” can sometimes cause us to believe that our current understanding of the world must be the only correct view. I am not trying to deny or belittle the knowledge that has been gathered by the collective human enterprise, nor its benefits. However, I think it is necessary to constantly humble ourselves with the unknown and the unknowable — the pursuit of new knowledge lies not in the answers we already have, but in the questions they lead to. This essay explores the limitations of our senses, mind, and culture in our efforts to know and understand. Knowing and understanding both describe processes of internalization, with the latter suggesting deeper assimilation. The two processes will be differentiated at points in the essay where the distinction is pertinent. Within philosophy, this discussion falls under epistemology.

To use the analogy of a computer, our senses are the hardware, our culture is the software, and our mind is the operating system mediating the two. From an anatomical standpoint, humans have not changed for about 200,000 years. For most people, our senses are unchanging biological facts, although we may lose them partially or completely through accidents or plain senescence. Senses form the connection between our internal and external worlds. Without the ability to see, hear, touch, smell, and taste, our mind is cut off from our environment, breaking the feedback loop through which we perceive our actions. Imagine the simple task of eating with a spoon without any of your senses — not only would the task be impossible to accomplish, but the premise of taking any action at all would be completely absurd since there is no experience to begin with. This shows how fundamental our senses are to our being.

While our senses are reliable enough for us to conduct our everyday lives, we know that they are by no means transparent communicators of objective reality. Perceptual illusions show that our senses can often be fooled. (It is important to note here that perception is not exclusively within the domain of senses but emerges from the interaction between senses and the mind.) In 2015, “the dress” made huge waves around the internet, dividing netizens into two camps (as the internet does). Half of the internet argued that the image depicted a black and blue dress while the other believed that it was white and gold. (Spoiler alert: it is the former.) In 2018, a similar meme rocked the online world. Instead of an optical illusion, it was an auditory one, known as “Yanny or Laurel”. It got the internet similarly divided. These illusions are not new, however, and are generally known as ambiguous images. The classic “rabbit-duck” illusion was published in a German humor magazine in 1892.

Our vision is the most studied of the senses, possibly due to humans’ outsized reliance on sight. This has led to quite an exhaustive list of optical illusions over the years. Josef Albers, a renowned artist-educator, published his insights on color in his seminal book Interaction of Color in 1963. His theories were informed by Gestalt psychology, which he encountered during his years at the now-legendary Bauhaus. When I first read the book in art school in 2013, I was struck by how timeless it was. In it, Albers discusses how color “is almost never seen as it really is” and how “color deceives continually.” Through visual examples, he shows the phenomenon of simultaneous contrast, in which an identical color is perceived differently when placed against different colored backgrounds. Besides color and tone, our eyes can also misperceive relative sizes; examples include the Ebbinghaus illusion and Shepard tables.

Besides perceptual effects of ambiguity and relativity, our perception can also be altered. A few years ago, I tried a miracle berry, which is a fruit that contains the taste modifier miraculin. Eating this berry causes sour foods to taste sweet. Hallucinogens contain psychoactive compounds that cause people to have perceptions in the absence of real external stimuli (i.e. see objects that do not actually exist). Such perceptual alterations may also be a result of illness or physiological processes and responses. Hallucination is a known symptom of Parkinson’s disease and can also be experienced by people right before falling asleep, a phenomenon known as hypnagogia. Research has also shown that our perception of time can change when we experience danger, possibly due to the adrenaline rush caused by the fight-or-flight response. In popular culture, this is sometimes called the slow-mo effect (a metaphor borrowed from video editing).

In some scenarios, one sense can override another. I first learned about the McGurk effect in a cognitive science class in college. I encourage you to try it for yourself before you continue reading. Go to this YouTube video and click play, but do not watch the video. Instead, just listen and try to identify the sound being spoken. (The video is about one minute long.) Now, play the video again, this time listening while watching. You may notice that the sound seems to conform to the mouth shape of the person speaking. That is, the sound we perceive has changed because of a visual inconsistency: our sight has overridden our hearing to produce a different perception of the same sound. Another instance of this is best summed up in a well-known adage among chefs, that “we eat first with our eyes”, often attributed to the first-century Roman gourmand Apicius. Research shows that the way food is arranged visually affects our perception of its flavor and can lead people to alter their food choices. Sometimes, even different aspects of the same sense can override each other. This is demonstrated by the Stroop effect, in which the name of a color such as “green” is printed in a different color, such as red. We take much longer to name the ink colors of these words because the perceptual information is incongruent.

Beyond the tendency for illusory perceptions, our senses are simply unable to detect many phenomena that can now be measured with scientific instruments. Our eyes can observe only a small fraction of the electromagnetic spectrum. Our ears can perceive only frequencies between 20 Hz and 20 kHz. Our sense of smell is deficient compared to that of dogs, whose incredible noses help humans with law enforcement and may even help detect COVID-19. The limitations of our senses lead us to an even bigger question — are there phenomena that we simply cannot know because we have no way of detecting their existence?

Writer’s note: this is part one of a three-part essay. Click here for part two.

Labeling (and binaries) https://archive.mattelim.com/labeling-and-binaries/ Sun, 24 Jan 2021 16:56:31 +0000 https://archive.mattelim.com/?p=209 A unique aspect of human beings is our ability to use abstract and complex language. We can use language not only to communicate ideas but also to think and make sense of the world. For some of us, the latter exists as an internal monologue. Through language, we can name and label tangible objects, intangible experiences, and even abstract concepts that exist primarily in the mind. Labels are very useful as they are efficient pointers to meaning. I can easily communicate to someone on the opposite side of the planet that, “the sky here is blue, with a few fluffy white clouds.” Without much effort, they will almost immediately have a rough mental image of what I am saying. At the same time, what they imagine in their minds will almost certainly not be identical to what I am seeing. Therefore, while the labels “blue” and “fluffy white clouds” are sufficient in evoking a general idea, they fail to capture the specificity and nuances of my experience of the scene. The appropriateness of the labels that I use also differs by context. While the sentence is sufficiently descriptive for a friend asking about the weather, it is likely inadequate for a meteorological report.

Labels, therefore, are simplifications of usually more complex experiences. Additionally, they are ideal versions of whatever they are meant to point to. For instance, most people would say that they know what the word “black” means and can identify black things in their environment. Let us say that we get someone (you can try this too) to look for one black object. Once they have found this object, they are asked to look for another black object, preferably one that is darker than the first. Now, we have two objects in front of us that are black. One of them will likely be darker than the other. By definition, the lighter black is not black, but a grey. Suppose this person repeats this process — they will likely find an even darker black, rendering all previous examples grey. As of now, at least on our planet, this process ends with the blackest material ever created, developed by researchers at MIT. The title was previously held by Vantablack, which caused some controversy when the British artist Anish Kapoor acquired exclusive rights to it. Black is defined as “the very darkest color owing to the absence of or complete absorption of light; the opposite of white.” The only thing that is truly black in our universe is a black hole. However, it is unlikely that anyone will ever perceive one up close unless they are interested in a one-way ticket into the darkness. Yet the fact that we do not need to perceive this true black means that an ideal version of black already exists in our minds. Therefore, while we experience imperfect instances of black through perception, the concept of pure black is one that is understood by the mind.

This perspective seems to echo Plato’s theory of forms, which posits that true reality exists separate from the physical reality in which we reside. In this higher reality are the ideal and perfect essences of all things, which people can access only through thought and reason. While I do not think that such a realm exists, I do believe that in human language, the use of oppositional labels ultimately leads to the imaginary extrapolation of extremes. To put it another way, whenever we use opposite terms, they become such exaggerations of themselves that they can no longer exist in the real world. To illustrate this, consider the second half of the definition of black, which names “black” as the opposite of “white”. The eye perceives white when the three types of cone cells in our retina are equally stimulated by strong light. As with black, we will always be able to find an even brighter white, rendering every white we have perceived up to that point a grey. Unlike our search for the purest black, however, our quest for the brightest white will be cut short by permanent eye damage. The film director Ridley Scott once asked, “Life isn’t black and white. It’s a million gray areas, don’t you find?” I would agree. Hence, strictly speaking, black and white mostly exist as ideal absolutes in our minds, while the versions we perceive in everyday life are shades of grey.

This act of labeling applies not only to color but to every other aspect of our lives. Are people (innately) good or evil? Should societies organize themselves around capitalist or socialist economies? Should we prioritize individual freedom or the common good? Should governments be conservative or progressive? While such questions often expect one choice or the other, the actual answer is often a combination of both or lies somewhere in between. We should be cautious whenever a pair of labels is presented to us as binary opposites. Oftentimes, these pairings are arbitrary and not mutually exclusive, creating false dichotomies. Moreover, it is quite unrealistic to assume that the complex richness of our world can be reduced to one simplistic idea. By identifying the gradient that exists between supposed opposites and focusing our attention on appreciating subtlety rather than polarity, we can have much more productive discussions that expand our knowledge and push us forward.

Additionally, we often look for opposites where none exist. Thinking in a purely binary way often yields little insight. Instead, we can think about how labels relate to one another and what kind of space exists between or among them. Labels can be thought of as points in an indeterminate thought space (similar to the one described by Peter Gärdenfors). By connecting two of these points, we get a one-dimensional line. Sometimes, we can connect three or more points, creating two-dimensional planes (funny example by xkcd) or three-dimensional spaces.
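To make the geometric picture concrete, here is a toy sketch of my own (not Gärdenfors’ actual formalism): treat two oppositional labels as endpoints of a single made-up “lightness” axis, and the supposed binary dissolves into a gradient of points between them.

```python
# Toy illustration (not Gärdenfors' formalism): two labels as the
# endpoints of a one-dimensional "lightness" axis, with the greys
# in between sampled by linear interpolation.

def interpolate(a: float, b: float, steps: int) -> list[float]:
    """Evenly spaced points on the segment from a to b, inclusive."""
    return [a + (b - a) * i / (steps - 1) for i in range(steps)]

black, white = 0.0, 1.0  # hypothetical coordinates for the ideal labels
greys = interpolate(black, white, 5)
print(greys)  # [0.0, 0.25, 0.5, 0.75, 1.0] — the gradient between "opposites"
```

With more dimensions (hue, brightness, saturation, and so on), each label becomes a point in a higher-dimensional space, and the line becomes a plane or a volume, as described above.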

To further complicate the matter, some labels that we use are social constructs. This means that the labels themselves are not fixed but are continually renegotiated within our society. One of the efforts of feminism, for instance, is to question the conventional roles of men and women. This process changes our understanding of these labels and their relation to our identity.

I shall conclude by stressing that just because our labels are simplified abstractions does not mean that they are unimportant or meaningless. Labels are useful: they help us navigate the world and distinguish different experiences and phenomena. Labels may even have a direct effect on our perception. Researchers have found that the language we speak affects how readily we distinguish certain colors. We should just be aware that the world is a lot more complex and dynamic than the labels we use to represent it. Contrary to my earlier statements about black and white, I do not think that we should start calling things dark and light grey. We should not be so paralyzed by the ideal quality of labels that we become afraid to use them. For instance, I believe that gender is non-binary, but, to echo a recent opinion piece by Nick Cohen, if I look like a man and act like a man, then maybe I should identify as a man. One of my favorite slang words, which seems to be used with increasing frequency, is “ish”. “Ish” reinjects complexity and approximation back into otherwise oversimplified categorical labels and frees us to use ideal terms in more flexible ways. Hence, I am a man(ish).

A person’s capacity for change (pt. 2) https://archive.mattelim.com/a-persons-capacity-for-change-pt-2/ Mon, 18 Jan 2021 08:30:00 +0000 https://archive.mattelim.com/?p=198 Writer’s note: this is part 2 of this essay, click here for part 1.

We have now covered everything on our list except one — belief, which is the thorniest to deal with. Within cognitive psychology, belief is defined as a “propositional attitude”. The combination of beliefs that one holds forms a worldview (or belief system), which organizes one’s experiences and the subsequent actions one takes. Our worldview is such a fundamental part of ourselves that it comes as second nature; it is the closest conscious phenomenon we have to our primal instincts. One way to think about different belief systems is through the metaphor of different sports. Many sports use the same physical space, for example, a field. On a similar field, different games have different rules and objectives, so player actions carry very different meanings in each game. In American football, players have multiple ways to score, including touchdowns and field goals. In soccer, the only way to score is by moving the ball into the opponent’s goal. In the former, players grab the ball with their hands; in the latter, doing so would be a foul. The unique gameplay across team sports also shapes the roles within a team, with each game having its own set of player positions. Similarly, beliefs help people understand what is valuable, make sense of their actions in their society, and identify and perform the roles that they play. This view is summed up by a quote often attributed to C.S. Lewis: “We are what we believe we are.”

We can generally agree that, like cognitive tools, belief is not innate but acquired through experience. For instance, we are born with natural instincts to eat, survive, and procreate, but no one automatically believes they are a citizen of any nation-state. At the same time, beliefs are not only hard to change; they are often an aspect of ourselves that we cannot consciously choose, especially when instilled in us from childhood. Beyond biological relation, a shared worldview is often what ties us to the people closest to us. Oftentimes, this shared worldview takes the form of religion. Given the all-or-nothing nature of many religions that proclaim their belief as the sole version of the truth, the choice to leave the religion one was born into can have grave consequences, as it often costs the leaver their family and community. Such conversion (or deconversion) stories have been told by authors like Tara Westover in her best-seller “Educated” and Shulem Deen in his memoir “All Who Go Do Not Return”. Belief systems stem not only from religion but also from science, ethnicity, nationality, and, in this era of fake news, conspiracy theories. Swapping an entire worldview is usually precipitated by pivotal and sometimes traumatic experiences that prompt a person to question their fundamental beliefs. A historical example is Leo Tolstoy’s mid-life crisis, which led him to write his seminal essay “A Confession”. Which of us, however, can dictate what experiences we have in our lives?

Moreover, people usually avoid having their lives upturned. That being said, I do think that people generally want to behave in ways that are mutually beneficial for themselves and their wider community. To do so, we should critically evaluate our beliefs from time to time. This is not easy and requires moral courage because we may have to admit that we were wrong. Drawing our attention inward and reflecting on our own lives is an important element of self-renewal and gaining agency over our own development. The cultivation of inner life, however, may be made increasingly difficult with social media and our digital devices constantly begging for our attention.

A common theme throughout this essay, therefore, seems to be that attention and awareness are crucial in facilitating change in the mutable aspects of ourselves. Even though the body and unconscious mind are resistant to change, the conscious mind is far more pliable — we can learn new knowledge and thinking approaches, revise our base assumptions which help to frame our world, and become better at interpreting our experiences and their meaning in our lives. I would argue that such changes are meaningful and can have a huge impact on an individual’s life and that of their society. We often hear words that describe personal change. Some Protestant Christian churches use the term “born again” to describe the conversion to Christianity. After recovering from a particularly grueling ordeal or brutal setback, we may feel like a “new person”. Needless to say, these are figures of speech, but we find such internal changes so significant that we liken them to rebirth.

It fascinates me how the plasticity of our mind seems well-matched to continual sociocultural change. When Darwin adopted Herbert Spencer’s phrase “survival of the fittest”, he was referring not to physical strength but to being “better adapted for the immediate, local environment”. Similarly, our social survival depends on the ability to adapt and respond to emerging sociocultural norms. Our mind, therefore, is a tool for resisting premature obsolescence and remaining part of human discourse. However, just because we are able to change does not necessarily mean that we do. The philosopher John Rawls described our birth as a lottery. Our childhood conditions affect us throughout our lives and are the result of sheer luck. We should acknowledge how often we unwittingly become the people we are. To be an ally of change, both for ourselves and others, we need to practice compassion and non-attachment. Change is difficult — being kind to ourselves and others goes a long way in that struggle. By non-attachment, I do not mean to stop caring about the people you love but rather to give them the space to change. This applies equally to those whom we dislike. If we cling too tightly to an impression of a person, we limit their ability to change through our interpretation of who they were and how they ought to be.

Some of us may be struggling with who we are or trapped in incessant cycles of thought. Where there is change, there is hope. The belief that we can change gives us hope that tomorrow may be better because the inner conditions that we find ourselves in can and will change.

A person’s capacity for change (pt. 1) https://archive.mattelim.com/a-persons-capacity-for-change-pt-1/ Fri, 15 Jan 2021 02:51:07 +0000 https://archive.mattelim.com/?p=191 Writer’s note: this was a difficult one to write; I scrapped an earlier draft completely because the more I wrote, the more I found myself accounting for too many considerations, until I felt like I knew nothing about anything. That feeling prompted me to start over and adopt a structure that provided more focus.

Let me start by saying that this essay will adopt a somewhat unconventional structure. I will state my position on a matter upfront and work toward that destination through a process of elimination. If that sounds like an ignorant student attempting to answer a multiple-choice question by elimination because he is unsure, well yes — today, that student is me.

The topic for today is an individual’s capacity for change. This reminds me of an assignment that I did for my philosophy professor, Prof. James Yess, when we were discussing the topic of free will vs determinism. We were challenged with describing our position with six words, as a sort of homage to Hemingway’s six-word story. I wrote something along the lines of, “Freer — but not free — will exists.” My position here is that of a compatibilist, in short, I believe that individual agency can exist alongside determinism. As it relates to today’s topic, I believe that an individual should only be judged based on the things that they can reasonably change about themselves.

Let us begin by first unpacking the term “self”. We can think of the self from a first-person perspective: a physical body that can be moved by our volition and a conscious mind that thinks, imagines, and remembers, among many other mental actions. Straddling the false dichotomy of mind and body, we have senses that receive and interpret external stimuli, feelings that can experience the greatest joy and deepest sadness, and beliefs so deeply ingrained that they feel like second nature. Then there are aspects of ourselves that we are often unaware of — the unconscious mind. Before we go through this laundry list to evaluate which elements of the self are more changeable, let us quickly discuss why we would consider changing ourselves in the first place.

If we lived in a world where we were the only human being, we would probably not need to change ourselves much, except to learn behaviors that prevent physical pain, increase sensorial pleasure, and ensure survival by meeting our bodily needs. If we had an anger management problem in such a world, we might not be motivated to change, because acting on it would yield little negative impact. Perhaps we might hurt ourselves if we punched a rock — in which case we might change mainly to minimize pain, as mentioned earlier, but not to address the anger. Fortunately, in our reality, no man is an island — we live in an interconnected society filled with rich social relationships, where individual acts can have social outcomes. Humans are social creatures, and our relationships are very meaningful to us. Therefore, on top of the aforementioned reasons for change, we also try to prevent emotional pain and increase psychological wellness, not just on an individual level but across a wider social dimension. The earlier example of an anger management issue would have more serious consequences due to the potential to harm others. The person would also face more pressure to change due to the socioemotional mechanisms of guilt and shame. Many of our personal behavioral changes, therefore, can be traced to our desire to be good for our society.

Now, back to the laundry list — which of the previously stated aspects of the self are we more able to change? Alterations to the body are commonplace in certain parts of the world and among specific groups of people. In general, however, the body is not easily changed: procedures can be painful, expensive, and sometimes even life-threatening. I suspect this is why judging anyone based on how they look feels wrong. Next, the unconscious mind is usually out of reach unless we seek psychoanalytic intervention, which typically requires professional help. It is important to note, however, that the psychoanalytic definition of the unconscious is still debated to this day. If we take the cognitive definition of the unconscious and extend it to the realm of implicit cognitive biases and heuristics (as pioneered by Daniel Kahneman and Amos Tversky), we can counteract some of these automatic processes through conscious compensation. We therefore find ourselves in the realm of conscious thought and feeling, which includes sense perception, emotion (a.k.a. affect), cognition, and belief. By definition, we are aware of our consciousness, which makes it the most actively changeable aspect of the self relative to the previous two (i.e. body and unconscious).

We are aware of our sense perceptions, but they are generally unchanged by conscious thought. We can, however, compensate for perceptual illusions by being aware of them. Emotions, especially intense ones like anger and grief, can sometimes be felt viscerally, but they can be regulated through thought. Our emotions often come from our interpretation of events that occur in our lives. Practical philosophy, which aims to help people live “wiser, more reflective lives,” has been a central part of philosophers’ work since at least Laozi and Socrates, and likely predates them. Hence, even if we feel strongly about something that happened to us, we are able to respond in a measured way, sometimes by reframing the experience.

Cognition refers to the mental activities involved in acquiring knowledge and understanding. It can be strengthened through the various thinking tools and approaches that we learn and then employ to solve problems and make decisions. It is probably one of the most changeable parts of our mind, as seen from the huge investments that societies around the world make in educating people, especially the young, to read, write, and do arithmetic. Based on the World Bank’s figures, we spent around 4.53% of global GDP, equivalent to US$3.68 trillion ($3,682,348,740,000), on education in 2017. Generally speaking, someone who better understands how the world works should be able to behave in ways that benefit themselves and their society. They may also be better placed to understand complex, strategic, and long-term decisions that require trade-offs, compromises, and short-term sacrifices. Therefore, learning — specifically the acquisition of knowledge and skills — remains a powerful force for both personal improvement and social mobility.
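As a quick sanity check on those World Bank figures (assuming the percentage and the dollar amount share the same 2017 baseline), the two numbers together imply a global GDP of roughly US$81 trillion, which is consistent with 2017 estimates:

```python
# Back-of-envelope check of the education spending figures cited above.
education_spend_usd = 3_682_348_740_000  # US$3.68 trillion (2017)
share_of_gdp = 0.0453                    # 4.53% of global GDP

implied_global_gdp = education_spend_usd / share_of_gdp
print(f"Implied global GDP: US${implied_global_gdp / 1e12:.1f} trillion")
# Implied global GDP: US$81.3 trillion
```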

Writer’s note: I realized that this topic cannot be adequately discussed in a single 1000-word essay. Click here for part 2.
