Limitations to understanding (pt. 2): Mind
https://archive.mattelim.com/limitations-to-understanding-pt-2-mind/
Sun, 06 Jun 2021

Writer's note: this is part two of a three-part essay. Click here for part one.

For the second part of this essay, I will be looking at the limitations of the mind in facilitating the processes of knowing and understanding. To narrow the scope of this part, I will limit the discussion to mental processes at the individual level and to how our minds process and extend information. That said, this essay can only visit these topics in a cursory manner, and some of them will be explored in greater detail in future essays. The aspects of the mind that will be considered are its relationship to the senses, conscious mental phenomena (like rationality and, more broadly, cognition), and less conscious ones (like subjectivity).

As mentioned in part one of this essay, the senses are the connection between the outer world and inner experience. Without such inputs, there are no stimuli for our minds to process. If the mind were a food processor, the senses would be akin to the opening at the top of the machine, allowing food to be put into the processing chamber, where the magic happens. Without the opening, the food processor is as good as a collection of metal and plastic in a sealed vitrine, stripped of its functional purpose, almost like the objects in works of art by Joseph Beuys or Jeff Koons. Similarly, the mind will not be able to work its magic without information provided by the senses. Consequently, the mind's ability to process and create mental representations is limited by the modality of our sensory experiences. If we were to try to imagine a rainforest in our mind, we would likely visualize trees and animals or perhaps recall the sounds of insects and streams. However, we will not be able to mentally recreate it in terms of its magnetic field, which other animals may be able to sense.

Rationality

One possible escape path from the limitations of sensory experience is rationality. To be rational is to make inferences and come to conclusions through reason, which is mainly an abstract process (as opposed to concrete sensory experience). One definition of reason is to “think, understand, and form judgments logically”. Through reason, humans can identify causal relationships through observation and formulate theories to extrapolate new knowledge; this process is also known as inductive reasoning. Theories of causality are the basis of science, which has enabled us to build the modern world. However, we often make mistakes with causation. One type of error is the confusion between correlation and causation. An often-used example is the correlation between ice cream sales and the homicide rate. Ice cream does not cause homicides, nor do homicides cause increased interest in the dessert. What explains this correlation likely has to do with hot weather instead. The Latin technical term for such causal fallacies is non causa pro causa (literal translation: non-cause for cause). Our thinking is riddled with fallacies — so many that there is no way I can cover even a fraction of them in this essay.
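
To make the ice cream example concrete, here is a minimal simulation sketch of my own (an illustration, not part of the original argument): both quantities are generated from temperature alone, yet they come out strongly correlated with each other.

```python
# Illustrative sketch: a confounder such as temperature can make two
# causally unrelated quantities look tightly correlated.
import numpy as np

rng = np.random.default_rng(0)

temperature = rng.uniform(10, 35, size=365)                      # daily temperature (C)
ice_cream_sales = 50 + 8 * temperature + rng.normal(0, 20, 365)  # driven by temperature
homicides = 2 + 0.15 * temperature + rng.normal(0, 1, 365)       # also driven by temperature

# Neither variable appears in the other's equation, yet they correlate
# strongly because both are driven by the same hidden cause.
r = np.corrcoef(ice_cream_sales, homicides)[0, 1]
print(f"correlation between ice cream sales and homicides: {r:.2f}")
```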

The notion of causality itself has even been called into question by the Scottish Enlightenment philosopher David Hume. He pointed out that causality is not something that can be observed like the yellow color of a lemon or the barking sound made by a dog. When a moving billiard ball hits a stationary billiard ball, we may conclude that the first caused the second to move. If we examine our experience more closely, we realize that we have made two observations: the first ball moving, followed by the second. The causal relationship connecting them, however, is an inference imposed by our mind. Our senses can be easily fooled by magnetically controlled billiard balls that sufficiently replicate our prior experiences, in which case our inference would be completely incorrect. Hume points out that what we usually regard as causal truths are often just conventions (also referred to as customs or habits) that have hitherto worked well. We are creatures of habit — we do not reason through every single situation we are faced with — most of us would very much prefer to get on with life by relying on a set of useful assumptions. However, we have to be aware of the shortcuts that we are making.

Most definitions of the word “reason” include the term “logic”. The most rigorous type of logic known to humans is formal logic, which is the foundation of many fields, such as mathematics, computer science, linguistics, and philosophy. Logic provides practitioners across these different fields with watertight deductive systems with which true statements can be properly inferred from prior ones. While logic is traditionally thought of as a primarily abstract and symbolic mental process, I believe that logic has a profound relationship with concrete sensory experiences. A popular form of logical argument is the syllogism (although it is antiquated and no longer used in academic logic). Here is an example: All cats are animals. Jasper is a cat. Thus, Jasper is an animal. Research has shown that people are generally more accurate at deducing logical conclusions when the problems are presented as Venn and Euler diagrams instead of words and symbols. This suggests that even for such seemingly abstract and symbolic mental tasks, our minds find visual representations more intuitive and comprehensible. It is for the same reason that humans find it so difficult to understand any dataset that has more than three variables. We are bound by three dimensions not only physically but also mentally — at most, we can create a chart with three axes (x, y, z), but we are simply not able to envision four or more dimensions. This is also why we can know about a tesseract (or any higher-dimensional hypercube) but can never picture it and therefore never fully understand it. While we are on the subject of diagrams and logic — did you know that a Venn diagram drawn with four circles cannot show all possible set intersections? The closest complete representations use spheres (3D) or ellipses (2D). Even more astonishing are the Venn diagrams for higher numbers of sets. Perhaps the comprehension of abstract logic does not require these concrete diagrams, but without them such ideas are far less understandable, especially for people who are not logicians. Reason has led us to create machine learning models and scientific theories that operate in high-dimensional space, but we are ultimately only able to grasp them through low-dimensional analogs, which to me suggests that complete understanding is impossible.
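
As a small aside of my own (a sketch, not part of the original argument): the cat/animal syllogism above can be written as set containment, which is exactly the structure an Euler diagram makes visible at a glance.

```python
# A minimal sketch of the cat/animal syllogism as set containment,
# mirroring what an Euler diagram shows visually.
cats = {"Jasper", "Whiskers"}
animals = cats | {"Rex", "Tweety"}   # "all cats are animals" = cats is contained in animals

assert cats <= animals               # premise 1: All cats are animals.
assert "Jasper" in cats              # premise 2: Jasper is a cat.
print("Jasper" in animals)           # conclusion: Jasper is an animal -> True
```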

A fascinating development occurred in logic in the past century — we now know through logic that there are things that cannot be known through logic. In the early 20th century, David Hilbert, a mathematician who championed a philosophy of mathematics known as formalism, proposed what is now called Hilbert’s program, which sought to resolve the foundational crisis of mathematics. Simply put, the program sought to show that mathematics could be grounded in a complete set of axioms and proven free of internal contradictions. More generally, Hilbert was pushing back against the notion that there will always be limits to scientific knowledge, epitomized by the Latin maxim “ignoramus et ignorabimus” (“we do not know and will not know”). Hilbert famously proclaimed in 1930, “Wir müssen wissen – wir werden wissen” (“We must know — we will know”). Unfortunately for Hilbert, just a day before he said that, Kurt Gödel, then a young logician, had presented the first of his now-famous incompleteness theorems. At the risk of sounding simplistic, the theorems essentially proved that Hilbert’s program (as originally stated) is impossible — a sufficiently powerful mathematical system can neither prove every true statement nor prove its own freedom from contradictions. In 1936, Alan Turing (who later became famous for helping to break the Enigma cipher) proved that the halting problem cannot be solved, which paved the way for the discovery of other undecidable problems in mathematics and computation. (Veritasium/Derek Muller made a great explanatory video on this topic.)
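
Turing's result can be sketched in a few lines of code. This is my paraphrase of the standard diagonalization argument, assuming for contradiction a hypothetical `halts` oracle that Turing showed cannot exist.

```python
# A sketch of Turing's argument, assuming (for contradiction) a hypothetical
# oracle halts(program, program_input) that always returns True or False.
def halts(program, program_input):
    """Hypothetical oracle: returns True iff program(program_input) halts."""
    raise NotImplementedError("Turing proved no such general procedure can exist.")

def troublemaker(program):
    # Ask the oracle about a program run on its own source,
    # then do the opposite of whatever it predicts.
    if halts(program, program):
        while True:        # predicted to halt -> loop forever
            pass
    return "done"          # predicted to loop -> halt immediately

# Feeding troublemaker to itself produces the contradiction: if the oracle says
# it halts, it loops; if the oracle says it loops, it halts. Hence no general
# halting oracle can exist.
```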

Logic (especially the formal variant) is a specific mental tool. It has limited use in our everyday lives, where we are often faced not only with incomplete information but also with questions that cannot be answered by logic alone. Most of us are not logical positivists — we believe that there are meaningful questions beyond the scope of science and logic. That is why we turn to other mental tools in an attempt to figure out the world around us.

You may have noticed that I have twice used metaphors in this essay to describe the relationship between the senses, the mind, and culture. I first compared it to a computer and later invoked the somewhat absurd analogy of a food processor. Metaphors work by drawing specific similarities between something incomprehensible and something that is generally better understood. Language is not only used literally; it is often used figuratively through figures of speech. Metaphors belong to a subcategory of figures of speech called tropes, which are “words or phrases whose contextual meaning differs from the manner or sense in which they are ordinarily used”. While tropes like analogies, metaphors, and similes are used to make certain aspects of an object or idea more relatable, they can ironically also cause us to misunderstand or over-interpret the original thing that we are trying to explain. If I were to take my earlier metaphor out of context — the senses are to the mind what the opening is to a food processor — what am I really saying? That the mind reduces sense perceptions into smaller bits? Or that the senses are just passive openings to the outside world? Metaphors can easily break down when overextended beyond their intended use. This finicky aspect of metaphors was discussed by the poet Robert Frost in a 1931 talk at Amherst College, where he brought up the metaphor of comparing the universe with a machine. Later in the talk, he states that “All metaphor breaks down somewhere. That is the beauty of it.” While metaphors can clarify a thought at a specific moment, they can never explain the idea in totality.

This substitutive or comparative approach to thinking extends beyond metaphors and related rhetorical devices. We often approximate understanding by substituting an immeasurable or directly unobservable phenomenon with an observable one that we deem close enough. An example of this is proxies, which I explored in a previous essay. Another close cousin is mental models, which attempt to approximate the complex real world using a simplified set of measurable data connected through theory. General examples are statistical models and scientific models; more specific applications include atmospheric models, used to make meteorological predictions, economic models, which have been criticized time and again for their unreliability, and political forecasting models, which famously failed to anticipate two historic upsets in the UK and the USA in 2016. The statistician George Box said that “All models are wrong, but some are useful”, a view that has since been widely echoed. Models may get us close to understanding our world but are unlikely to ever fully encompass the complexity of reality. A visual model that we use every day without a second thought is the map. As maps are 2D projections of 3D space, they will never accurately represent the earth. The Mercator projection that we are most familiar with (used on Google Maps) is egregiously inaccurate in representing the relative sizes of geographical areas. This topic has been explored by many (National Geographic, Vox, and TED). In particular, some have pointed out how such misrepresentations can undermine global equity.
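
To put a rough number on the Mercator distortion (a back-of-envelope sketch of my own): the projection stretches lengths by about 1/cos(latitude), so apparent areas grow by roughly the square of that factor, which is why high-latitude regions such as Greenland look so much larger than they are.

```python
# Back-of-envelope sketch: Mercator stretches lengths by ~1/cos(latitude),
# so apparent areas are inflated by ~1/cos(latitude)^2.
import math

def mercator_area_inflation(latitude_deg):
    return 1 / math.cos(math.radians(latitude_deg)) ** 2

for place, lat in [("Equator", 0), ("Singapore", 1.3), ("UK", 54), ("Greenland", 72)]:
    print(f"{place:10s} ~{mercator_area_inflation(lat):4.1f}x apparent area")
```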

Another way that the mind approximates the understanding of complexity is through heuristics. The American Psychological Association (APA) defines heuristics as “rules-of-thumb that can be applied to guide decision-making based on a more limited subset of the available information.” The study of heuristics in human decision-making was developed by the psychologists Amos Tversky and Daniel Kahneman. Kahneman discusses many of their findings in his bestseller Thinking, Fast and Slow, including various mental shortcuts that the mind takes to arrive at a satisfactory decision. One example is the availability heuristic, which “relies on immediate examples that come to a given person’s mind when evaluating a specific topic.” Are there more words that start with the letter “t” or that have “t” as the third letter? We may be inclined to pick the former since it is difficult to recall the latter. However, a quick Google search will show you that there are many more words that have “t” as the third letter (19,711) than as the starting letter (13,919). This example shows that our understanding is limited by how our mind usually recalls ideas and objects by a specific attribute — in this case, how we remember words by their starting letters. Tversky and Kahneman’s work was inspired by earlier research done by the economist and cognitive psychologist Herbert A. Simon. Simon coined the term “bounded rationality”, the notion that under constraints of time and mental capability, humans seek a satisfactory solution rather than an optimal one that takes into account all known factors that may affect the decision.
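
Anyone curious can check the letter-“t” claim against whatever word list they have on hand. Here is a minimal sketch; the /usr/share/dict/words path is an assumption about a typical Unix system, and exact counts will vary with the dictionary used.

```python
# Sketch: count words that start with "t" versus words with "t" as the third
# letter. Assumes a word list at /usr/share/dict/words (common on Unix systems).
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if w.strip().isalpha()]

starts_with_t = sum(w.startswith("t") for w in words)
third_letter_t = sum(len(w) >= 3 and w[2] == "t" for w in words)

print(f"start with 't':      {starts_with_t}")
print(f"'t' as third letter: {third_letter_t}")
```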

When faced with a complex world, our minds simplify phenomena into elements that we can understand. Kahneman states that “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” He calls this process attribute substitution and believes that it causes people to use heuristics and other mental shortcuts. More broadly, simplification is a key pillar in the way that we currently approach our world; this attitude is known as reductionism. It is defined as an “intellectual and philosophical position that interprets a complex system as the sum of its parts.” However, in this process of reduction, holistic aspects and emergent properties are overlooked. Critics have therefore referred to this approach with the pejorative nickname “fragmentalism”. The components that comprise our understanding have gone through a translation from complexity to simplicity. It is not ridiculous to suggest that things are lost in that translation, thus impairing our understanding.

Non-rational processes

Thus far, we have mostly discussed rational (both formal and informal) and conscious mental processes that may inhibit understanding. We will now take a look at how the contrary can do the same. There are two non-rational phenomena that we can explore: intuition and emotion. 

Intuition refers to “the ability to understand something instinctively, without the need for conscious reasoning.” A more colloquial definition is a “gut feeling based on experience.” While intuition has been celebrated as useful, most notably by the writer Malcolm Gladwell in his book Blink, it has also been shown to create flawed understanding. Herbert Simon once stated that intuition is “nothing more and nothing less than recognition” of similar prior experience. In his research, Daniel Kahneman found that the development of intuition has three prerequisites: (1) regularity, (2) a lot of practice, and (3) immediate feedback. Based on these requirements, Kahneman believes that the intuitions of some “experts” are suspect. This was shown by research done by the psychologist James Shanteau, who identified several occupations in which experienced professionals do not perform better than novices, such as stockbrokers, clinical psychologists, and court judges. In scenarios where intuition cannot be developed, it becomes merely a mental illusion. Kahneman cites a now well-known finding in investing: index funds regularly outperform funds managed by highly paid specialists. Intuition can also lead us away from correctly understanding the world. This can be demonstrated by the field of probability, which can be very counterintuitive. The Monty Hall problem is a classic example of how our intuition, no matter how compelling it seems, can fool us. To me, the term “intuitive understanding” may be an oxymoron or a misnomer because our intuitions are not understood even by ourselves. One way to demonstrate understanding is through explanation. A gut feeling may compel us to act in a certain way, but crucially we are not able to explain why. If we are, then it is no longer intuition and resembles rationality instead. Looked at this way, intuition is good for taking action and making quick judgments but at best only provides a starting hypothesis for actual understanding.
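
For readers who distrust the Monty Hall result, a short Monte Carlo sketch (my own illustration) makes the counterintuitive answer hard to argue with: switching wins about two-thirds of the time.

```python
# Monte Carlo sketch of the Monty Hall problem.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the player's choice nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the only remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ~0.333
print(f"switch: {play(switch=True):.3f}")    # ~0.667
```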

Some may argue that emotion does not belong in a discussion about the mind as we tend to associate the mind with thoughts and not feelings. However, we cannot deny that emotion shapes our thoughts and vice versa. Emotion can move us to seek knowledge and understanding but can similarly deter us from them. When we are anxious, we may rush to conclusions without complete understanding. Fear can cause us to accept superstitions that undermine factual understanding. Sometimes, we may refuse to understand something if it can cause us to have a fundamental shift in the way we approach the world (I touched on this in a previous essay). This attitude is summed up by the saying, “ignorance is bliss.” The relationship between emotion and understanding often extends into wider society and will be revisited in the next part of this essay when I discuss culture.

Less conscious phenomena

There are less conscious parts of our mind that impede understanding. There seems to be an inherent structure to our mind and consciousness, which could limit our ability to understand. Historically, there have been two main ways to approach this: the more philosophically inclined phenomenology and the more empirical study of cognitive science. One idea from phenomenology is intentionality, which “refers to the notion that consciousness is always the consciousness of something.” This suggests that we cannot study consciousness directly, but only through how it conceives other things. An analogy for this is the light coming out of a headlamp — I am not able to see it directly since it is strapped to my head, but I can understand its qualities (e.g. color and brightness) through the objects that it illuminates. Therefore, we may never be able to fully understand our consciousness. From cognitive science, there are concepts like biases and pattern recognition. Cognitive biases refer to “systematic pattern[s] of deviation from norm or rationality in judgment”, and there is a long list of them. Biases can lead us to seek only information that confirms our prior knowledge, which may be wrong in the first place. The same information, when framed differently, can also appear to us as fundamentally distinct, which seems to reveal a glitch in our understanding. Our mind is also constantly recognizing patterns in daily life; this is one way in which the mind incorporates new experiences with prior ones. Pattern recognition is fundamental to essential aspects of being human, like recognizing faces, using language, and appreciating music. However, our minds can also erroneously notice patterns where there are none, a tendency known as apophenia. We experience an everyday variant of this whenever we perceive a face in an otherwise faceless object. This can cause us to misunderstand reality, a dangerous example being conspiracy theories that cause people to believe in absolute nonsense.

The mind is always positioned from a subjective perspective. We will never be able to think outside of our self. Our personal experiences and temperament can lead us to very different understandings of the world. The sociologist Max Weber pointed out that “All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view.” How do we determine the accuracy of our understanding when there are multiple perspectives? Given the unfeasibility of capturing every unique perspective, can we claim to understand subjective experiences? Subjectivity also suggests that there is a limit to the understanding of psychological phenomena. Many subtopics that we have discussed in this essay — senses, rationality, intuition, emotions — are ultimately internal experiences that cannot be confirmed by third-person objective observation. When someone says that they feel happy and another person says that they feel the same, would we ever know if they are experiencing identical feelings? 

Similar to how our senses are limited, our minds likely have constraints — we will never know what we cannot know. Our minds are the result of hundreds of millions of years of evolution for survival; from this perspective, the ability to know and understand seems to be a fortunate side effect. As far as we can tell, human beings represent the universe's best attempt yet at understanding itself, but it is not difficult to imagine our mental capacity as being just one point on a long continuum. While we will continue to know and understand more, we should never let hubris deceive us into thinking that our minds will be able to understand all that there is.

Writer’s note: this is part two of a three-part essay. Click here for part three.

Borrowed Time
https://archive.mattelim.com/borrowed-time/
Tue, 12 Nov 2013

This essay was written for an undergraduate philosophy class called “Philosophy of Death” in the fall of 2013. The lecturer was Prof. Donald Keefer.

In everyday situations, human beings are forced to make decisions based on a set of non-conscious beliefs and value systems. These form part of one’s intuition in dealing with immediate, urgent considerations, usually leaving the person no time to carefully make sense of the given scenario. These intuitions form a set of working principles with which we navigate our world.

One of these working principles that most would agree with is the idea that all lives have equal value. When this principle is put to the test, however, we can easily see how some people are usually “more equal” than others. More often than not, this general principle is overridden by other non-conscious intuitions based on the specific situation faced by any individual. The more interesting observation is how these intuitions seem to be the same for most people. These complex, intuitive value systems appear to most as simple common sense, but their mechanics are completely invisible and yet generally universal.

We shall now turn to a classic thought experiment to test this guiding principle: the trolley problem. Suppose you are the operator of a train moving at extremely high speed toward a fork in the track. Let us assume that the tracks were not properly designed, so both branches of the fork lead to the same destination. It is up to you to decide which track to use when the train reaches the fork. It just so happens that a fifty-year-old man and a baby are standing on either branch of the fork. Let us also assume that avoiding the choice is impossible: you have to decide whom you would save. More often than not, respondents to this question choose to save the baby rather than the old man. If the guiding principle that “all lives have equal value” were true, we would statistically expect an equal number of respondents to choose the baby and the old man. A preliminary conclusion at this point, therefore, is that humans are predisposed to believing that the length of a life is related to its value. This suggests that it seems fairer for someone to die if s/he has already lived a relatively longer life. The first guiding principle has been easily thwarted by the introduction of age.

This scenario poses a serious dilemma for most ethical systems. Take, for example, Kantian and Utilitarian ethics. A Kantian ethicist would argue that one has an equal duty to save both lives, but this provides no answer as to which life should be saved. The Utilitarian argument is just as feeble in this context: the decision of who should be saved has to be made by weighing the pains and pleasures that result from the choice. First, making that analysis within a split second is impossible. Second, the analysis of pain and pleasure is so subjective that one case could easily be argued over the other, given ordinary circumstances (that both individuals have loved ones who would feel pain from their death).

From a purely economic standpoint, saving the baby is not a fiscally wise decision. Due to the intertwining, complex nature of modern civilisation, it is reasonable to argue that our lives are supported by society at large. Most of our essentials are purchased and have passed through the hands of many people before our use. Therefore, everyone incurs a debt to society from the point at which they are born, remaining a dependent of the larger society until they become a working adult. A child is nurtured through the care of parents to become a responsible citizen who eventually contributes to society and slowly begins to pay off his/her dues. The baby is, and would remain, a dependent for the immediate future of his/her life. The fifty-year-old, however, assuming that s/he has led a normal, productive life, has already paid his/her dues to society and perhaps has already contributed a significant portion to society's well-being in general. The only economic argument for saving the baby, therefore, is the potential that s/he has to contribute more back to society compared to the old man, and that potential is only a hypothetical possibility.

The conundrum of the relationship between the length and the value of lives continues in philosophy. In his Letter to Menoeceus, Epicurus argued that death is not an evil but a matter of indifference. Since Epicurus believed in the hedonistic thesis that human experience boils down to pleasure and pain, much like the proposals of later Utilitarians, death is by itself a neutral occurrence since it takes away the possibility of experiencing both pleasure and pain (Scarre, 87). Epicurus' argument further extends to the implication that when we die does not matter, because at the point of death, we cease to be.

Feldman tries to refute Epicurus' argument by proposing hypothetical possible worlds to which one's life could be compared (Scarre, 91). Feldman argues through the analogy of the dead ploughboy, pointing to the other, better lives he could have led. His case falls apart easily because for every better scenario that can be imagined, a worse outcome can also be fabricated.

In Death, Shelly Kagan argues that death is bad through the deprivation account, which is essentially similar to the arguments made by Feldman. He later concludes by saying that puzzles about the question remain. Before diving too deeply into the argument about the evil of death, one can clearly observe that one cause of all these debates is that humans are intuitively predisposed to believing that a longer life is an inherent good.

However, these accounts do not fully explain our intuition to save the baby, because both individuals still have the potential to live long, fruitful lives. Even if we take this assumption into account, the same intuitions apply: the baby would tend to be saved significantly more often than the old man.

Now assume that you, the train operator, can look into the future and see the lives of these two individuals. Suppose the child and the old man both have an equal amount of time left to live in the world. This additional information shifts the scale, but not significantly. It is almost as if we see our lives as time bombs, set to go off at the average life expectancy of any given moment. The longer the time we have left, the more valuable the life of an individual.

However, when more details are added to the situation, the balance tips. Suppose the baby and the old man each have ten more years to live, but the baby dies young of a painful disease whereas the old man dies healthy in his sleep. This additional information causes us to want to save the old man more than the baby. Again, suppose the baby does not grow up to lead a fruitful life, for example suffering a depressing illness throughout his/her life or falling in with the wrong company early on and wasting his/her life as a criminal, whereas the old man goes on to lead a relatively shorter but happy remainder of his life. The same intuition to save the old man applies.

Arguably, this adds another dimension to this procession of our intuitions. This series of intuition tests starts to give form to our intangible, complicated intuitions. Our intuition seems to work like a non-conscious operational flow chart, driven by our values and priorities at any given moment. It accepts exceptions to rules and is extremely flexible at dealing with complex situations, and, amazingly, all without deliberate, rational thought. At this point, a simplified account of our general disposition is that humans value the potential of lives for pleasure. Death terminates this potential and is therefore seen as an evil.

Although our intuitions give us guiding principles which are very useful in everyday life, we should not stop challenging them through rational thought. Bringing these intuitions to light is important if we are to act on them deliberately. These intuition tests reveal the irrational but generally universal traits of human intuition. When we know our tendencies toward certain choices, we can make better assessments and judgments about whether they are truly good decisions. Although humans have the ability to rationalise and make good, deliberate decisions, we have to realise that much of our lives occurs through intuitive, automatic reaction. The analysis of intuition could point toward a direction for more robust ethical systems. By understanding our intuitions, we can also make better sense of our impulses and direct more meaningful lives for ourselves.

Works Cited

Scarre, Geoffrey. Death. Montreal: McGill-Queen’s University Press, 2007. Print.

Kagan, Shelly. Death. New Haven: Yale University Press, 2012. Print.
