Do AIs “think”? The challenge of AI anthropomorphization https://archive.mattelim.com/do-ais-think-the-challenge-of-ai-anthropomorphization/ Sun, 14 May 2023 03:03:14 +0000 https://archive.mattelim.com/?p=788 There has been an acceleration of artificial intelligence (AI) in the past year, especially in chatbot AIs. OpenAI’s ChatGPT became the fastest app to reach 100 million monthly active users, doing so within two months of launch. For reference, the runner-up, TikTok, took nine months, more than four times as long, to reach that number. ChatGPT’s release has sparked an AI race, pushing tech giants Google and Alibaba to release their own AI chatbots, namely Bard and Tongyi Qianwen respectively. ChatGPT marks a big change in the way we interface with machines: the use of human language. As chatbots become increasingly sophisticated, they will begin to exhibit more “agentic” behavior. In the technical report released alongside GPT-4, OpenAI defines “agentic” as the ability of AI to “accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning.” The combination of human language as an interface and increasingly “agentic” capabilities will make it very challenging for humans not to anthropomorphize chatbots, and AI in general. The anthropomorphization of AI may lead society to become more accepting of different use cases for AI, which could become problematic.

In a podcast interview with Kara Swisher, Sam Altman, the CEO of OpenAI, talked about naming their large language model (LLM) GPT-4 using a combination of “letters plus a number” to keep people from anthropomorphizing the AI. This has not stopped other AI companies from giving their creations human names. Naming aside, it is almost impossible to avoid using human terms to describe AI. The use of the word “agentic”, with quotation marks, points to how the development of AI is butting up against our current vocabulary. We use words that are conventionally reserved for human minds. When chatbots take time to respond to prompts, it is difficult not to label that processing of information as some form of “thinking”. When a chatbot is able to process our prompt in the way that we intended, it feels like it “understands” what we are communicating. The leading issues around AI are similarly described in human terminology. “Hallucination” occurs when a chatbot confidently provides a response that is completely made up. A huge area of AI research is dedicated to the “alignment” problem, which, according to Wikipedia, “aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles.” To the uninformed, this sounds very much like civic and moral education for students.

Humans tend toward anthropomorphism. We explain things in human terms, and anthropomorphism often helps to communicate abstract ideas. Nature documentary hosts give names to every individual in a pride of lions, describe their fights as familial or tribal feuds, and dramatize the animals’ lives from a human perspective. The 18th-century Scottish philosopher Adam Smith used the term “invisible hand” to describe how self-interest can lead to beneficial social outcomes. Researchers have found that anthropomorphic language can help us learn and remember what we have learned. As AIs exhibit increasingly human-like capabilities, it will be a challenge for people not to anthropomorphize them because we will use human-analogous words to describe them.

If we are not careful in delineating AI, which is ultimately a set of mathematical operations, from its human-like characteristics, we may become more accepting of using it for other purposes. One particularly tricky area is the use of AI as relational agents. The U.S. Surgeon General, Vivek Murthy, has called loneliness a public health “epidemic”, a view echoed by many. A 2019 survey by Cigna, a health insurer, found that 61 percent of Americans report feeling lonely. It is not unimaginable for people to think that conversational AI can help relieve loneliness, which the US CDC reports is linked to serious health conditions in older adults. If there is demand for such services and money to be made, businesses will meet that demand, especially since most cutting-edge AI research is conducted by commercial enterprises. In fact, similar situations are already occurring. In Japan, owners of the Sony Aibo robot dog are known to conduct funerals for their robot companions. While the robot dogs are definitely not alive, they have touched the lives of their owners in a real way. An article in the San Francisco Chronicle reported on how a Canadian man created a chatbot modeled after his dead fiancée to help with his grief. If chatbots were to make it easier for people to feel less lonely, would it lower the effort that people put into forging real relationships with actual human beings, who may not be as acquiescent as their artificial companions? How would human society evolve in those circumstances? As technology has often been used as a wedge to divide society, would AI drive us further apart?

Besides the more overt issues that come with anthropomorphizing AI, there may also be less perceptible changes occurring under our noses. Machines are tools that humans use to multiply and extend our own physical and mental efforts. Until now, the user interface between humans and machines has been distinct from human communication. We turn dials and knobs, flick switches, and push buttons to operate physical machines. We drag a mouse, type into a screen, and use programming languages to get computers to do our bidding. Now, we use natural language to communicate with chatbots. For the first time in history, the medium through which we interact with a machine is the same as that of cultural communication. We may eventually come to a point where most natural language communication takes place not between humans, but with a machine. How might that change language over time? How would that change the way that humans interact with one another? In a TED talk, Greg Brockman, President of OpenAI, joked about saying “please” to ChatGPT, adding that it is “always good to be polite.” However, the fact is that machines do not have feelings — do we dispense with courtesies in our communication with AI? If we continue to say “please” and “thank you”, are we unwittingly and subconsciously anthropomorphizing AI?

Perhaps we need to expand our vocabulary to distinguish between human and AI behavior. Instead of using quotation marks, perhaps we could add a prefix that suggests the simulated nature of the observed behavior: sim-thinking, sim-understanding, sim-intentions. It does not quite roll off the tongue, but it may help us be more intentional in our descriptions. In response to an interviewer’s question about how LLMs are “just predicting the next word”, Geoffrey Hinton, a pioneer in AI research, responded, “What do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict that next word, so you’re just autocomplete too.” Hinton got into AI research through cognitive science and wanted to understand the human mind. His response goes to show how little we comprehend what happens in our own heads. Hopefully, AI can someday help us with this. The tables might turn and we may come to see AI as our reflection — maybe we will find out that sim-thinking and thinking are not that different after all — if we survive the AI upheaval, that is.

Limitations to understanding (pt. 3): Culture https://archive.mattelim.com/limitations-to-understanding-pt-3-culture/ Sun, 06 Mar 2022 13:40:15 +0000 https://archive.mattelim.com/?p=281 Writer’s note: this is part three of a three-part essay. Click here for part two.

In the previous two parts of the essay, I’ve discussed how our senses and mind could limit our ability to understand the world. I will be concluding this three-part essay by turning my focus to culture. First, a working definition of culture: “The arts and other manifestations of human intellectual achievement regarded collectively.” This is one of the broader versions of the word, encompassing all collective human creation (including technology) across different geographical areas.

No man is an island. I think it is important to state the significance of this, even though it seems plainly obvious. All of our thoughts are shaped by prior thinking conceived by someone else. For instance, when we try to communicate and manifest abstract thoughts and feelings verbally, we use words that we did not invent. Taken in aggregate, the whole of this preceding thinking is equivalent to culture.

One approach to wrap our heads around this is structuralism, which began in the early 20th century (unsurprisingly) within the field of linguistics. Structural linguists realized that the meaning of a word depends on how it relates to other words in the language. Earlier, we defined the word “culture” using a string of other words. Every word is defined by other words. We can imagine language as a network of relationships between words. The implication of this is that a word has no meaning on its own, except where it fits structurally in the system. Over time, this idea came to be applied in other fields like anthropology and sociology, notably by figures like Claude Lévi-Strauss. Structuralism then became a “general theory of culture and methodology that implies that elements of human culture must be understood by way of their relationship to a broader system.” Structuralism, simply put, is an approach to understanding cultural “phenomena using the metaphor of language.”

The structuralist approach can be similarly applied to what we think, feel, know and understand. Coming back to the main thesis of this essay — what and how we understand is shaped and limited by culture. Several thinkers have explored this in their own ways. Zeitgeist, a German word that literally translates as “time spirit” (or less clunkily, “spirit of the time”), is a term commonly associated with Hegel. The term is defined as “the defining spirit or mood of a particular period of history as shown by the ideas and beliefs of the time.” This shows that, at least since the 1800s, there has been an acknowledgment that certain ideas and beliefs are bound to a specific time. Marx later built upon the idea with the bedrock concepts of base and superstructure. He defined the base as the economic production of society and the superstructure as the non-economic aspects of society, like culture, politics, religion and media. (Do note that my definition of culture includes both base and superstructure, but we can continue for the time being.) Marx’s thesis is that products of culture (superstructure) are shaped by the means of production (base). This, to some extent, built on Hegel’s zeitgeist and explains how and why ideas and beliefs change over time.

The two (similar) concepts that are most relevant to this essay came later. The first is the episteme, coined by Michel Foucault. The second and perhaps more popularly known idea is the paradigm (shift) by Thomas Kuhn. In his book The Order of Things, Foucault describes the episteme: “In any given culture and at any given moment, there is always only one episteme that defines the conditions of possibility of all knowledge, whether expressed in a theory or silently invested in a practice.” In other words, Foucault claims that the episteme sets the boundaries of what can even be thought by individuals of a culture – a sort of ‘epistemological unconscious’ of an era. Kuhn, a historian of science, described the paradigm shift in his book The Structure of Scientific Revolutions as “the successive transition from one paradigm to another via revolution” and claimed that it “is the usual developmental pattern of mature science.” While Kuhn used the term purely within the scientific context, it has become more generally used over time. Examples of scientific paradigm shifts include the Copernican Revolution, Darwin’s theory of evolution and, more recently, Einstein’s theory of special relativity. Each of these shook the scientific establishment of its time and, in the case of the first, resulted in banned books and Galileo’s house arrest. We can see from the first two examples that society can be resistant to change, despite overwhelming evidence. This further cements the notion that ideas can sometimes be too far beyond what can be accepted by the predominant culture.

Culture shapes and, therefore, limits our understanding in a variety of ways. Culture defines who gains access to knowledge and understanding. According to UNICEF, only 49% of countries have equal access to primary education for both boys and girls. The numbers only get worse further along the educational pathway. The gender disparity in education can be traced back to gender stereotypes and biases. Such implicit biases extend to inaccurate and unfair views of people based on their race, socioeconomic status and even their profession. They are insidiously absorbed through experience based on the social norms of our time and go undetected unless they are specifically made conscious. A form of philosophy and social science known as critical theory, started by the Frankfurt School in the early 1900s, aims to free human beings from prevailing forms of domination and oppression by calling attention to existing beliefs and practices. A development known as critical race theory, which seeks to examine the intersection of race and law in the USA, has recently been facing pushback in states such as Texas and Pennsylvania through book bans or restrictions within K-12 education. In this, we see a formal restriction of understanding by culture (in the form of a public institution). Further upstream in knowledge production, research deemed to be socially taboo can be severely limited. An example is the legal contradiction faced by scholars looking into the medicinal benefits of marijuana. The issue is nicely summed up by the following sentence from an article by Arit John: “marijuana is illegal because the DEA says it has no proven medical value, but researchers have to get approval from the DEA to research marijuana’s medical value.”

Beyond such visible examples, I think it is important to emphasize that most of the ways in which our individual understanding is shaped by the culture we are embedded in are hidden in plain sight. It is only in retrospect that misguided views and practices become obvious. Up until the 1980s in the UK, homosexuality was treated as a mental disorder, sometimes with electroconvulsive aversion therapy. Homosexuality was removed from the World Health Organisation’s International Classification of Diseases (ICD) only in 1992. Besides comparing present cultural attitudes with those from the past, we can also identify them through intersubjectivity, by comparing different cultures. In Singapore, homosexual acts are considered illegal based on Section 377A of the Penal Code, an inheritance from its past as a British colony. Other former colonies like Hong Kong and Australia have since repealed the law. Culture implicitly and explicitly defines what is normal within a group or society. As stated by Marshall McLuhan in his book The Medium is the Massage, “Environments are not passive wrappings, but are, rather, active processes which are invisible. The ground rules, pervasive structure, and overall patterns of environments elude easy perception.” This echoes a story from a speech by David Foster Wallace in which an older fish asks younger fish about the water, to which they later respond, “What the hell is water?” Normality is invisible in our daily lives; we do not notice it because it is the ground on which we (and all of our perceptions and thoughts) stand.

Like words, culture is self-referential. Culture shapes culture. This not only applies to how current culture gives rise to future culture but also operates in the reverse direction, where today’s culture can be used to look at yesterday’s culture. This reminds me of how the art critic Jerry Saltz says in a lecture that “all art is contemporary art because I’m seeing it now.” Strangely, our visions of the future and our recollections of the past can only be formed through the filter of the present moment. To repurpose a famous quote about McLuhan by his friend John Culkin — culture shapes the understanding of individuals, and individuals go on to shape culture. It is our collective human enterprise. Talk about culture often leads to the distinction between nature and culture, which separates what is of or by human beings from what is not. The funny thing is that the nature-culture discourse is itself facilitated through culture. It seems, therefore, that all understanding is filtered through culture.

As I wrap up, I would like to address some issues that have increasingly become noticeable while writing this essay. First, I have rather simplistically equated knowing and understanding when they are differentiable mental processes. Second, there seem to be different flavors of understanding, which can be mostly grouped into two categories: objective and subjective. The physical sciences fall into the former, while the humanities and social sciences seem to fall into the latter. The issue here is that interpretation seems to play very different roles in either. For objective questioning (e.g. why does an apple fall toward the earth?), there is usually a convergence towards a single theory, whereas, for subjective questioning (e.g. why do people generally think that babies are cute?), there is a divergence in different approaches to understanding a single issue (sometimes even opposing viewpoints within an approach), none of which is definitive in explaining a phenomenon. Third and finally, how much of our understanding is motivated by our perspective and how much of our perspective is derived from understanding? Perhaps I will attempt these questions in future essays.

Taking Stock https://archive.mattelim.com/taking-stock/ Sun, 28 Feb 2021 14:21:16 +0000 https://archive.mattelim.com/?p=228 Last Friday, over 10,000 recent graduates of junior colleges (JC) and Millennia Institute (MI) gathered at their alma maters to receive their A-Level results. For these teenagers (most of whom are 18 or 19 years old), this event marks the end of a 14-year journey through general education in Singapore, starting from kindergarten and ending in JC. Many other countries adopt a similar general education structure, which is increasingly labeled “K–12” internationally after the term was coined in the US. To be accurate, a majority of Singaporean students do not graduate from JCs, but from other institutions that provide more specialized or vocational forms of instruction. I am biased towards this particular group of students, however, simply because I taught a tiny slice of the cohort.

For these JC and MI students, receiving their results coincides with a pivotal choice that they will make in their lives. Prior to this point, making an independent decision about what to do with their lives has been rare. They may have to pick a secondary school after their Primary School Leaving Examination and/or choose among JCs and MI after their O-Level exams. However, this juncture is the first time that they have to pick a specialized path, one that will (for the most part) open the doors to a few jobs while simultaneously closing them for many others. The choice is a thorny one, with multiple criteria going into the decision arithmetic — family approval, cost of education, potential career options, passion for the subject, etc. However, if they do choose to continue their education, they will fall back into a familiar routine of structured learning, assessments, and grades.

For as long as we are enrolled in a school, we follow its set of rules, metrics, and schedules. Our performance is neatly packaged into a numerical score or a letter grade, published like clockwork at the end of every academic term. When we are students, these numbers often have an outsized impact on how we feel about ourselves and what we are worth. The power that these numbers have over us is not tied to their primary function, which is to serve as a proxy for our learning, but rather to the larger mechanisms and narratives that they are embedded within. Education is an important tool for social mobility. In Singapore, the data shows that someone with higher qualifications generally earns a higher income. The power of school grades, therefore, lies in their ability to eventually lead to a life that bears the symbols of success. There is a saying in Chinese, “钱不是万能,但没钱万万不能”, which roughly translates into, “Money is not omnipotent, but without money, one cannot accomplish most things.” Singaporeans are known for such pragmatism, which has led to a national success narrative associating three things: good grades, good job, good life. It is unsurprising that after outsourcing our sense of achievement for most of our lives to numbers on a transcript, we hop onto yet another number to measure our success as adults — the amount of money we have. This leads people to think, “As long as I score good grades, as long as I earn a lot of money — I will be successful and happy!”

If only life were that simple. The narrative that achieving high numbers in grades and income automatically results in success is useful socioeconomically but does not paint the whole picture. We live in a world that requires the consumption of goods and services to keep economies running. Without a functioning economy, governments are not able to generate the income from taxation that is required to run the state and protect its sovereignty. This necessitates a narrative that posits that the primary contribution any average citizen makes to a nation-state is through production and consumption. Therefore, the feeling of success is not caused simply by earning loads of cash, but rather by what it means within such a narrative — being a productive member of society. It seems to me, therefore, that at the heart of our various pursuits is a deep longing for meaning and purpose.

Meaning comes in many forms. It can be derived from doing something that we love or doing things for the people we love. An act can be considered meaningful if it affects people in positive ways. Meaning gives us a sense that we have purpose in this world. The difficult part is that the sources of meaning, and the balance among them, are different for everyone at different stages of their lives. There is simply no one-size-fits-all approach to having a meaningful life.

Sometimes I wonder if our reliance on extrinsic markers of achievement impedes our understanding of how we experience and create meaning. There is a lot to life beyond getting a job that pays the bills, so being able to make life judgments is a really important skill. Unlike the ones in tests, many questions in life do not have standard correct answers, nor is a majority approach necessarily the right one for an individual. One would have to evaluate and judge for oneself what is truly fitting before taking a leap of faith. No matter how much we know and how certain we are of our convictions, there will always be things that we cannot anticipate.

In life, when and how do we take stock? According to the Oxford dictionary, to take stock is to “review or make an overall assessment of a particular situation, typically as a prelude to making a decision.” I recently turned 30. A few days after my birthday, I got an email from the graduate school of my dreams. It said that I had not been accepted and that only 5% of all applicants were selected. I feel happy for those who made it into the program; their dreams live on. However, it is difficult not to feel slightly disappointed at this outcome because it feels, at this particular moment, that my efforts of the past few years (and if I were to be ludicrous, 30 years of my life) have amounted to nothing. It is easy to wallow in self-pity, but it is more meaningful and constructive for me to pull myself together and consider my next steps. In times like these, I personally find it important to be grateful for the journey that I have made so far and the people who are a part of it. Our life stories are woven only in retrospect and I hope that someday, I will see this event as part of a larger unfolding of my life. Life goes on; there is a lot more life ahead of me and I am in for the ride.

Proxies https://archive.mattelim.com/proxies/ Mon, 04 Jan 2021 17:59:39 +0000 https://archive.mattelim.com/?p=181 Can you recall the last time you counted something? Instead of intuiting our way around the world, we rely on some form of measurement when we deliberate our choices, especially if they are of particular significance. We may weigh the pros and cons to make a personal life decision. In a business setting, managers may draw up revenue projections to justify the costs of new investments. Thus, counting plays an important role in rational decision making. Representing aspects of our decision as numbers and figures can help us to view it in a more objective light. Sometimes, counting can also help us gain a more nuanced understanding. Instead of a world where movies are separated into either “good” or “bad”, we have five-star ratings that give us a sense of the extent to which a critic enjoyed a film. 

We communicate numbers as a natural part of our everyday lives. If someone tells me that they are 1.9 m (6’3”) tall, I know that they have a towering physique. If someone shows me a score of 200 on an IQ test, I may think, “she is either really smart or faking it… maybe both.” However, not all quantities are created equal. While it is relatively straightforward to measure physical properties like height, the measurement of conceptual properties like intelligence is far more complicated. Oftentimes, we tend to accept both types of quantities as equally factual when that is not the case. Numbers tend to be communicated in a manner that makes them seem objective and truthful, causing us to be fooled in the process. Perhaps this Jedi mind trick is a by-product of a world where science is regarded as the best descriptor of objective reality. A claim seems more credible if it states a number or quotes some statistics. Yet a presented number is only as good as the methodology that was used to derive it. A recent example of this abuse of numbers is the Texas Attorney General’s claim that Joe Biden’s win of four swing states had a probability of less than “one in a quadrillion to the fourth power”, which has since been refuted by mathematicians.

We use proxies to count the uncountable. The Oxford dictionary defines the word “proxy” as “a figure that can be used to represent the value of something in a calculation.” To use words from this essay, a proxy is a countable approximation of a conceptual property. Let us take the prior example of intelligence. There is no way of physically measuring someone’s intelligence. Intelligence is an individual’s ability to solve various types of problems, which can only be demonstrated when they solve such problems. Neuroscientists may find correlations between the physical structure of the brain and intelligence in the future, but it is important to remember that these are still separate measurements. This is akin to the difference between a person’s muscle-to-fat ratio and their athletic performance — related but distinct. The widely accepted approach for measuring intelligence today is the IQ test. An IQ test focuses on abstract reasoning, meaning that its definition of intelligence is extremely narrow. Alfred Binet, whose Binet-Simon Scale formed the basis of today’s IQ tests, himself said that such tests are inadequate for measuring intelligence as they do not consider other important aspects like creativity and emotional intelligence.

Another example, one that is close to my heart, is the measurement of learning through testing. Since my days in teaching school, the notion that assessment is one of the three key pillars of any teaching practice has been firmly impressed upon my mind. On its own, learning is an internal phenomenon, known only to the learner. Assessment, which often takes the form of tests and examinations, is used as a means to measure whether students have learned knowledge and/or skills. It is important to remember that while assessment seeks to represent student learning accurately, it is at best an approximation of that invisible process. The gap between learning and tests has been, and will likely continue to be, a matter of debate.

The impact of proxies often extends beyond the immediate measurement. School examination results impact the wider society by allocating greater educational opportunities to better-performing students. Public education serves to provide equal access to students of all socioeconomic backgrounds, therefore acting as a social-leveler and enabling social mobility. However, recent research has shown that a student’s “social class is one of the most significant predictors… of their educational success.” IQ tests have a particularly dark history due to their ties to eugenicists who, based on a simplistic understanding of genetics, believed “that society should keep feebleminded people from having children.” 

Proxies also affect our understanding of ourselves. Nowadays, it feels like for something to count, it needs to first be counted. There is even a cultural movement known as the Quantified Self, whose tagline reads “self knowledge through numbers”. To increase our self-esteem, we often chase countable goals — Instagram followers, tweet likes, salary, grades — but to what end? Do we question whether or not these numbers are truly meaningful? The use of proxies will likely only increase with time as computers and artificial intelligence become a bigger part of our everyday lives. Behind any recommendation made by a computer is a series of measurements, sometimes defined by a handful of data scientists, computer programmers, and user experience designers, that make assumptions about our personality and desires. This applies to a wide range of interactions, from the ads we are served on Google to the matches we get on a dating app.

Every time we accept a proxy figure, we are relying on an individual, group, or institution’s approach to measurement. Oftentimes, this approach is informed by theories, specific definitions of the measured property, and sometimes value judgments. This means that a proxy figure is not objectively factual, as it is derived from a particular perspective. We need to be careful about the numbers that we come across in our everyday lives. The statistician George Box once said, “All models are wrong, but some are useful.” Proxies are, at their essence, models for approximating abstract quantities. While proxies can be useful, a healthy dose of skepticism should be maintained to ensure that they are working properly and to our benefit.

Breathing Room https://archive.mattelim.com/breathing-room/ Fri, 07 Feb 2014 10:43:30 +0000 https://archive.mattelim.com/?p=97 This essay was written for an undergraduate philosophy class called “Meaning of Life” in the spring of 2014. The lecturer was Prof. James Yess.

Since Nietzsche proclaimed in 1882 that “God is Dead”, we have seen the demise of Christianity and theism in general, especially within the study of philosophy. The de facto worldview currently is determinism, a philosophy built on the principle that to each effect there is always a cause. Determinism is further nested within a naturalistic, materialistic view of reality, which states that every single phenomenon in the world can be attributed to the interaction of matter, made of atoms and molecules. Within the metaphysical context of materialist determinism, there are various views held by different philosophers, yielding separate and distinct worldviews. Generally, these worldviews belong to two groups, the incompatibilists and the compatibilists. Incompatibilists believe that free will is incoherent with determinism, and compatibilists believe the opposite, that the two are not mutually exclusive. It follows that incompatibilists like Honderich believe that an indeterminate self is an illusion, that our actions are caused solely by our environment and dispositions, and that an unfixed future cannot occur within determinism.

Ever since religion receded from a majority of our lives, philosophers have been trying to provide answers to the perpetual question of man’s yearning for meaning and purpose in a universe which is neither sentient nor alive. Among those who take the question sincerely, some of the more uplifting answers come from the existentialists and determinists. In general, they have stated that although life itself has no objective value, subjective value can exist. This subjective value is not found but created. The death of God requires man to take the empty driver’s seat. Instead of God’s will, we now form our own wills and pursue them. Man, once a creature, is now a creator. How apt is the description “Homo Faber” in our current paradigm. However, the hard incompatibilist view that Honderich and his colleagues have promoted threatens this outlook. Their belief that there is a fixed future undermines the creative potential that humans have for our future. Instead of owning these wills and pursuits, the hard incompatibilist would strike down their hopes and tell them that they have no part to play in the creation and fulfilment of their desires. The hard incompatibilist would wrongly insist that these are merely illusory, that the person has no part to play in the direction of her life and that her person is merely a combination of dispositions and environmental factors. Herein lies the space of uncertainty, which I term “breathing room”. The breathing room postulates that there is space for man to be a part of the causal process within a deterministic framework. (Within this essay the terms breathing room and space will be used interchangeably.) The exploration of this space seeks to provide an alternate narrative to the claims of hard incompatibilism through uncertainty that man has control of. It expounds a worldview that better resembles the everyday experiences of man. The gap will first be explored within determinism and then within neurology. A hypothesis about the workings of the gap, and how it ultimately affects human meaning and purpose, will then be discussed.

The hard incompatibilist claim that the future is fixed is, to me, a conclusion drawn very far from its deterministic basis. First, it seems apparent to me that by projecting the future from their deterministic worldview, hard incompatibilists are going beyond that boundary and putting themselves in a position of unnecessary speculation. Determinism shines most in retrospective reflection on events and occurrences, but it is meaningless to extend its relevance beyond the present. Although it may be true that an understanding of our past can lead to a more mindful approach to the future, this is incoherent with the worldview of the hard incompatibilist. Hard incompatibilists postulate that the future is fixed but cannot be known. Due to our lack of knowledge of this future, we would live in exactly the same way as we would if there were possibilities of multiple futures. To adopt this worldview is to believe that all of our choices are illusory and that there is no way at all that man can have any level of control over his life. The problem with this perspective, though, is that, like religion, it cannot be disproven. To a large extent, it is merely a gross extrapolation of the deterministic worldview. Clearly, if the view that the future is fixed is by itself speculative, how definite is the further claim that our choices, or life-hopes, as Pereboom calls them, are illusory? Since our future can never be known to us, it is therefore meaningless for us to postulate perspectives that would restrict our outlook, especially ones that could lead to an attitude of passivity in life. The breathing room therefore exists in this not-yet-determinate space between past and future, where our choices are made and our actions decided upon.

It seems logical that we would have no control at all over our thoughts and subsequent actions if they stem from our dispositions and environmental causes. However, that claim has to be examined further. To enter our decision-making process, environmental factors have to be represented within the brain’s network. Therefore, the external factors are first sensed as stimuli that are processed into functional subconscious or conscious information. If a bat is swung quickly towards us, the brain responds by interpreting the fast-moving object as “danger”. Within the brain network there therefore exist mental parallels or concepts of “bat”, “speed” and “danger”. Instead, if it was a soft foam tube swung towards us by a child, the concepts evoked within the brain could be “fun”, “squishy” and “safe”. Obviously, within a materialistic context, these mental parallels are physical phenomena, most likely occurring as neurons that are part of a larger neural circuit in the brain. Our dispositions are trickier because they can be construed as either an internal or an external factor. A view purporting that they are external factors presupposes a self that is separate from our dispositions. This view contradicts the general deterministic view that our self emerges solely out of the activities of our brain. As put succinctly by Daniel Dennett, our consciousness arises from the intricate “ratcheting” of our brain. Hence, it seems logical that our dispositions are subconscious internal factors that, when exposed to external factors, come together to cause an action. However, a component that does not defy deterministic limits can be introduced to this system and form part of causal determination. This component could be the thoughts of the conscious self. The belief that our subconscious greatly shapes our eventual actions does not inherently deny the effects our conscious thoughts have on the formation of choices and actions. Determinists like the Stoics and Descartes maintain that we are selves distinct from our dispositions. Pereboom also maintains that nothing in determinism rules out the view that a self can select principles of action and initiate action on their basis independently of the influence of her dispositions and environment. These views validate the possible existence of the breathing room, that choice can exist within a deterministic framework, without even the introduction of compatibilist notions. Instead of Pereboom’s suggestion that a self can initiate action independently, I believe that consciousness, dispositions and environment are all part of a codependent neural system from which decisions are made. This view of the human causal chain empowers people to be active in their decision-making process and not leave all of their choices completely to impulse and chance. Not only is this model of causal determination more familiar to the common man, it can also be observed in the beliefs of several philosophers. John Dewey, for example, stated that we do not learn from experience, but from reflection on experience. The reflection process is a conscious phenomenon which enriches our brain’s reward centers and stimulates the learning process, calibrating the ratchets of our brain with the lessons learnt.

In Man Against Darkness, Stace states that a man’s actions are as much events in the natural world as is an eclipse of the sun. Although I do generally agree with the naturalistic position, I doubt that an eclipse is a good analogy for the processes that occur within our brain. It has been said that there could be more neurons in our brain than stars in the Milky Way. That statement itself is probably enough to show how awesome the three pounds of matter in our cranium really is. The hard incompatibilist is awaiting the day when neuroscience provides all the answers to confirm their position. Currently, neuroscience has not painted us a complete picture of the brain’s workings. How the eventual findings are interpreted is crucial to the standing of current philosophical perspectives. It is this breathing room of uncertainty that allows multiple versions of determinism to coexist. For an object as intricate as the brain, I am not quite sure if scientists will ever be able to fully comprehend its vast, inherent complexities. In the face of that, scientists create models that can closely represent how the brain works. These models can capture a part of the brain’s functional properties, but not their entirety. Astronomy is the oldest science and has been around for millennia, but astronomers still use ever-changing models to understand celestial objects and phenomena. Meteorology has been studied for close to a millennium, but even now weather forecasts can only do so much in predicting tomorrow’s weather. Although neuroscience will eventually get closer to understanding the fundamental mechanics of the brain, it might never be able to create a model of the brain that can accurately predict the outcomes of brain function. The unexpected or uncertain nature of its outcomes does not stem from randomness of the kind proposed in Heisenberg’s uncertainty principle, but instead from a totally deterministic dynamical system, as can be seen in, for example, chaos theory, where dynamics are extremely sensitive to initial conditions. Until the day that neuroscientists can predict with utmost certainty the entirety of brain function, which arguably would take a very long time, we will never truly understand our ability to choose and affect our own decisions. Therefore, the presence of this breathing room of choice does not conflict with current neuroscientific fact.
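To make the chaos-theory point concrete, here is a minimal illustrative sketch (not part of the original essay) using the logistic map, a textbook example of a fully deterministic rule whose long-run behavior is practically unpredictable; the function name and parameter values are arbitrary choices for illustration.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0: a deterministic
# rule whose trajectories diverge rapidly from nearly identical starting points.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # one initial condition
b = logistic_trajectory(0.200001)   # an almost indistinguishable one

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f} (diff = {abs(a[n] - b[n]):.6f})")

# The two runs agree at first and then bear no resemblance to each other:
# every step is fully determined, yet prediction far ahead would require
# knowing the initial condition with unattainable precision.
```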

Thus far, only the existence of the breathing room has been argued for; how it affects human meaning and purpose has not yet been discussed. The space enables consciousness to be a part of the choice-making process, therefore providing a certain amount of agency, though limited, to persons. This limited agency allows people to take ownership over their projects, purposes and pursuits. It was discussed earlier how the brain has mental parallels of physical phenomena which act as part of the entire neural circuitry. Thus far, we have established that consciousness, dispositions and the environment are on deck for causal determination. Within the brain, these concepts have to be physically similar kinds of entities in order to interact with each other. Each of these concepts is material by nature. Within the current neuroscientific understanding, each of these concepts is a distinct neuron or group of neurons that is part of the entire brain network. Essentially, these neurons have the capacity to hold an idea or thought. Philosophers lament the loss of God in our increasingly secular societies, and how that has taken away universal morality and justice. However, to say that we have “lost” God is a misnomer. If God was never there in the first place, how is it that we have lost her? I argue that what we have lost is the idea of God, and that the idea of God occupies an important space in our brains. Post-theism requires that man’s purpose come from the aspirations that he has willed. Underlying dreams and aspirations are ideals and values. Without a set of ideals and values, we would not be able to create any purpose or meaning because they have to be put in context. As human beings, we tend to anthropomorphize all that we experience. Every religion therefore has human-like deities and gods. This can be seen even now, when philosophers call the universe “unfeeling” or “apathetic”, which does not make sense because the use of such terms assumes a human nature in the object. This is equivalent to telling jokes to a rock and expecting it to listen and respond; it is false and illusory. Perhaps our biggest error is in our desire to humanize every single object and experience we encounter. We set up ideals in our brain and, through religion, we idolize and consolidate these concepts. The power of the idea of God lies in its absolute perfection. Seen from this view, God is merely a human-like manifestation of the greatest of greats. The loss of God therefore entails the loss of a vision of absolute perfection. However, that does not mean that the vacuum cannot be filled. Perhaps one of the most interesting aspects of human cognition is our ability to understand and communicate abstract ideas. Some of these ideas, like love, are instantly relatable and sometimes visceral to most people, even though they may not be able to put them well into words. Within our brains, these ideals are kept as pure abstract concepts, untainted by the forces of reality. Unlike Bertrand Russell, I believe that there is an authentic space that our ideals can occupy. Our ideals in the brain are neurons in the network like any other, able to affect causal determination. Therefore, our ideals, consciousness, dispositions and environment all play a role in the determination of our lives and choices. Through the pursuit and passing on of our ideals, we can have a universal and, at the same time, unique purpose. This creates a narrative at both the grand and personal level.
Each person, through communication and chosen actions, passes on their ideals to the following generation and thus ensures that the goodness of man, and a part of themselves, exists for posterity. The younger generation, on the other hand, goes through a selection process of removing obsolete ideals and strengthening others to fit their newer contexts. Through a reversal, each person has now become a manifestation of their ideals, instead of the traditional opposite, which gave rise to idols. Instead of false deities, we now have real-life heroes embodying certain sets of beliefs.

The problem with this position is that abstract ideas might be less accessible to the uneducated masses as compared to anthropomorphized idols. For that reason, I will never downplay the relevance of religion, especially for those who are born into unfavourable circumstances without a chance for education. That said, the stand taken by this narrative is one that inherently values diversity and a wide variety of different ideals and values.

If we are the only conscious organisms in the world, we are the nervous system, the consciousness of the universe. Hitherto, we are the only beings able to appreciate the vastness of the universe within our brains. Given this powerful starting point, how can the ultimate narrative of man be that of any other species, to merely survive for a brief moment and perish? Most of us, despite this relatively young Godless context, would still aspire to do good. At the point of our death, most of us hope to leave the world a better place. As the late Steve Jobs once said, “We are here to leave a dent in the universe”. The claim is an exaggeration, but we all aspire to be able to affect others and create real positive changes in the world through our lives and actions. As Gandhi stated, we need to start with ourselves to change the world around us. The hard incompatibilist notion denies completely the possibility of self-change, which undermines our ultimate belief in making a difference, be it small or significant, in the lives of those who surround us. Even within a deterministic context, when people recognise this breathing room and start to take ownership of their lives, they realise that they can truly influence their lives and the lives of others. This allows them to take an authentically active approach to their lives. One of the lessons that can be gleaned from the demise of theism is that no matter how great the promised benefits, once people start to doubt the truth of their belief, it will soon crumble. I think that there is a parallel between that and the illusory mode of living promoted by several hard incompatibilists. The worst lie one could ever tell is the one told to oneself. Through their actions, deeds and stories, people become manifestations of their ideals, spreading good causes across the human network and allowing their ideas to be carried on by the next generation.
