Reframing AI as a belief system

As consumer-facing AIs hit the mainstream, we have experienced ripples of excitement, awe, anxiety and existential dread as people parse and internalise their potential. Where do the opportunities lie? How will the harms play out, and whom will they affect? How will our future selves reflect upon the decisions made today that lead to the entrenched status quos of tomorrow?

// This article is part of ongoing research for my next book on human and organisational sensemaking. It first appeared in Radar #39. //

There are many lenses through which to frame these questions: technological, environmental, financial, societal, regulatory, and so on. My hunch is that it is useful to frame AI models as a form of belief system, i.e. a thing that some people consider to be a source of “greater truth”, regardless of whether there is any evidence to support that point of view. The black-box nature of many current AI models, the increasingly diverse backgrounds and intents of the people engaging with them, and those people’s assumptions about the models’ veracity all lend themselves to this framing. AI content reflects back the beliefs and biases inherent in its training data and rules, and as such it entrenches (and, if widely adopted, amplifies) the status quo. For some people, however, the AI will be considered a core foundation for their belief system, akin to religious texts today.

The question of whether an AI has reached the higher level of Artificial General Intelligence (AGI) is a separate one. At this point all that is required is that some humans perceive it to have far greater insight into things that matter to them, pertaining to how to live in the communities in which they engage. The human population spans diverse life and educational experiences, vulnerabilities, support structures and gullibilities.


How does a belief system form? Why is this important?

A belief system is largely formed prior to adulthood, as a person absorbs the practices, behaviours and assumptions of family, peers and the communities in which they are present. It is updated as that person is exposed to other beliefs, and is often challenged by travel to other cultures whose belief systems are significantly different to their own—something that can result in so-called “culture shock”. The adage that “travel broadens the mind” assumes that a person is willing to accept the contextual limitations of their beliefs and accommodate other perspectives. However, exposure can instead reinforce assumptions about the supposed superiority of their own beliefs.

A belief system becomes the incumbent way of framing one's place in the world and—because it is largely unquestioned—takes considerable effort and willpower to unseat. Beliefs guide how people make decisions, behave and act, along with any number of other factors that affect societies at large.

Millions (and soon, possibly billions) of people are interacting with AIs that they perceive as showing some level of “intelligence”. Fundamentally, the less a person knows prior to interacting with the AI, the smarter it is likely to seem, so some people are significantly more predisposed to be drawn to an AI than others. In most situations the AI models don’t need to be “correct” to be considered for inclusion in a belief system, merely “plausible”.


Regardless of my personal preference, I’m not convinced that accuracy, or “consistently telling the truth”, matters for sustained mass-market adoption. As I shared in Radar #37, humans are already adept at functioning with a baseline of inaccurate information because it is hardwired into how we remember, recollect and communicate (a topic that I cover in our Sensemaking For Impact Masterclass). I posited that our current discomfort lies with the shift to AIs (and the people in the organisations that develop them) becoming the primary source of that inaccuracy, instead of the people or sources we have acclimatised to as the initiators. When enough people believe in an inaccurate version of events, for all intents and purposes it becomes the new reality (which is why, when combined with generative photos and video, “history” is going to be far more malleable than before).


Have we been here before?

What historical precedents might help us understand the future impact of AIs as a standalone foundation for belief systems? Instead of comparing it to the introduction of a technology such as the printing press, I propose we look to the introduction of Charles Darwin and Alfred Russel Wallace’s Theory of Evolution, which had a profound (for some) and mundane (for others) impact on the way humans make sense of the world. Introduced by Darwin in 1859 to skeptical scientific and religious communities, On the Origin of Species by Means of Natural Selection challenged prevailing assumptions about human existence, and spurred people to reconsider hitherto “unique human traits” such as “intelligence” and “wisdom”. Today, a full 164 years after its introduction, the Theory of Evolution is backed by reams of scientific research, has led to numerous new avenues of exploration, and has attained a mainstream position in many societies, yet it co-exists with the roughly 84% of people globally whose belief system is built on the tenets of faith (recognising that evolutionary beliefs can be in conflict with and/or coexist with those tenets).

Early thoughts on how this plays out

Every culture with an “identity and set of values worth protecting” will want (or will likely mandate) mass-market generative AIs and LLMs to preserve their cultural values and prevailing belief system. Currently this is largely framed in terms of the countries where the AI companies are based, e.g. the US or China, but outside of well-resourced nations expect these models to coalesce around shared values.

Ultimately, every consumer-facing AI model that supports prolonged, deep interaction (e.g. taking the role of a life-stage or life-long personal assistant) will be forced to take a position on substantive issues that support or challenge its users’ belief systems. If the AI is set up to avoid taking a stance, users will gravitate to more culturally aligned models where interaction involves less cognitive friction. This becomes an intense version of the internationalisation-versus-localisation discussion, because cultural values surface more readily here than in most services. For AI providers with market dominance, the question for deep-interaction AIs will be whether to offer:

  1. a single overarching model with a coherent set of baked-in beliefs, or
  2. multiple models, each with its own set of values and beliefs.

Long term, I don’t anticipate an AI model being able to adopt opposing positions on matters of substance for each user, because the inherent tensions are a political and nation-state minefield.

If your reaction to “AI being the basis for a belief system” is to dismiss it as the ramblings of a techno-utopian, consider instead how low that benchmark needs to be for this adoption to occur.


Bootnotes:

  • As should be clear from the tone of the article, I don't subscribe to the idea of AI-as-belief-system myself. However, there are, or will be, people who do, which is why the conversation needs to extend beyond AI-as-tool (thanks EO for the clarifying questions). Additional notes and follow-up conversation here.
  • Much of what forms a belief system is based on observation of real-world situations, so we need to consider the interplay between offline/online worlds, and recognise that more of life is gradually shifting online.
  • The pace layers framework is useful for thinking about the value to societies of faster- and slower-moving layers, and what occurs when a seismic shift changes the pace of the slower-moving layers that encompass societal values.
  • I'm assuming there will be high numbers of unique LLMs, but it's unclear how diverse the number and types of mainstream LLMs will be in the long term, impacted by, e.g., nation-state, civic and financial investments.
  • Although personal assistant AIs can be tailored to each user, providers will still be confronted with the politically charged issues of filter bubbles and belief silos.
  • In one legal scenario the systematic scraping of information to train specific AIs will prove to be the largest intellectual property theft in human history. Curious which newsletters are being scraped for training data.
  • There is a tendency to assume that classic texts such as On the Origin of Species were published with their ideas fully fleshed out, whereas the original publication left considerable room for debate. We can acknowledge that partially formed theories (and partially formed AIs) have the potential to grow into prevailing belief systems.
  • This essay forms a small part of the research for my next book on human and organisational sensemaking, and was seeded by reading Darwin's The Expression of the Emotions in Man and Animals. The role of emotional expression in human sensemaking will probably be covered in a future issue of Radar.
  • Finally, as a rule of thumb, every time someone uses the word “insane” or “mind-blown” to describe AI output, replace it with “based on my limited experience and imagination”. 

Photo: research in a train carriage passing through Ahmedabad, India.
