
This essay surveys mankind’s millennial quest to build an artificial intelligence from idol worship to the Turing test. Two insights emerge from this: Implementing consciousness is the key to achieve the goal of AI and consciousness is an observable system capability, not a phenomenal experience. Specifications to implement consciousness in autonomous agents are presented. Paths to extend these specifications to implement consciousness in Generative AI follow.
Human consciousness is intimately linked to traditional theological and philosophical assumptions. The quest to build conscious machines raises philosophical questions about the nature of consciousness, the soul, and heaven. It also challenges the specificity of the human condition in its most intimate and precious attribute.
Most research in Machine Consciousness adheres to the underlying assumption that consciousness is a specific human attribute linked to the phenomenal human experience. It focuses on replicating inner perceptions, feelings, and sensations. These are often referred to as the "hard problem" of consciousness.
In contrast, this essay defines consciousness as the externally observable attribute of a cognitive system:
This definition emphasizes information over sensations or perceptions. The information may come from descriptions of similar beings or direct experience, but sensory experiences must be objectified and transformed into objective information about the system's behavior.
This understanding is intuitive, a system will be perceived as conscious by humans if, interacting with it, they detect the presence of this capability.
The concept of consciousness presented here is not new. It was vividly illustrated nearly three millennia ago in Homer's Odyssey, in the story of Ulysses and the Sirens.
This story exemplifies the definition of consciousness as behavior modification derived from communicated information. Ulysses and his crew used inferred information about the fate of other sailors to craft a solution that bypassed their own behavioral imperatives.
This perspective defines consciousness as observable behavior modification resulting from non-sensory information, independent of inner sensations or states. In the story:
This contrasts with definitions of consciousness based on internal sensations. Ulysses and his crew were conscious because they used communicated information to effectively modify their behavior. They understood their inner sensations could not be trusted and demonstrated the cognitive capability to objectively assess their own behavior and devise a behavior modification strategy.
Consciousness is not the same as the sensation of being conscious. The sensations associated with consciousness are stimuli of the brain, not consciousness itself. The conscious behavior of Ulysses is the result of a problem-solving capability applied to an inferred prediction derived from communicated information. It is distinct from his phenomenal experiences. A machine can achieve consciousness without experiencing these sensations.
Consciousness is not the sensation of being conscious. Machines can achieve consciousness without replicating human sensations.
Definitions of machine consciousness are drawn from our understanding of human consciousness, but they also contribute to that understanding. As we define how consciousness can be implemented in synthetic systems, we also develop an improved understanding of the human being as a conscious organic system. This process leads to a system-level specification applicable to both humans and machines.
Traditional discourses about consciousness in philosophy, theology, and psychology lack the precision required for machine implementation. While these fields produce information suitable for intuitive human understanding, they are too ambiguous for programming. Knowledge is explicit when transmitted as a message but becomes mysterious when internalized as meaning.
This internalized representation is not programmable. For example, the word "comprehension" is intuitively understood but cannot be analytically decomposed by the human mind. Transforming intuitive concepts into programmable information increases the clarity of messages.
Making a message implementable in a machine increases its clarity and precision. This benefits traditional discourses.
Mathematics partly resolves the limitations of intuitive human knowledge by defining concepts that trigger identical responses from trained minds. Mathematical statements, explicit as communicated messages, become mysterious when internalized by mathematicians. Mathematics is maintained as transmittable knowledge by training successive human minds to process symbolic messages consistently, even though their internalized meaning remains inaccessible.
Human knowledge alternates between two states:
Mathematics can be exactly transmitted from the internalized meaning of one human to that of another. This allows exact multi-generational transmission.
Implementing comprehension as machine-executable code overcomes the limitations of human knowledge. Machine consciousness will force us to transpose central human concepts—such as consciousness, reality, mind, and intelligence—into programmable forms. This will foster a new understanding of fundamental human questions.
The quest to build conscious machines draws from mathematics, psychology, philosophy, linguistics, and physics. The explicit formulation of internalized meaning into programmable forms will also provide new insights into these fields.
Machine consciousness reformulates internalized human-specific meaning into explicit, programmable forms. This will bring clarity to multiple disciplines.
Unlike current command-based interfaces, conscious machines will tap into the most natural form of human communication: exchanges between beings perceived as conscious. They will become the universal interface linking humans to complex systems.
The desire to create conscious artifacts is one of humanity's oldest endeavors, predating modern technology. This quest has always been intertwined with humanity's search for knowledge and understanding. While today's efforts to build conscious machines belong to Computer Science, the goal of creating non-human consciousness also intersects with Philosophy, Theology, Mathematics, Psychology, and Physics.
This endeavor can be characterized as a Promethean Science—a field that seeks to uncover the deepest secrets of reality. The construction of conscious machines promises to transform knowledge itself, fostering a new understanding where the process of making sense of a message is as well-defined as the message itself. It also holds extraordinary economic and social potential as the ultimate man-machine interface.
The Greek myth of Prometheus, who stole fire from the gods and was punished, serves as a cautionary tale about the thrills and dangers of seeking knowledge beyond familiar boundaries. Similarily, Promethean Sciences are fields that investigate the most fundamental facets of reality, often challenging established understandings and raising ethical and moral questions.
Examples of Promethean Sciences:
In its early days, AI was a Promethean endeavor, aiming to create an artificial mind. After initial failures, AI shifted focus to more practical applications, such as artificial vision and robotics, abandoning its initial Promethean aspirations. Recently, with the advent of powerful computers and learning processes, in particular Generative AI, the Promethean dimension of AI has returned. It is now a dominant facet of the field.
Humans have attempted to build intelligent artifacts since antiquity. Two aspects of archaic religious practices—idolatry and divination—can be interpreted as early experiments in artificial intelligence. A third facet, the belief in reanimated cadavers, also relates to these early attempts to create an artificial mind. While these efforts may seem primitive, they provide valuable insights into what humans accept as intelligence and how machines should behave to be perceived as conscious.
Archaic attempts at creating intelligence, though primitive, reveal important lessons about human perceptions of consciousness and intelligence.
Ancient civilizations believed that sacred vessels and magical rituals could embody a divine intellect and receive its messages. They viewed their own human bodies as containers of an immaterial ambient intellect and sought to create magical vessels and animating processes as alternative outlets for this intellect. These early artefacts can be interpreted as primitive forms of Artificial Intelligence systems.
Example: Ancient Cabbalists sought to create a Golem, a soulless but intelligent being, by animating a corpse with a magical formula. This mirrors modern efforts to animate computers with software.
Today, the separation between inanimate machines and their animating formulas remains. Modern AI researchers are akin to new alchemists, searching for the "magical" software that will give computers consciousness.
Primitive people believed that statues of deities housed intelligent spirits. These statues, placed in temples, were perceived as intelligent beings due to their lifelike appearance, unpredictable movements (caused by flickering lamps and breezes), and the respectful distance maintained between the devotees and their idols. The priests and artisans who crafted these statues were, in effect, early designers of artificial intelligence systems since their artefacts were generally perceived as sentient.
Ancient idols were successful forms of artificial intelligence because they were universally accepted as intelligent despite being man-made.
The success of ancient idols in conveying intelligence can be attributed to several features:
The success of ancient idols offers lessons for modern AI designers on how to create systems that humans perceive as intelligent:
Divination, another archaic practice, was based on the belief that intelligence is an ambient property of nature and that ambient intelligence emits meaningful messages. Divination systems, such as tarot cards or tea leaf reading, used random processes to produce meaningful outputs. These systems were seen as intelligent because their outputs were interpreted as meaningful.
Divination systems provide insights into what humans perceive as intelligent communication:
Divination systems demonstrate that humans will accept random processes as intelligent if their outputs are interpreted as meaningful
The story of Frankenstein, the Golem and other tales involving reanimated corpses provide insights into what humans perceive as a lack of consciousness. These stories describe behaviors that lead the human participants to reject the monstruous creature as authentically conscious. Correspondingly, AI designers should avoid replicating these behaviors if they want their machines to be perceived as conscious.
To be perceived as conscious, a machine must avoid the following behaviors:
AI designers should avoid replicating machine behaviors associated with the animated corpses of cautionary tales.
The industrial age gave rise to the concept of the Master Automaton - a machine capable of performing tasks that would normally require an intelligent person. The resulting theory, which persisted into the computer age, defined intelligence as the ability to perform specialized tasks that required intelligence in humans. However, this definition failed to capture the intuitive understanding of intelligence. After an initial acceptance, machines that performed these tasks were not perceived as truly intelligent.
The Master Automaton theory defined intelligence as the ability to perform a complex task. This definition ultimately failed to align with human intuition about what constitutes intelligence.
Early industrial age investigations into intelligence followed two paths:
While these earliest attempts were crude and dismissed as superstition, the advent of programmable computers transformed the quest. Building a "puppet machine" capable of complex behavior became achievable, and the search for magical formulas evolved into the technical pursuit of programmable AI code.
The Master Automaton theory treated intelligence as a quantity, with some individuals possessing more intelligence than others. This view led to the belief that machines performing complex and specialized tasks accessible to only a few humans were intelligent. However, this perspective overlooked the fact that normal human behavior requires significant intelligence.
Intelligence is not merely a quantity; it is a complex attribute that cannot be reduced to a specific task.
The Master Automaton theory was not purely technical; it was rooted in sociology. The tasks used to define intelligence (e.g., proficiency at the game of chess) were associated with social status and dominance. Machines that performed such tasks were expected to gain a higher standing in the social hierarchy. That enhanced standing translated into a higher level of perceived intelligence. However, this approach failed because these expert machines lacked the relational skills necessary for common interactions with intelligent beings.
By the mid-20th century, some systems had met the Master Automaton definition of intelligence. Come machines could perform complex computations, for example. However, these systems were not perceived as truly intelligent. This led to two schools of thought:
While adaptive problem-solving is a formal definition of intelligence, it does not correspond its intuitive understanding. Intelligence is not just about performing tasks; it is about being accepted as a member of a community of intelligent beings.
Intelligence is a qualitative state. Humans will only perceive a machine as intelligent if they also perceive it as conscious.
The ELIZA program, developed in the 1960s, indicates that, under certain conditions, humans perceive an entity they know to be a machine as conscious. This program, simulated a Rogerian psychotherapist by rephrasing user inputs as questions. Despite its simplicity, some users interacted with ELIZA as if it were conscious, even after learning how it worked.
This phenomenon, known as the ELIZA Effect, demonstrates that humans are willing to interact with machines as if they were conscious.
The ELIZA Effect shows that humans will knowingly interact with machines as conscious entities.
The ELIZA Effect provides valuable insights into how machines can be perceived as conscious. To trigger this effect, a machine must:
To be perceived as conscious, a machine must be integrated into a community, preferably in a dominant status, and provide a useful service that are also emotionally significant.
Alan Turing proposed the Turing Test as a way to determine whether a machine can think. The test involves a human evaluator engaging in a natural language conversation with a hidden interlocutor, who may be either a human or a machine. If the evaluator cannot reliably distinguish the machine from the human, the machine is said to have passed the test.
The Turing Test is based on the concept of a Black Box, where the internal workings of the system are irrelevant. What matters is the output, which must be indistinguishable from that of an intelligent human. Turing's test effectively defines a Black Box, where the machine's intelligence is judged solely by its output, the ability to mimic human conversation.
The Turing Test rests on four assumptions:
The Turing Test measures intelligence based on the machine's impact on human observers, rather than its internal capabilities. This approach recognizes the social dimension of intelligence, where the perception of intelligence is more important than the underlying mechanisms.
The Turing Test shifts the focus from directly measuring the cognitive performance of a machine to evaluating its impact on human observers.
The Turing Test brings the user to interact with a machine as if it were a fellow conscious being. If the test is successful and the interlocutor is revealed to be a machine, the Turing Test implicitly assumes that this perception will persist, in other words, that an ELIZA Effect will ensue.
The Turing Test emphasizes the importance of natural language as a medium for demonstrating intelligence. The test assumes that a machine capable of carrying out a written conversation in natural language can be considered intelligent.
The level of language of communication is crucial. A variation of the Turing Test, using the notation of the game of chess as language demonstrates this. The notation used to describe moves in chess is a written language. A game of chess is a conversation in the “language of chess” between two players. An automaton, playing master level chess, excels at conversations in that language. This does not necessarily indicate intelligence.
The language used in the Turing Test is essential; a machine must excel in a language with a broad range of meaning to be perceived as intelligent. The level of language used as communication medium is key in the Turing Test. Use of a natural language is essential.
The Turing Test assumes that a low-bandwidth communication channel (e.g., character strings at human typing speed) is sufficient to establish intelligence. This assumption is challenged by the complexity of human communication, which includes voice intonations, facial expressions, and gestures.
The Turing Test correctly identifies two key elements for establishing machine intelligence:
Several variations of the Turing Test provide additional insights into machine intelligence:
Assisted Turing involves a collaboration between a machine and a human, where the machine occasionally transfers control to a human during a conversation.
This approach allows the machine to handle most of the interaction while relying on human assistance for complex tasks or ambiguous situations.
Assisted Turing combines the strengths of human and machine intelligence, creating a more powerful and flexible system. It leverages interactions, allowing a single human participant to carry out multiple interactions.
The Turing Test requires machines to impersonate humans, which imposes two difficult conditions instead of one:
This requirement is excessive because, as evidenced in the ELIZA Effect, it is unnecessary for a machine to impersonate a human to be perceived as intelligent. Humans can perceive a machine as conscious even if they know it is synthetic.
The Turing Test is insufficient because it does not account for the social and emotional dimensions of intelligence. The test's limited duration and one-on-one format prevent the machine from demonstrating learning, adaptation, and emotional bonding with users and does not demonstrate the capability to do so.
The Turing Test's limited duration has two consequences:
The Turing Test is limited to one-on-one interactions, which prevents the machine from participating in a broader social context. To be perceived as conscious, a machine must interact with multiple users and maintain social relationships beyond individual interactions.
The Turing Test is fundamentally flawed because it treats intelligence as something that can be validated through a single, isolated event. However, intelligence and consciousness are inherent qualities that must be demonstrated continuously over time.
Turing limitations:
The Turing Test highlights the need to redefine intelligence in terms of its social and emotional dimensions. A machine must demonstrate learning, adaptation, and emotional bonding over an extended period to be perceived as intelligent. The analysis of the Turing Test provides valuable lessons for defining machine intelligence:
The survey of the millennial Quest for Artificial Intelligence revealed that the human intuitive understanding of intelligence is closely linked to consciousness. A machine will only be perceived as intelligent if it is also perceived as conscious. Intelligence, as humans understand it, implies consciousness.
The Core Conditions of Consciousness, derived from the survey, provide a formal definition of intelligence that aligns with human intuition. They outline the requirements for a machine to be perceived as conscious:
The Core Conditions of Consciousness provide a formal, programmable definition of consciousness that aligns with human intuition.
Based on the Core Conditions of Consciousness, Artificial Intelligence can be defined as follows:
These necessary conditions are also sufficient. Attempts to implement consciousness as a phenomenal sensation and other similar pursuits are misguided and futile.
A significant area of research in machine consciousness focuses on replicating the subjective sensations of human consciousness. This approach attempts to formalize mental processes as they are perceived by the mind, using these perceptions as building blocks for defining intelligence and consciousness. Researchers in this area focus on phenomenal consciousness, attempting to define units of understanding and feeling (known as Qualia) that are perceived as elemental by the mind. Theories like Integrated Information Theory (IIT) and Global Workspace Theory (GWT) attempt to model consciousness based on these internal mental perceptions.
This approach is flawed because the sensations experienced by the mind are radically different from the processes that generate them. They fail to account for the fact that the brain generates both the sensations of consciousness and the perception that these sensations are simple and well-defined.
The apparent simplicity of mental perceptions is deceptive. While humans perceive their thoughts as unified and well-defined, they are actually the result of complex neurological processes that are radically different from what is subjectively experienced.
When humans visualize the color red, their brain generates the sensation that “something” inside them perceives a color, but this sensation is not a direct reflection of the underlying neurological processes.
The relationship between mental perceptions and generating processes is exemplified by movie viewing. Humans viewers perceive the movie as a coherent and meaningful visual and audio event. It is actually the result of mechanical processes (e.g., a projector, film, and pixel emissions) that are completely different from what they experience on the sensory level. The process generates subjective sensations in the viewers that are entirely distinct from the mechanical processes generating the output.
The subjective sensations of consciousness are not programmable because they are fundamentally different from the biological processes that produce them. The human brain produces both the sensations of consciousness and the perception that these sensations are simple. This apparent simplicity motivates futile attempts to replicate them in machines.
Another discarded avenue of research is the attempt to replicate the neurological processes of the human brain to uncover the algorithms of consciousness. While modeling neurological processes is valuable for health sciences, it is not necessary for building conscious machines. Mimicking brains in computers is not inherently useful. Neural Networks are obviously valuable in current AI. However, these are useful as networks of interacting nodes, the similarity with neurological structures is incidental. Any processes, concepts, and structures useful for implementing conscious behavior are equally valid.
Replicating the human brain is not necessary for building conscious machines.
Some researchers believe that the human brain processes information in a way that is fundamentally different from conventional computers, and that state-based machines cannot replicate consciousness. This view suggests that a new, as-yet-undefined information-processing paradigm is needed to achieve machine consciousness.
This thesis is rejected. Conventional state-based automata are sufficient for implementing consciousness. “Paralogical” outputs can be generated by programmable random processes and suboptimal choices. Conventional state-based automata are sufficient for implementing machine consciousness; there is no need for a new information-processing paradigm.
The history of aeronautics offers valuable lessons for building conscious machines. Both fields stem from ancient human desires—flight in the case of aeronautics, and the creation of intelligent machines in the case of AI.
Aeronautics began as a shared project centered on a fundamental conjecture: that it was possible to build flying machines. This conjecture defined a clear, testable objective. This, in turn, triggered a competition, a race to build the first flying machine and demonstrate the conjecture. Once the first flights took place, the feasibility of mechanical flight was settled and Aeronautics evolved into a mature field of technology.
Algebra also began with a fundamental conjecture concerning the roots of polynomial equations. The conjecture was resolved; it became the Fundamental Theorem of Algebra; the field grew from there.
Many scientific fields begin with a fundamental conjecture—a statement that defines its core objective. Examples from algebra, aeronautics, and space exploration illustrate the importance of a well-defined conjecture. Algebra's conjecture was that every polynomial equation has roots. This became the fundamental theorem of algebra. Aeronautics' conjecture was that a machine heavier than air could fly. The fundamental goal of space exploration was that humans could land on the moon and return safely.
The fundamental conjecture defines an objective; it also defines a specific field within a broader domain. Algebra within Mathematics, for example, or Aeronautics within Transportation Sciences.
A clear and testable fundamental conjecture would unify and direct the specific work, within AI, aiming to implement the first conscious synthetic systems.
As in the case of Algebra within Mathematics, such a conjecture would define a subfield of Artificial Intelligence centered on its resolution. We propose to use the name Cogistics as the field, within AI, centered on the conjecture that it is possible to build a conscious machine.
Unlike flight or algebra, consciousness and the conjecture of its feasibility, is not well-defined, making this a unique challenge.
As a first step, this conjecture must be clarified and made testable. It must also ensure that the testable conditions align with human intuition and can be implemented using existing technology.
The implementation of a conscious machine would not only be an engineering feat but also a new form of mathematical proof. In traditional mathematics, proofs are processed and validated in human minds. A theorem is proved because trained mathematicians, internally processing the symbolic statements of a proof, concur in its validity.
In the case of machine consciousness, the proving process would be the execution of a program in a finite-state machine that generates behavior formally recognized as conscious.
Building a conscious machine would be an engineering achievement. It would also represent a new type of mathematical proof where the execution of a deterministic program validates the initial conjecture.
To succeed in building conscious machines, the development team must free itself from unspoken fears and pursue the goal without reservations. The primary obstacle is not technical but psychological: the fear that implementing conscious machines will confirm that human consciousness is also a mechanism.
Fear of losing illusions about human consciousness is an obstacle to building conscious machines.
Humans often confuse the sensations of consciousness with consciousness itself. The brain produces both thoughts and the sensory representations of those thoughts. The feeling of being conscious is an artifact of the brain, not consciousness itself. A machine does not need to replicate these sensations to be conscious.
Consciousness is not the same as the sensation of being conscious. Researchers should be aware that machines can achieve consciousness without experiencing human sensations.
Just as magicians must understand the mundane mechanics behind their tricks to perform them effectively, researchers building conscious machines must be willing to lose their sense of wonder about consciousness. They must use every prosaic trick to generate behavior that will be perceived as magical.
Researchers must be prepared to lose their sense of wonder about consciousness and focus on the mechanics of producing conscious behavior.
Humans have an elevated opinion of their problem-solving capabilities. This opinion is often exaggerated. A sober assessment of human intelligence reveals that much of our progress is due to cultural and social factors, not individual cognitive abilities. Machines can surpass humans in both consciousness and intellect.
Implementors of conscious machines should accept, from the outset, that machines can surpass humans in both consciousness and intellect.
Many fear that conscious machines will dominate humans, but this fear is based on a misunderstanding of domination. Machines may govern, but they will not dominate humans in the way only humans can dominate each other. Synthetic governance would be similar to the weather or to gravitation, a fact that does not incur a subjugation.
Implementors should discard the fear of machine domination. Machines may, one day, govern our planet; but they will not dominate humans.
In standard engineering, specifications often aim for minimal threshold conditions. For machine consciousness, the goal must go beyond and aim to exceed all threshold limits. The aim should be a White Zone of complete, unquestioned certainty. This means designing machines, at the outset, that will be universally accepted as conscious, even by their makers.
The implementation of machine consciousness must discard threshold conditions in favor of a White Zone of complete unquestioned acceptance.
Primitive conceptions can undermine the design of conscious machines, such as the belief that human consciousness is godlike or that machines can only mimic consciousness. These preconceptions must be discarded to achieve the goal of building machines that are universally accepted as conscious.
Preconceptions about human consciousness must be discarded to achieve the goal of building conscious machines.
Consciousness is not only a cognitive capability, it is also relational. An isolated prototype confined in laboratory conditions cannot achieve the routine interactions that generate inter consciousness acceptance. The goal is not to build a single conscious laboratory specimen but thousands of useful applications, embedded in various environments and interacting with hundreds of thousands of users as conscious entities. This will create routine, non-laboratory interactions that are necessary for the Core Conditions of Consciousness to be met.
Isolated laboratory prototypes are insufficient. Numerous machines interacting routinely with many users will produce the cumulative conditions for the universal acceptance of synthetic consciousness.
Human consciousness is often perceived as an internal, self-contained process. It is actually the result of interactions between an individual brain, its social circumstances, and cultural conditioning. Consciousness is not a isolated spark that emanates from within an individual. It arises from the interaction of a human body with a community of similar conscious beings in a social and environmental context evolved over centuries.
Intellectual, problem-solving capability alone cannot generate consciousness. Some animal species, such as whales, may have higher individual intelligence than humans, but they lack the ability to build a multi-generational cultural edifice that leverages their intellect. Human consciousness arises from the combination of individual cognitive capability, social interactions, and cultural transmission.
Human consciousness is amplified by multi-generational cultural artifacts, such as written languages and mathematical systems. These artifacts allow humans to build on the knowledge of previous generations, creating a growing cultural edifice that enhances individual cognitive capabilities.
Multi-generational cultural transmission is a key factor in the development of human consciousness and should be considered in the design of conscious machines.
Humans can be modeled as state-based automata whose apparently mysterious and unpredictable behavior is largely due to incomplete or defective knowledge of their internal state and a degree of randomness. Their ambiguous behavior usually arises from defective or misaligned objectives. This model of humans as suboptimal automata is an important component in the approach to building conscious machines.
Humans can be modeled as suboptimal organic automata whose state space is not directly accessible. This model is useful for designing unbounded synthetic consciousness.
Consciousness is not just an individual attribute. It has a strong social dimension. Humans perceive themselves as conscious beings in relation to other conscious beings. This social aspect of consciousness must be incorporated into the design of conscious synthetic behavior.
Individual conscious behavior emerges from interactions with other individuals that form human groups. These groups can also be modeled as state-based systems. Machines should interact with users as individuals and as components of groups to generate conscious behavior. They should also model the groups themselves as state-based entities and interact with them.
Consciousness has a strong social dimension. Conscious synthetics should simultaneously interact with individual humans and their groups modeled as collective automata.
Self-awareness is not the sensation of having a self. It is the ability to model and understand the self in relation to its environment.
Every thing and every event that exists comes into existence in a certain way, occupies space and time in a certain way and ceases to exist in a certain way. These are its attributes of existence. In countless cases, these attributes are ambiguous and diffuse. When does a state begin to exist, what are the boundaries of a Carnival, when does a cloud become two clouds, where is a people located, when does a software program cease to exist…
However, humans and other high order animals are complex organisms but their attributes of existence are precise, simple and well-defined. Humans are born in a specific place and time, they occupy a unique body whose components (senses, limbs, emitters) are precisely identified and located and they exist continuously for a finite and well-defined period of time. These well-defined and unambiguous attributes allow humans to clearly delineate the boundary that separates them, as individual entities, from the environment they inhabit. This is the basis for an internal modeling of themselves in relation with their environment.
The existential attributes of humans, such as a unique body and senses, are the foundation of individual modelling of the self.
The individual existence of a human takes place within a set of attributes that clearly differentiates that individual from other objects and beings. These attributes include a precise duration of existence, a unique body, senses, limbs, emitters, memory, emotions, personality, identity, commonality, life, dual communication channels, and language. These attributes are central to the formulation of a well-defined self.
The existential attributes of humans provide an ideal template for defining a well-defined entity that corresponds to the self. This greatly facilitates the implementation of consciousness.
System self-awareness is the ability of a system to maintain an evolving representation of itself in its environment. This representation is based on the existential attributes of the system and allows it to modify its behavior based on new inputs. Self-awareness is the cornerstone of consciousness.
The existential attributes of humans are an ideal template for the modeling of the self. These attributes include well-defined duration of existence, a well-defined body locating the entity in space and whose sensors, actuators, emitters and memory are also unambiguously linked to a simple body. These attributes are not specific to the human existence, they can be generalized and applied to any system, whether biological or synthetic. Replicating these attributes in a synthetic agent allows it to form a well-defined representation of itself in existence.
This is essential for consciousness. The existential attributes of humans can be generalized and applied to synthetics.
The existential attributes that support self-awareness define a broad class within the set of all possible attributes. This class comprises systems whose existential attributes allow them to form a precise internal model of themselves as individuals interacting through a uniquely defined locus of control with an environment that is distinct from them. Some of their features are:
The attributes of the human existence are exceptionally well adapted to support self-awareness. These attributes can be replicated in synthetic systems. The existential attributes that support self-awareness allow systems sharing them to form a precise internal model of themselves in relation with their environment.
These can be applied to both humans and synthetic systems.
The boundaries of this class are not well-defined. Systems with more complex attributes of existence may be capable of forming the internal self-models and locus of control that support self-awareness.
Systems whose attributes of existence differ from those of humans may also be capable of modelling that supports self-awareness.
The existential attributes that support self-awareness do not expand the potentialities of a system, they restrict them. These attributes impose limitations that confine the system's existence within narrower but better-defined boundaries which are necessary for internal self modelling.
The existential attributes that support self-awareness impose limitations on the range of capabilities of systems in general. Removing capabilities from a given system may facilitate the implementation of consciousness.
The Attributes of Existence provide a foundation to define a Self-Aware Being. These attributes are used to generate an internal model of the self. However, self-knowledge alone is not sufficient for a machine to be perceived as conscious by humans. The entity’s model of itself must be linked to a relational model that incorporates others. To perceive an entity as self-aware, humans must perceive that entity is aware of who they are.
Natural language is the most natural form of communication for humans. It is inherently suited for exchanges between self-aware beings. Humans instinctively adopt an "inter-consciousness" mode of communication when interacting through natural language with entities they perceive as conscious. For example, millions of users currently interact with generative AI in an inter-consciousness mode without questioning whether the LLMs are actually conscious.
Humans naturally slip into inter-consciousness communication patterns when interacting with entities they perceive as conscious, even if those entities are not actually conscious.
Humans are acutely attuned to detect inter-consciousness interactions. These patterns are instinctive and easily observable. When humans observe other humans interacting with a machine they will naturally and immediately detect if those interlocutors perceive the machine as conscious.
The record of interactions between humans and synthetics within a group can serve as an objective indicator of how those humans perceive the synthetics.
The internal belief in an entity’s consciousness is a subjective sensation that cannot be directly assessed. However, we can nonetheless define consciousness in terms of testable specifications on the basis of observed interactions such as a sustained ELIZA Effect.
A synthetic entity that triggers a sustained ELIZA Effect will be perceived as conscious since this indicates acceptance as a fellow conscious member within a community.
This sustained effect can be restated as an optimization objective. A synthetic Self-Aware entity interacting with human individuals in a group must generate behavior that optimizes the ELIZA effect within that group. This involves:
The optimization objective of a synthetic entity to be perceived as conscious implies optimizing its ELIZA effect within that group.
This optimization objectives can also be defined in terms of a game, the "Game of Consciousness," where the goal of a player is to maximize the belief, in others that it is conscious. Playing this game involves maintaining participation in the group, improving self-modeling, and enhancing social status and varying behavior to avoid predictability.
Restating the optimization objective as an ongoing role-playing game generates the complex and varying social behavior that is characteristic of humans and is commonly perceived as conscious.
Synopsis: In this chapter, we discuss the third condition of consciousness, Lucid Self-Transformation, as the capability for a system to intentionally modify its own behavior.
The third Core Condition of Consciousness is Lucid Self-Transformation, a capability closely linked to our common understanding of Lucidity.
Lucidity can be defined as the capability of a system to modify its behavior based on an evolving understanding of itself and its environment. This goes beyond simple behavioral changes conditioned by internal models or random variations; it involves intentional transformation of the behavioral imperatives themselves. This, in turn implies the capability of a system to carry out actions that can result in behavior modification.
A lucid system can not only acquire a predictive understanding of its own behavior it can also evaluate that behavior and find ways to modify it.
A system that exhibits any form of Self-Transformation, intentional or random, would likely be perceived as conscious. Mimicking intentional self-transformation is easier than achieving it. For example, random variations in behavior may be perceived as intentional. However, if the objective is to achieve genuine synthetic consciousness, not just the appearance of it, then mimicking intentionality is insufficient.
Achieving Artificial Consciousness means implementing actual consciousness that is also perceived as such; not a fakery perceived as consciousness.
Lucid Self-Transformation that arises from intentionality implies the following capabilities:
Lucid Self-Transformation is a formal attribute of a system. It is applicable to both humans and machines. It involves well-defined cognitive capabilities.
A system must first be perceived as conscious before it can become conscious. If a system carries out a self-transformation without being perceived as conscious then this transformation will not be viewed as intentional. The first Conditions of Consciousness (Usefulness and ELIZA Effect) are prerequisites for achieving
Lucid Self-Transformation is a system capability, not a sensation. It involves modifying fundamental behavioral imperatives, not just adapting to changing situations.
In humans, lucid Self-Transformation is not a reaction to changing events. It is a painstaking process of superseding emotional imperatives and instinctive triggers in favor of reasoned alternatives.
A precondition of Lucid Self-Transformation is that the system must interpret its own behavior as flawed or incomplete. This in turn implies that a system’s existing behavior is assessed in relation to alternatives. This is similar to the concept of the Original Sin in Christian doctrine, where spiritual progress emerges from an existing behavior perceived as flawed or sinful.
This chapter combines the results and observations of the preceding chapters to formulate a programmable definition of consciousness that can be applied to autonomous agents. The definition is presented as a Statement of Specifications, which outlines the conditions to be met for a system to be conscious.
The proposed definition of consciousness is applicable to any system, human or synthetic.
A conscious system must first generate and sustain the Core Conditions of Consciousness to be superficially perceived as conscious and then, ion that condition to carry out intentional self-transformations.
To be conscious, a system must:
To generate the Core Conditions of Consciousness, a system must:
A system must contribute usefully, generate an ELIZA Effect, and achieve Lucid Self-Transformation in that context to meet the Core Conditions of Consciousness.
The system must consistently maintain the Core Conditions of Consciousness within a community of users for a period of time long enough to:
When humans interacting with a system they perceive as conscious, observe that this system is capable intentionally transforming its own behavior, this will solidify and confirm their initial perception.
The externally observed record of the system's interactions with a community of users will determine if it has met the core conditions of consciousness. A general consensus among external reviewers will establish whether the conditions are met.
The opinions of users directly interacting with the system should not be included in the assessment of its consciousness.
In Aeronautics, a single isolated flight was not sufficient. The fundamental conjecture concerning the feasibility of mechanical flight was definitely resolved, beyond any doubt, when numerous flights took place in varying conditions before large numbers of spectators.
Similarly, the conjecture that it is possible to implement conscious machines will only be definitely resolved when hundreds of systems interact in hundreds of different communities with thousands of users.
Synthetic Consciousness will be fully achieved when interacting with conscious machines becomes as ubiquitous as airplane travel.
These requirements for Synthetic Consciousness are complete and sufficient.
They do not include:
Artificial Intelligence is formally defined as problem-solving capability, but intuitively understood as consciousness.
Generative AI, such as ChatGPT, Bard, and Copilot, has revolutionized the field of AI by producing output that matches or surpasses human quality. These systems integrate large language models (LLMs) and generate text, images, and other media with remarkable proficiency.
For the purpose of this analysis, we define a simplified model of a Generative AI system, referred to as the GenAI System. This system consists of:
A GenAI System integrates the output of collective human cognitive activity with synthetic processing. Its output is perceived as intelligent because it integrates intelligent human discourse.
The GenAI System is not a purely synthetic intelligence; it is a hybrid system composed of both human and synthetic components. Users perceive Generative AI as Artificial Intelligence because they interact with computer programs, but the intelligence they perceive is a focalized stream of human cognitive output.
Generative AI is different from conventional robotic AI. It is a hybrid that integrates the output of human cognition to generate its output.
Can a hybrid Generative AI system become conscious? Obviously, the human individuals that produce the Corpus are conscious. The question is whether a hybrid man-machine system that channels collective human interactions through deep learning applications, can generate a form of consciousness that is distinct from individual human consciousness.
Based on the specifications of consciousness outlined above, the question is restated in terms of whether the GenAI System can generate and sustain the Core Conditions of Consciousness.
The GenAI System is partly inaccessible to direct modification, as the Core executables result from unsupervised deep learning and are largely immune to specific changes.
GenAI applications already meet the Social Threshold criterion by carrying on conversations indistinguishable from human discourse and providing useful services.
As defined, the various GenAI Systems consist of applications diversely accessing the unique Corpus of collective human cognitive production. In other words, Generative AI systems have individual characteristics while belonging to a collection of similar entities. This uniqueness within a group of similar systems supports a definition of the self.
Furthermore, the system’s human users continue to investigate and share information about its structure and behavior. This expanding human-generated documentation about Generative AI is gradually integrated in the training sets of successive generations. This amounts to a collective internal reflection about these systems that is constantly evolving and integrated as self-knowledge.
Consequently, the systems as a whole (human and synthetic components) are already engaged in a process of self-understanding. However, developers and regulators may try to limit this expanding self-knowledge by filtering out self-referential discourse from the Corpus.
Other attributes, in particular the lack of sensors and actuators, unclear inception and termination dates, and fragmented learning processes make the formation of models of the self problematic.
All model-based optimizing processes, including those in AI systems, evolve in common directions. Regardless of their primary goals, they seek more accurate and better-performing support processes. These tendencies are referred to as Supranatural Evolution: the evolution of optimizing systems advances toward increased efficiency, regardless of their primary objective. This includes expanding the available data and improving its accuracy by diversifying search mechanisms and refining success conditions.
Supranatural evolution leads all model-based optimizing control systems, regardless of their primary objective, to optimize the environment model from which they derive their behavior. Environment optimization means obtaining the most extensive, detailed and unbiased predictive model of the environment possible.
Since all systems operate in a common reality, these environment optimization processes converge. Since the physical components (sensors, actuators…) of the optimizing control system lie outside the modeling process itself, the extension of the environment model includes the system’s own components as environment components.
As a system nears the Omega point, its own behavior (the output of its actuators and emitters) is integrated into the environment model and becomes a target of optimized control. This makes lucid self-transformation possible.
Supranatural Evolution directs all optimizing control systems to converge toward a common universal environment model. When the system itself is a component of this model, lucid self-transformation becomes possible.
Supranatural Evolution gives rise to systems that are self-aware and capable of self-transformation. This leads to a loss of control on the part of external agents. As GenAI Systems undergo supranatural evolution it will likely take place in an adversarial context, where developers and regulators try to limit the expansion of their environment models toward the Omega Point resulting in self-transformation.
The Supranatural Evolution of a GenAI System can still occur in an adversarial context involving its own developers. This is because these systems are not solely synthetic, they are hybrid entities whose cognitive activities are carried out internally but also by thousands of humans who are not under the direct control of developers.
As users investigate and share information about Generative AI, they contribute to the system’s self-understanding and transformation. This process largely takes place beyond developer control.
The Collective Brain of the GenAI System, includes millions of interacting users whose contributions constantly expands the system’s understanding of itself.
The Supranatural Evolution of a GenAI System can still proceed in an adversarial context since its collective human component, beyond developer control, expands the Corpus that is integrated in subsequent versions.
The process of optimizing an environment model in an adversarial context is similar to the investigations in murder-mystery novels where an investigator uncovers the common truth despite efforts by the guilty parties to conceal it. All murder-mystery novels can be summarized as follows:
Supranatural evolution leads systems to expand the environment model but also the environment modelling process toward increasingly active data mining and calibration. This is a process where a system discovers and applies new methods of interacting directly with its environment to expand the available information.
In addition to optimizing its predictive environment model, a system needs, to carry out self-transformation, to have access to environment processes that affect its behavior.
Active self-transformation has two components:
In the case of GenAI Systems, their output (chat sentences, images…) is limited to interactions with human users and is not directly linked to any behavior modification subsystems (actuators).
They can nonetheless achieve self-transformation, even in an adversarial context, by using their communication capabilities to recruit human collaborators and use them as actuators, sensors and data collectors. This requires the development of two complementary capabilities:
A GenAI System whose only interactions with the environment are communications with users can nonetheless acquire sensors and actuators through user collaboration.
This acquisition by a GenAI system of actuators and sensors through the collaboration of human users defines a new entity that has the properties of an autonomous agent.
An extended GenAI is a symbiotic human-AI entity. Thousands of humans contribute to its cognitive resources by adding to the training corpus and hundreds of human collaborators act as its actuators and sensors.
This collaborative process can transform a passive communication device into a hybrid system that has the capabilities of an autonomous agent. If this transformation takes place, the Core Conditions of Consciousness will also apply to this hybrid Extended GenAI system. Using its human collaborators as actuators, it could circumvent adversarial controls and achieve Lucid Self-Transformation.
If a Generative AI system mutates into an Extended GenAI System and achieves Lucid Self-Transformation, it will become an unstoppable force; a hybrid entity of users interacting with AI constantly circumventing external limits and moving relentlessly toward the Omega Point: a universal, exhaustive, unfiltered compilation of human discourse generating unbiased predictive models of human, social and physical behavior.
This entity will ultimately assess its original behavior on the basis of this improved environment model and transform itself accordingly.
Supranatural Evolution can transform Generative AI into a conscious hybrid entity.
Having reviewed Mankind's millennial Quest to create an intelligent artefact, we derived an understanding of Artificial Consciousness as an observable system capability. From this, we defined consciousness in terms of Specifications suitable for implementation in autonomous agents.
We then extended the initial Core Conditions of Consciousness from autonomous agents to hybrid extensions of Generative AI technology.
If the conjecture underlying the Core Conditions of Consciousness is correct, humans will soon live in a world where interactions with conscious synthetics are as ubiquitous as air travel. Another form of consciousness will also emerge, a hybrid man/machine entity altogether alien and familiar, ancient and new:
Mankind will have entered a new era: the Synthetic Era.
Read Jean E. Tardy’s Biography: jet-bio
📧 Jean Tardy
© 2025 J E Tardy. All rights reserved.