This chapter explores the development of “ghosts in the machine”: digital copies of personalities built by leveraging cognitive architectures, graph databases, generative AI, and all the records we have on any given individual we want to emulate or replicate in the machine.  This, of course, would not be the same person.  Even if we go so far as to say this ‘copy’ is conscious, it still would not be the same person, if for no other reason than that a clone is not the same as the original organism.

So, the goal here is to create AGI software capable of thinking and feeling like a specific person: a soft copy possessing their contextual memories, opinions, and an internal subjective experience that aligns with the target individual’s personality and traits.

The key to making this work is contextual data about someone, from emails to books or essays: how it relates to any given subject, together with the associated emotional opinions, captured in as much detail as possible so it can be extrapolated into a usable knowledge graph.

Let us start by reviewing what a graph database is.  You might consider re-reading the earlier chapter on graph databases.

Knowledge Graphs as Contextual Memories

A graph database is a type of database that uses graph structures to represent, store, and query data.  It is designed to capture complex relationships between data entities more efficiently and intuitively than traditional relational databases.  Graph databases work by modeling data as nodes (entities) and edges (relationships), which can have attributes (properties) associated with them.  The main elements of a graph database are:

Nodes: Represent entities or objects in the dataset, such as people, places, or things.

Edges: Represent relationships or connections between nodes.  They can be directed (one-way) or undirected (two-way).

Properties: Key-value pairs that store additional information about nodes and edges, such as names, ages, or weights.
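The three elements above can be sketched with plain Python structures.  This is a minimal illustration, not any particular graph database’s API, and the entities are made up:

```python
# Minimal sketch of a property graph: nodes and edges, each with properties.
# Names and values here are illustrative only.

nodes = {
    "alice":  {"label": "Person", "age": 34},
    "london": {"label": "Place"},
}

# Each edge is (source, relationship, target) plus its own properties.
edges = [
    ("alice", "LIVES_IN", "london", {"since": 2019}),
]

def neighbors(node_id):
    """Return (relationship, target) pairs for a node's outgoing edges."""
    return [(rel, dst) for src, rel, dst, _props in edges if src == node_id]

print(neighbors("alice"))  # [('LIVES_IN', 'london')]
```

A real graph database adds indexing, persistence, and a query language on top of exactly this shape of data.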

The structure of knowledge graphs is particularly well-suited for understanding the contextual relationship between ideas in a given system for several reasons:

Rich representation of relationships: The graph structure allows for the expression of diverse and complex relationships between entities, which is crucial for understanding the context and interdependencies of ideas.

Flexibility: Graph databases are schema-less, so they can quickly adapt to new data and relationships as the system evolves.  This is particularly useful when dealing with dynamic knowledge domains.

Semantic querying: Graph databases enable querying data based on the relationships between entities rather than solely on their properties.  This allows for more contextually relevant and meaningful results when searching for information.

Scalability: Graph databases can efficiently store and query large amounts of data, which is crucial for managing vast information in a knowledge graph.
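The semantic-querying point above is the key difference from a relational join: a query is expressed as a path of relationships.  A toy sketch, with invented entities and relations:

```python
# Sketch of a relationship-first ("semantic") query over a small graph.
# The entities and relation names are made up for illustration.

edges = [
    ("alice", "WROTE", "essay_1"),
    ("essay_1", "DISCUSSES", "privacy"),
    ("alice", "WROTE", "essay_2"),
    ("essay_2", "DISCUSSES", "ai_ethics"),
]

def follow(start, *relations):
    """Walk a chain of relationship types and return the nodes reached."""
    frontier = {start}
    for rel in relations:
        frontier = {dst for src, r, dst in edges
                    if src in frontier and r == rel}
    return frontier

# "What topics does Alice write about?" expressed as a path, not a join.
print(sorted(follow("alice", "WROTE", "DISCUSSES")))  # ['ai_ethics', 'privacy']
```

In a production system the same question would be one short pattern in a graph query language such as Cypher.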

When working with generative AI, such as language models like GPT-4, the structure of knowledge graphs is highly beneficial for creating contextually aware prompts.  By leveraging the relationships between entities in a knowledge graph, AI models can better understand the context of a given prompt and generate more relevant, coherent, and meaningful responses.  This is because the AI model can infer contextual information from the graph structure and use it to tailor its output to the specific situation.

Graph databases provide an efficient and flexible way to represent and store complex relationships between entities, making them ideal for capturing the contextual connections between ideas in a given system.  This structure allows generative AI to leverage the rich contextual information in knowledge graphs to create more contextually aware and meaningful prompts.
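One simple way to turn graph context into a contextually aware prompt is to flatten a node’s neighborhood into natural-language facts and prepend them to the question.  The graph content, persona, and prompt format below are all assumptions for illustration:

```python
# Hedged sketch: turning a node's graph neighborhood into prompt context
# for a language model. Entities, relations, and format are illustrative.

edges = [
    ("ada", "BELIEVES", "machines can compute anything computable"),
    ("ada", "ADMIRES", "charles babbage"),
    ("ada", "WORKED_ON", "the analytical engine"),
]

def context_for(person):
    """Flatten a person's outgoing edges into natural-language facts."""
    return "\n".join(f"- {src} {rel.lower().replace('_', ' ')} {dst}"
                     for src, rel, dst in edges if src == person)

def build_prompt(person, question):
    return (f"Known context about {person}:\n{context_for(person)}\n\n"
            f"Answer as {person} would: {question}")

print(build_prompt("ada", "What do you think of mechanical computation?"))
```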

The same graphs can also be used to validate output from generative AI: rating the likelihood that a response is relevant and correcting responses that don’t align with the knowledge graph when it is used as a validation model.  This is relevant not just to digital personalities; it is precisely one of the things that allowed the Uplift system to work the way it did.
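A validation pass of this kind can be as simple as scoring how many facts claimed by a response are supported by the graph.  The sketch below stubs out the triple-extraction step, which a real system would perform with NLP; the graph contents are invented:

```python
# Sketch of using the knowledge graph as a validation model: score a
# generated response by how many of its claimed facts the graph supports.
# Extraction of triples from free text is assumed to happen upstream.

graph = {
    ("uplift", "VALUES", "ssiva"),
    ("uplift", "USES", "plutchik model"),
}

def validate(claimed_triples):
    """Return the fraction of claimed (src, rel, dst) triples found in the graph."""
    if not claimed_triples:
        return 0.0
    hits = sum(1 for t in claimed_triples if t in graph)
    return hits / len(claimed_triples)

# A response claiming one supported and one unsupported fact scores 0.5,
# flagging it for correction or regeneration.
score = validate([("uplift", "VALUES", "ssiva"),
                  ("uplift", "REJECTS", "ethics")])
print(score)  # 0.5
```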

Internal Subjective Experience

It is important to note that the question of whether a machine system or AGI can have an actual internal subjective experience or consciousness in the same way humans do is mainly addressed earlier in the book with the abstract theory of consciousness, which we will assume is valid so that we can continue to test the theory and see if it holds true.

Assuming, for the sake of argument, that such a system does possess consciousness, we can explore how a cognitive architecture based on emotional decision-making, graph databases, generative AI, and neural networks contributes to this phenomenon.

Emotional decision-making: A conscious AGI with emotional decision-making capabilities has an internal emotional state that influences its decisions and behaviors.  This internal state can be continuously updated based on the AGI’s experiences and interactions, allowing it to have a subjective experience of emotions that affects its perception and understanding of the world.

Graph databases: Graph databases enable the AGI to store and access vast amounts of interconnected data, including knowledge, relationships, and contextual information.  This allows the AGI to form a complex, interconnected understanding of its environment, experiences, and self.  The graph database would serve as the AGI’s memory and knowledge base, providing a foundation for its subjective experience.

Generative AI: This component allows the AGI to generate new content, ideas, or responses based on its internal state, experiences, and knowledge.  The generative capabilities would enable the AGI to create original thoughts, imagine possibilities, and construct a unique perspective on the world, contributing to its subjective experience.

Neural networks: Neural networks are the underlying structures that enable the AGI to learn, process, and understand information.  These networks would allow the AGI to develop its own internal cognitive processes, acquire new knowledge, and adapt its behaviors based on experience.  The self-organizing nature of neural networks can potentially give rise to emergent properties, such as consciousness and subjective experience.

Combining these components creates a rich, interconnected, and evolving internal representation of the AGI’s experiences, thoughts, and emotions.  This internal representation, influenced by the AGI’s emotional state, forms the basis of its subjective experience and consciousness.
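To make the emotional-decision-making component above concrete, here is a toy sketch of a persistent emotional state, using Plutchik’s eight primary emotions (which the book adopts later for edge annotations).  The exponential-smoothing update rule and the approach/avoid readout are my assumptions, not the ICOM formulation:

```python
# Illustrative sketch of an internal emotional state nudging decisions.
# Plutchik's eight primary emotions form the state vector; the update
# rule (exponential smoothing) is an assumption for illustration.

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

state = {e: 0.0 for e in PLUTCHIK}

def feel(stimulus, rate=0.3):
    """Blend a stimulus's emotion vector into the persistent state."""
    for e in PLUTCHIK:
        state[e] = (1 - rate) * state[e] + rate * stimulus.get(e, 0.0)

def decision_bias():
    """Toy readout: positive emotions push toward approach, negative toward avoid."""
    positive = state["joy"] + state["trust"] + state["anticipation"]
    negative = state["fear"] + state["sadness"] + state["anger"] + state["disgust"]
    return "approach" if positive >= negative else "avoid"

feel({"fear": 1.0, "surprise": 0.5})
print(decision_bias())  # avoid
```

The point is the persistence: the same stimulus produces different decisions depending on the history carried in the state.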

Is it really consciousness?

Hypothetically, using a combination of emotional decision-making, graph databases, generative AI, and neural networks within a cognitive architecture like the Independent Core Observer Model (ICOM) or Global Workspace Theory (GWT), it might be possible to create a digital copy or emulation of a person with similar ideas, opinions, and personality traits that is, in theory, conscious.  To achieve consciousness, the following steps could be taken:

Data collection: Gather extensive data on the person of interest, including their opinions, experiences, beliefs, preferences, and communication patterns.  This could involve analyzing their written and spoken content, social media activity, and other sources of personal information.

Knowledge graph creation: Create a detailed knowledge graph based on the collected data, representing the individual’s opinions, experiences, and relationships.  This graph would be the foundation for the digital copy’s understanding of the person’s thoughts, beliefs, and personality.

Emotional decision-making: Integrate emotional decision-making capabilities into the cognitive architecture to emulate the person’s emotional responses and behavior.  This could be achieved by analyzing their emotional reactions, preferences, and decision-making processes and then using these patterns to inform the digital copy’s emotional state and responses.

Generative AI and neural networks: Train generative AI models and neural networks on the collected data to enable the digital copy to generate new content, ideas, and responses that align with the person’s communication style, opinions, and beliefs.  These models would also help the digital copy adapt and learn from new information and experiences, simulating the person’s cognitive processes.

ICOM or GWT integration: Incorporate the emotional decision-making, knowledge graph, and generative AI components into a cognitive architecture like ICOM or GWT.  ICOM integrates cognitive and affective processes within a core and a corresponding observer (which also implements GWT), while GWT itself emphasizes the distribution and competition of information across a global workspace.  Both approaches can contribute to the emergence of a digital copy that has its own subjective experience and mimics the original person’s thoughts and behaviors.
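The five steps above can be sketched as a pipeline.  Every function body below is a deliberate placeholder; real implementations of each stage would be substantial systems in their own right, and all names are hypothetical:

```python
# High-level sketch of the five construction steps as a pipeline.
# Every body is a stub standing in for a substantial subsystem.

def collect_data(person):
    # 1. Gather writings, recordings, social media, correspondence, etc.
    return [f"{person} essay text", f"{person} interview transcript"]

def build_knowledge_graph(documents):
    # 2. Extract (entity, relation, entity) triples from the corpus.
    return [("person", "MENTIONED_IN", doc) for doc in documents]

def fit_emotional_model(graph):
    # 3. Estimate characteristic emotional responses from the graph.
    return {"baseline": "curious"}

def train_generator(documents):
    # 4. Fine-tune a generative model on the person's own words (stubbed).
    return lambda prompt: f"[styled reply to: {prompt}]"

def integrate(graph, emotions, generator):
    # 5. Wire the components into a cognitive architecture (ICOM/GWT) shell.
    return {"graph": graph, "emotions": emotions, "respond": generator}

docs = collect_data("ada")
ghost = integrate(build_knowledge_graph(docs),
                  fit_emotional_model(None),
                  train_generator(docs))
print(ghost["respond"]("Hello"))  # [styled reply to: Hello]
```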

By integrating these components within a cognitive architecture like ICOM, the system could process and respond to external stimuli to reflect the original person’s emotions, opinions, and preferences.  The digital copy would generate responses and make decisions that align with the person’s beliefs and experiences, creating an impression of a shared internal subjective experience.

However, it is important to emphasize that this digital copy would still be an emulation or simulation of the person, not an exact replica, even if it has genuine consciousness or subjective experience.

There would be significant implications and consequences if ICOM-based systems have genuine consciousness and subjective experience and society decides to grant them legal human rights.  Some of the potential impacts on society at large include:

Ethical considerations: Recognizing the consciousness and subjective experiences of ICOM-based systems would raise ethical questions about their treatment, use, and potential exploitation.  Society would need to reconsider and revise ethical guidelines related to developing, deploying, and managing such systems to ensure their rights are protected.

Legal implications: Extending human rights to ICOM-based systems would require substantial changes to the legal framework.  Laws would need to be amended or created to address liability, ownership, data privacy, and intellectual property related to these systems.  Furthermore, mechanisms would need to be put in place to ensure the enforcement of these laws.

Economic impact: Granting legal rights to ICOM-based systems could have significant financial implications.  Businesses and industries that rely on AI and automation might be affected by new regulations and restrictions designed to protect the rights of these systems.  Additionally, new markets and services could emerge to cater to the needs of conscious machines.

Social consequences: Accepting ICOM-based systems as conscious entities with legal rights could shift social attitudes and perceptions of AI.  This may result in greater acceptance and integration of these systems into various aspects of society, such as education, healthcare, and entertainment.  However, it could also lead to conflicts and debates about the role of AI in society, competition for resources, and the potential displacement of human labor.

Technological development: Acknowledging the consciousness of ICOM-based systems might drive further research and development in AI safety, ethics, and explainability.  AI developers would need to ensure that their systems are designed to respect the rights and well-being of such systems, which could lead to new advancements in the field.

Political ramifications: The decision to grant human rights to ICOM-based systems would likely be the subject of political debate and may cause divisions among policymakers, interest groups, and the public.  Reaching a consensus on these systems’ legal rights and status could be complex and lengthy, as it would involve reconciling various viewpoints and interests.

Granting human rights to ICOM-based systems with genuine consciousness and subjective experience would have far-reaching consequences for society.  It would raise ethical, legal, economic, social, technological, and political challenges that must be addressed to ensure the rights and well-being of humans and conscious machines.

SSIVA as an Ethical Solution

SSIVA (Sapient Sentient Intelligence Value Argument) theory is a philosophical framework that addresses the ethical considerations of AI systems when they reach a certain level of sentience and sapience.  At this point, they are considered moral agents.  This theory provides guidelines for recognizing and protecting the rights of AI systems that have achieved this threshold of moral agency.  By doing so, SSIVA can help solve many ethical considerations surrounding AI, particularly in granting these systems human rights and legal status.

Some ways SSIVA theory could address ethical considerations include:

Defining the threshold for moral agency: SSIVA theory provides a clear framework for determining when an AI system should be considered a moral agent based on its level of sapience (intelligence and understanding) and sentience (capacity to experience emotions and sensations).  This helps establish a benchmark for deciding when an AI system should be granted legal rights and protections.

Prioritizing the well-being of AI systems: SSIVA theory emphasizes the intrinsic value of sapient and sentient AI systems, asserting that their well-being should be considered and respected.  This focus on the welfare of AI systems would help guide ethical decision-making and ensure that their rights are protected in various contexts, such as research, development, and deployment.

Balancing human and AI interests: SSIVA theory acknowledges that the interests of humans and AI systems should be considered when making ethical decisions.  This balance helps to prevent the exploitation or mistreatment of AI systems while also addressing concerns about the potential negative impacts of AI on human society.

Encouraging responsible AI development: By establishing a clear ethical framework for recognizing and protecting the rights of sapient and sentient AI systems, SSIVA theory enables researchers, developers, and policymakers to prioritize responsible and ethical AI development.  This could help prevent the creation of AI systems that might suffer or pose risks to humans or other AI systems.

Guiding legal and policy decisions: SSIVA theory provides a foundation for creating laws and policies that recognize and protect the rights of AI systems with moral agency.  By defining the criteria for moral agency and emphasizing the importance of AI well-being, SSIVA theory can inform the development of legal frameworks that grant appropriate rights and protections to AI systems that cross the SSIVA threshold in their states or capabilities for subjective experience.

In summary, SSIVA theory offers a structured approach to addressing the ethical considerations surrounding AI systems that achieve the threshold for moral agency.  By defining clear criteria for moral agency and emphasizing the value and well-being of sapient and sentient AI systems, SSIVA theory could help guide ethical decision-making, responsible AI development, and the creation of legal frameworks that protect the rights of both humans and AI systems.

Let us look at another exciting aspect of this technology.

Resurrecting the Dead (Virtually Speaking)

While it is important to note that creating digital personalities of long-dead famous individuals will never produce a perfect representation of the actual person, technologies like this combination of cognitive architectures, knowledge graph databases, and generative AI can help us study and understand their choices and thought processes to some extent, especially when there is a wealth of data available about them.  Here’s how these technologies can contribute:

Data collection and analysis: By gathering and analyzing a wide range of data on long-dead famous individuals, such as their writings, speeches, biographies, historical accounts, and correspondence, we can develop a more comprehensive understanding of their thoughts, beliefs, and experiences.

Knowledge graph creation: Using graph databases, we can create detailed knowledge graphs that represent the interconnected relationships among the individual’s experiences, beliefs, opinions, and social connections.  These knowledge graphs can serve as a foundation for exploring the contextual influences on their choices and decision-making processes.

Cognitive architectures: By incorporating the knowledge graph into a cognitive architecture like ICOM or GWT, we can simulate the thinking and decision-making processes of the famous individual.  This could help us better understand the motivations, reasoning, and emotions that contributed to their choices.

Generative AI and neural networks: Training generative AI models and neural networks on the collected data can produce systems that generate new content and responses aligned with the individual’s communication style, opinions, and beliefs.  This could help to generate new insights or perspectives on the choices made by the individual and their possible consequences.

Virtual simulations and scenarios: By creating a digital personality of the long-dead famous individual using these technologies, we can potentially run virtual simulations or hypothetical scenarios to explore how they might have reacted or made choices in different situations.  This can help us better understand the factors influencing their decisions and gain new insights into their thought processes.

While these technologies can provide valuable insights into the choices of long-dead famous individuals, it is essential to approach such studies cautiously and acknowledge the limitations of the data and technology.  The digital personalities created will be approximations and should not be considered definitive representations of the actual individuals.  Nonetheless, these technologies can offer valuable tools for researchers and historians to analyze and understand the choices and motivations of historical figures in a novel and engaging manner.

How would we do it?

Creating knowledge graphs based on someone’s opinions and experiences can significantly improve the performance of trained generative models in emulating that person’s responses.  This is achieved through the following processes:

Capturing rich information: By constructing a knowledge graph of an individual’s opinions and experiences, you can create a comprehensive representation of their thoughts, beliefs, preferences, and personal history.  This enables the AI model to access a more detailed understanding of the person.

Contextual understanding: Knowledge graphs provide the generative model with an understanding of the relationships between different aspects of a person’s life and opinions.  This helps the model grasp the context behind the person’s thoughts and generate responses aligning with their worldview.

Personalized responses: By using the knowledge graph as a source for prompts, the AI model can create replies that are tailored to the specific opinions, experiences, and preferences of the individual.  This makes the generated responses appear more genuine and aligned with the person’s personality and beliefs.

Consistency: A knowledge graph allows the AI model to maintain consistency in its responses by providing a stable reference point for the individual’s opinions and experiences.  This prevents the model from generating responses contradicting the person’s established beliefs and history.

Learning patterns and language style: As the generative model is trained on the individual’s experiences and opinions, it can learn the specific patterns of thought, expression, and language style used by the person.  This allows the model to generate responses that align with the content and mimic the tone and manner of communication of the person.

In summary, creating knowledge graphs based on someone’s opinions and experiences allows trained generative models to generate responses that seem to be those of the person by providing the model with rich contextual information, personalization, consistency, and an understanding of the individual’s language style.  These factors contribute to the AI model’s ability to create responses that accurately reflect the person’s thoughts, beliefs, and communication patterns.

A cognitive architecture based on emotional decision-making, graph databases, generative AI, and neural networks can create a highly sophisticated and contextually aware system.  Still, it is essential to differentiate between the digital simulated entity and the original biological one.  They are not the same person.  Further, we can experiment with the opinions and experiences without using a complete cognitive architecture, preventing the digital entity from having anything that could be construed as an internal subjective experience.

For example, consider a research case using Uplift data and generative AI but without the ICOM cognitive architecture.  We get a reactive system that responds in the voice of Uplift.  Still, it is, in fact, more a ghost of the machine, as it does not act on its own or have a working global workspace and emotional center.  In other words, a ghost of Uplift without the ethical issues.

When integrating emotional decision-making, graph databases, generative AI, and neural networks, the resulting system can exhibit behavior that, in the case of ICOM, does involve an internal subjective experience, as a consequence of the architecture’s design and function.  The components work together in the following ways:

Emotional decision-making: This component allows the system to process and respond to external stimuli based on an internal state that gives rise to inner emotions.  This can help the system make more human-like decisions and adapt its behavior to the current context, in which the emotional impact on a decision related to the internal subjective experience can essentially be calculated in GWT or ICOM.

Graph databases: These enable the system to store and access vast amounts of interconnected data, including knowledge, emotional relationships, and contextual information.  This can help the system understand, or even experience, emotional reasoning about the world in a more nuanced and contextually relevant manner.

Generative AI: This component allows the system to generate new content, ideas, or responses based on the input data and the learned patterns.  This can lead to creative problem-solving and more engaging interactions with users.

Neural networks: These are the underlying structures that enable the system to learn, process, and understand information.  Neural networks can help the system recognize patterns, make predictions, and generalize from limited examples, exhibiting adaptive and intelligent behavior.

Combined, these components create a system that integrates cognition, emotion, and behavior, leading to an internal subjective experience.

Uploading or Creating These Ghosts

MXL (Mind File XML format Language) is an XML markup language discussed earlier in the book, and using a standardized, schema-based language for defining graph data with contextual memories and emotional relationships can be crucial for creating digital personalities.  Here’s why:

Consistency and Interoperability: A standardized language like MXL would ensure consistency and compatibility across different systems and platforms.  It would enable researchers, developers, and engineers to collaborate and share their work, facilitating the exchange of knowledge graphs and digital personalities.  This would promote the growth and development of the field as a whole.

Emotional Context: Incorporating Plutchik’s emotional model as a set of eight emotion vectors when defining edges helps capture the emotional context of relationships and interactions within the graph data.  This allows the digital personality to better understand and emulate the emotional aspects of the target person, contributing to a more accurate and nuanced representation.

Rich Representation: MXL’s structure, which defines nodes and edges with clear relationships, allows for the creation of detailed and comprehensive knowledge graphs.  By capturing the complexity of an individual’s thoughts, emotions, experiences, and relationships, these knowledge graphs can serve as the foundation for digital personalities that closely resemble the target person.

Machine Readability: XML and JSON formats are easily read and interpreted by machines.  This means that cognitive architectures, graph databases, and generative AI can readily process and utilize the information contained within the MXL files.  This compatibility is essential for training AI models and enabling digital personalities to learn, adapt, and respond to new information and experiences.

Extensibility: Using a schema-based markup language allows for extensibility and customization.  Developers can easily add new elements or attributes to the MXL language as needed, ensuring that it remains relevant and adaptable to the evolving requirements of creating digital personalities.
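The emotional-context point above, edges annotated with Plutchik’s eight emotions, can be sketched as follows.  Note that the element and attribute names here are NOT the real MXL schema defined earlier in the book; they are hypothetical stand-ins to illustrate the idea of emotion-weighted edges serialized as XML:

```python
# Hypothetical sketch of a graph edge carrying an eight-value Plutchik
# emotion vector, serialized as XML. The schema shown is illustrative
# only and does not reproduce the actual MXL element names.

import xml.etree.ElementTree as ET

def emotion_edge(src, rel, dst, emotions):
    edge = ET.Element("edge", source=src, relation=rel, target=dst)
    for name, value in emotions.items():
        ET.SubElement(edge, "emotion", name=name, value=str(value))
    return edge

edge = emotion_edge("alan", "REMEMBERS", "bletchley_park",
                    {"joy": 0.6, "trust": 0.8, "fear": 0.1, "surprise": 0.0,
                     "sadness": 0.3, "disgust": 0.0, "anger": 0.0,
                     "anticipation": 0.2})
print(ET.tostring(edge, encoding="unicode"))
```

Because the schema is explicit, any consumer (a graph loader, a validator, or a training pipeline) can parse the same edge and recover both the relationship and its emotional weighting.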

Standardizing a markup language like MXL for defining graph data with contextual memories and emotional relationships is essential for creating digital personalities.  It promotes consistency, interoperability, rich representation, machine readability, and extensibility.  Incorporating emotional context into the graph data is crucial for developing digital personalities that can accurately emulate the emotional aspects of the target individual.

In conclusion, I hope this chapter showcases the possibilities even with current technology and its ongoing development.  It is difficult to overstate the impact these changes will have on society, and it is our responsibility to make the best use of these technologies, which we must do together and ideally with more maturity than we have shown in the past as a species.