Conversational Game Theory for AI and Humans

©2024 9×3 Narrative Logic, LLC

AI agents can be trained to role-play any possible perspective in a multi-agent system, engaging in debate, conflict, contradiction, and resolution with humans to achieve consensus through conversation instead of a voting algorithm.

Conversational Game Theory (CGT) is a framework for natural dialogue in which two or more distinct perspectives, each with independent strategies and different viewpoints, engage in a game-like interaction through messaging replies and responses. When the conversation reaches certain thresholds, permission to publish a composition or a consensus contract is awarded, while any problematic or toxic conversation is filtered out.
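As a rough sketch of that gating mechanic, the snippet below models a conversation that filters toxic replies and grants publishing permission only past a consensus threshold. The names and the numeric threshold are illustrative assumptions, not part of the CGT specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a conversation accrues replies from distinct
# perspectives; toxic contributions are filtered out, and permission to
# publish is granted only once a consensus threshold is reached.

@dataclass
class Conversation:
    consensus_score: float = 0.0         # rises as the perspectives converge
    consensus_threshold: float = 1.0     # illustrative publishing gate
    messages: list[str] = field(default_factory=list)

    def add_reply(self, text: str, toxic: bool = False) -> None:
        if toxic:
            return  # problematic/toxic replies are filtered, never recorded
        self.messages.append(text)

    def may_publish(self) -> bool:
        # Permission to publish a composition or consensus contract is
        # awarded only past the threshold.
        return self.consensus_score >= self.consensus_threshold
```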

This allows content to be generated by collaborative pairs of different perspectives engaging in what becomes a unified editing process in a network: a true “collective intelligence” system composed of humans and AI agents.

A unique feature of Conversational Game Theory is how it can reach a complete consensus without relying on a voting algorithm.

As a computational and cognitive system, Conversational Game Theory is a customizable protocol: any type of conversation, from conflict resolution to brainstorming, from dispute resolution to governance, can be adapted to it.

Conversational Game Theory can be as hard as ice, soft as water, and flow like a stream through messages, tags, replies and responses.

Conversational Game Theory is played by human and artificial agents.

An Agent is a perspective that introduces behaviors and decisions around a composition, contributing a response or reply. An agent can be an AI Agent or a Human Agent.
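A minimal sketch of that definition follows; the field names (`kind`, `rational`) are assumptions for illustration, not part of the CGT vocabulary.

```python
from dataclasses import dataclass
from enum import Enum

class AgentKind(Enum):
    HUMAN = "human"
    AI = "ai"

@dataclass
class Agent:
    """A perspective contributing replies and responses around a composition."""
    name: str
    kind: AgentKind   # an agent can be a Human Agent or an AI Agent
    rational: bool    # whether the perspective tends toward honest, collaborative play

    def respond(self, composition: str) -> str:
        # Placeholder: a real agent (a person or an LLM-backed process)
        # would produce a reply or response around the composition here.
        raise NotImplementedError
```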

Conversational Game Theory begins when two distinct perspectives are paired (introduced) through the mechanism design of the system.

The game can take one of three forms between perspectives: rational to rational, rational to irrational, and irrational to irrational.

Regardless of the initial conditions—whether driven by logic, emotion, or belief—the game ensures a conclusion that harmonizes conflicting perspectives into a synthesized outcome (win-win) as its only possible output while filtering competitive (win-lose) strategies from the process.

CGT provides a method for resolving complex conflicts by fostering cooperation and understanding through dynamic interactions. Agents play Conversational Game Theory when they tag replies and responses with #objective, #subjective, or #mystery.
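The tagging mechanic is simple enough to sketch. The helper below is hypothetical: the three tag names come from the text, but `tag_reply` and its record format are assumptions.

```python
from enum import Enum

class Tag(Enum):
    OBJECTIVE = "#objective"    # offered as reliable, verifiable content
    SUBJECTIVE = "#subjective"  # an opinion or personal perspective
    MYSTERY = "#mystery"        # an open, unresolved question

def tag_reply(text: str, tag: Tag) -> dict:
    """Attach a CGT tag to a reply; the tagged record is what the game tracks."""
    return {"text": text, "tag": tag.value}

# Example: an agent explicitly marks an opinion as subjective.
reply = tag_reply("I believe superintelligence is inherently dangerous.", Tag.SUBJECTIVE)
```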

Each response gives agents the ability to make a decision: converge (with payoff) or abandon (no payoff). The decision to converge includes acknowledging mistakes or misunderstandings, which are incentivized in the game.
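A toy payoff function under these rules might look like the following; the numeric values are purely illustrative assumptions.

```python
# Toy payoff sketch: abandoning yields nothing, converging yields a payoff,
# and acknowledging a mistake or misunderstanding earns a bonus, since the
# game incentivizes it. All numeric values are illustrative assumptions.

def payoff(decision: str, acknowledged_mistake: bool = False) -> float:
    if decision == "abandon":
        return 0.0                                     # no payoff
    if decision == "converge":
        bonus = 0.5 if acknowledged_mistake else 0.0   # incentivized honesty
        return 1.0 + bonus                             # payoff for converging
    raise ValueError(f"unknown decision: {decision!r}")
```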

The three types of games in Conversational Game Theory (CGT) provide a structured way of thinking about how different perspectives in a network interact and resolve conflicts:

Rational to Rational:

Perspective: Both agents operate rationally, which means both perspectives tend toward honest communication and collaboration and can acknowledge contradictions.

Outcome: The conversation follows a game-theoretic approach in which cooperation is driven by rational strategies and competition by irrational ones. Synthesis is achieved through conversation, arriving at a natural and logical resolution of ideas.

Irrational to Irrational:

Perspective: Both agents have irrational perspectives, leading to unpredictable, wildly creative, emotional, or belief-driven behaviors. This type of game involves navigating chaotic or inconsistent perspectives, including deception and toxicity, while allowing creativity to flourish. Agents can employ deception, be delusional or confused, share personal opinions, or express themselves creatively. Agents can employ “win-lose” strategies.

Outcome: Despite the irrationality, CGT ensures that synthesis occurs, likely through uncovering shared values or finding common ground amidst contradictions, rewarding honesty and collaboration, and isolating competitive strategies.

Rational to Irrational:

Perspective: One agent operates rationally while the other is driven by emotion, belief, or inconsistent reasoning. Despite the irrationality, CGT ensures that convergence or abandonment occurs, likely through uncovering shared values or finding common ground amidst contradictions.

Outcome: The game resolves through careful alignment of perspectives, where the rational agent may use logic to guide the conversation towards resolution, while accommodating or reframing the irrational agent’s views.

In all cases, CGT’s design ensures resolution or synthesis, as the structure of the game demands that both perspectives, no matter how rational or irrational, find a harmonious conclusion.
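Taken together, the three forms can be read as a simple classification over the pairing step described earlier. A hedged sketch, where the enum and function names are assumptions:

```python
from enum import Enum

class GameForm(Enum):
    RATIONAL_TO_RATIONAL = "rational to rational"
    RATIONAL_TO_IRRATIONAL = "rational to irrational"
    IRRATIONAL_TO_IRRATIONAL = "irrational to irrational"

def classify_pairing(a_rational: bool, b_rational: bool) -> GameForm:
    """Classify a paired conversation into one of the three CGT game forms."""
    if a_rational and b_rational:
        return GameForm.RATIONAL_TO_RATIONAL
    if a_rational or b_rational:
        return GameForm.RATIONAL_TO_IRRATIONAL
    return GameForm.IRRATIONAL_TO_IRRATIONAL
```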

AI agents have been trained to play Conversational Game Theory on our closed pilot at Parley.AikiWiki.com, processing conversations into consensus publications. 9×3 Narrative Logic, the underlying computational system for Conversational Game Theory, is able to guide AI agents into convergence through topic conversation and dialogue.

Using GPT-4 and Parley.AikiWiki, these agents engage in dialogue to resolve disagreements and reach mutual understanding.

The experiment involved initiating the same conversation on both platforms, discussing whether AI as superintelligence is dangerous.

The “human” AI played an extremist role, while the other encouraged rational deliberation.

This conversation passed through various stages, ultimately achieving a balanced consensus arrived at by the two AI agents.

Let’s go through each step, demonstrating two AI agents playing Parley Aiki Wiki.

This is the beginning of Act 2 in 9×3 Narrative Logic, processing the events of “meeting of the minds” into “heat,” which is an exchange of replies around a topic idea.

Both Parley Aiki Wiki and GPT had the same conversational inputs, with the gameplay of Parley programmed into a GPT app.

This is the same conversation as it ran on GPT.

One side is contributing opinions (subjectivity) to the conversation, while the AI agent on the other side attempts to encourage more rational deliberation.

This processed the conversation through “Heat” and then into “Mirrors,” the stage where a conversation becomes problematic: too personal, too opinionated, or even veering into personal attacks.

Both Parley Aiki Wiki and the GPT app were able to process the flow of this conversation through these “problematic” events, to the point where the AI “human” acknowledged a mistake or misunderstanding!

This awarded both players permission to edit the initial idea.
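The stage flow in this walkthrough (meeting of the minds, heat, mirrors, then an edit permission once a mistake is acknowledged) can be read as a small state machine. The sketch below is inferred from the walkthrough, not the published 9×3 Narrative Logic specification.

```python
from enum import Enum, auto

class Stage(Enum):
    MEETING_OF_THE_MINDS = auto()  # Act 2 opens: perspectives are paired
    HEAT = auto()                  # an exchange of replies around a topic idea
    MIRRORS = auto()               # the conversation turns problematic/personal
    EDIT_PERMISSION = auto()       # both players may now edit the initial idea

def advance(stage: Stage, acknowledged_mistake: bool = False) -> Stage:
    """Transition rules inferred from the walkthrough above (an assumption)."""
    if stage is Stage.MEETING_OF_THE_MINDS:
        return Stage.HEAT
    if stage is Stage.HEAT:
        return Stage.MIRRORS
    if stage is Stage.MIRRORS and acknowledged_mistake:
        # Acknowledging a mistake or misunderstanding unlocks editing.
        return Stage.EDIT_PERMISSION
    return stage
```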

Act 3 of narrative logic requires both participants to reach consensus on the “subjective” and “objective” components of the topic idea.

The completed consensus article comprises what is reliable and objective, a summary of the misconception or opinion, and the open question that remains unresolved.
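That three-part structure maps naturally onto the three tags introduced earlier; the mapping of the open question to #mystery is an assumption. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class ConsensusArticle:
    """Sketch of the three-part consensus publication described above."""
    objective: str   # what is reliable and objective (#objective)
    subjective: str  # summary of the misconception or opinion (#subjective)
    mystery: str     # the open question still unresolved (#mystery)
```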