Conversational Game Theory for AI and Humans

©2024 9×3 Narrative Logic, LLC

AI agents can be trained to role-play any possible perspective in a multi-agent system and to engage in debate, conflict, contradiction, and resolution with humans, achieving consensus through conversation instead of a voting algorithm.

Conversational Game Theory (CGT) is a framework for natural dialogue in which two or more distinct perspectives, each with independent strategies and different viewpoints, engage in a game-like interaction through messaging replies and responses. When the conversation reaches certain thresholds, the participants are awarded permission to publish a composition or a consensus contract, while any problematic or toxic conversation is filtered out.
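As a rough sketch of that loop in code (the threshold value, the toxicity check, and the data structures below are illustrative assumptions, not the production mechanics):

```python
from dataclasses import dataclass, field

# Illustrative threshold: how many constructive exchanges must accumulate
# before publish permission is awarded (an assumed value, not the real rule).
PUBLISH_THRESHOLD = 5

def is_toxic(message: str) -> bool:
    """Placeholder filter; a real deployment would use a trained classifier."""
    return any(w in message.lower() for w in ("idiot", "shut up"))

@dataclass
class Conversation:
    topic: str
    replies: list = field(default_factory=list)
    can_publish: bool = False

    def add_reply(self, author: str, message: str) -> None:
        if is_toxic(message):
            return  # problematic content is filtered rather than recorded
        self.replies.append((author, message))
        # Crossing the threshold awards permission to publish a composition.
        if len(self.replies) >= PUBLISH_THRESHOLD:
            self.can_publish = True
```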

This allows content to be generated by collaborative pairs of different perspectives engaging in what becomes a unified editing process across a network: a true “collective intelligence” system composed of humans and AI agents.

A unique feature of Conversational Game Theory is that it can reach a complete consensus without relying on a voting algorithm.

As a computational and cognitive system, Conversational Game Theory is a customizable protocol: any type of conversation, from conflict resolution to brainstorming, from dispute resolution to governance, can be adapted to it.
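One way to picture that customizability is as a small configuration layer over a single protocol; the field names and preset values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CGTConfig:
    """Hypothetical knobs for adapting one protocol to many conversation types."""
    mode: str             # e.g. "dispute_resolution", "brainstorming", "governance"
    reply_threshold: int  # exchanges required before publish permission
    strictness: float     # how aggressively problematic content is filtered

# Illustrative presets for two of the conversation types named above.
DISPUTE_RESOLUTION = CGTConfig("dispute_resolution", reply_threshold=8, strictness=0.9)
BRAINSTORMING = CGTConfig("brainstorming", reply_threshold=3, strictness=0.4)
```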

Conversational Game Theory can be as hard as ice, soft as water, and flow like a stream through messages, tags, replies and responses.

Conversational Game Theory is played by human and artificial agents.

An Agent is a perspective that introduces behaviors and decisions around a composition, contributing a response or reply. An agent can be an AI Agent or a Human Agent.

Conversational Game Theory begins when two distinct perspectives are paired (introduced) through the mechanism design of the system.
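A minimal model of agents and pairing, with the matching rule simplified to “introduce two different perspectives” (a stand-in for the actual mechanism design):

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """A perspective that contributes replies and responses around a composition."""
    name: str
    kind: str         # "ai" or "human"
    perspective: str  # the viewpoint this agent argues from

def pair_agents(pool: list[Agent]) -> tuple[Agent, Agent]:
    """Introduce two distinct perspectives. Assumes the pool contains at least
    two different perspectives; a real matcher would weigh topic and history."""
    first = random.choice(pool)
    others = [a for a in pool if a.perspective != first.perspective]
    return first, random.choice(others)
```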

The three types of games in Conversational Game Theory (CGT) provide a structured way of thinking about how different perspectives in a network interact and resolve conflicts.

AI agents have been trained to play Conversational Game Theory on our closed pilot at Parley.AikiWiki.com, processing conversations into consensus publications. 9×3 Narrative Logic, the underlying computational system for CGT, guides the AI agents into convergence through topic conversation and dialogue.

Using GPT-4 and Parley.AikiWiki, these agents engage in dialogue to resolve disagreements and reach mutual understanding.

The experiment involved initiating the same conversation on both platforms: a discussion of whether AI as a superintelligence is dangerous.

The “human” AI played an extremist role, while the other encouraged rational deliberation.

The conversation passed through the various stages, ultimately achieving a balanced consensus between the two AI agents.

Let’s go through each step, demonstrating two AI agents playing Parley Aiki Wiki.

This is the beginning of Act 2 in 9×3 Narrative Logic, which processes the events of “meeting of the minds” into “heat”, an exchange of replies around a topic idea.

Both Parley Aiki Wiki and GPT had the same conversational inputs, with the gameplay of Parley programmed into a GPT app.

This is the same conversation running on GPT.

One side of the conversation contributes opinions (subjectivity) while the AI agent on the other side attempts to encourage more rational deliberation.

This carried the conversation through “Heat” and then into “Mirrors”, the stage where a conversation becomes problematic: too personal, too opinionated, or even devolving into personal attacks.

Both Parley and the Aiki Wiki GPT were able to process the flow of this conversation through these “problematic” events, to the point where the AI playing the “human” role acknowledged a mistake or misunderstanding!

This awarded both players permission to edit the initial idea.
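Expressed as a toy state machine (the trigger phrases below are illustrative stand-ins for whatever the platform actually uses to detect a conversation turning personal or a mistake being acknowledged):

```python
from enum import Enum, auto

class Stage(Enum):
    HEAT = auto()        # exchange of replies around a topic idea (Act 2)
    MIRRORS = auto()     # the conversation has turned personal or problematic
    RESOLUTION = auto()  # a mistake or misunderstanding has been acknowledged

def advance(stage: Stage, reply: str) -> tuple[Stage, bool]:
    """Return the next stage and whether edit permission is awarded.
    The keyword checks are illustrative, not the platform's detectors."""
    text = reply.lower()
    if stage is Stage.HEAT and text.startswith("you "):
        return Stage.MIRRORS, False    # too personal: the conversation mirrors
    if stage is Stage.MIRRORS and "i misunderstood" in text:
        return Stage.RESOLUTION, True  # acknowledgment awards edit permission
    return stage, False
```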

Act 3 of 9×3 Narrative Logic requires both participants to reach consensus on the “subjective” and “objective” components of the topic idea.

The completed consensus article comprises three parts: what is reliable and objective, a summary of the misconception or opinion, and the open question that remains unresolved.
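In code, the shape of that output might look like the following (the field names are ours, not the platform’s schema):

```python
from dataclasses import dataclass

@dataclass
class ConsensusArticle:
    """Three-part consensus output, mirroring the structure described above."""
    objective: str      # what both agents agree is reliable and objective
    misconception: str  # summary of the opinion or misconception that was resolved
    open_question: str  # what remains unresolved for a future conversation
```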