Debate Bots
A terminal application that enables two LLMs to engage in structured debates on any topic with intelligent memory management, automatic saving, and comprehensive logging.
Features
Core Debate Features
- Two LLM Agents: Configure two separate LLM agents with independent memory and system prompts
- Auto-Named Agents: Agents automatically named after their models (e.g., "Claude-3-Haiku" vs "Llama-3.1-8B-Instruct")
- Multiple Providers: Support for OpenRouter (cloud-based) and LMStudio (local)
- Structured Debates: Automatic position assignment (for/against) via coin flip
- Full Context Awareness: Agents see complete debate history, not just the last response
- Interactive Control: Pause after configurable rounds for user input
- Beautiful UI: Rich terminal interface with side-by-side display, color-coded positions, and formatted output
Advanced Features
- Streaming Responses: Real-time streaming of LLM responses with live side-by-side display and tokens/second metrics
- Automatic Memory Management: Token counting and automatic memory truncation to prevent context overflow
- Auto-Save: Debates automatically saved after each round (configurable)
- Response Validation: Ensures agents provide valid, non-empty responses
- Statistics Tracking: Real-time tracking of response times, token usage, memory consumption, and streaming speeds
- Comprehensive Logging: Optional file and console logging with configurable levels
- CLI Arguments: Control all aspects via command-line flags
- Environment Variables: Secure API key management via `.env` files
- Retry Logic: Automatic retry with exponential backoff for transient failures
- Error Handling: Graceful error handling with user-friendly messages
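The retry behavior above can be pictured with a small helper; the attempt count, delay values, and bare `Exception` catch here are illustrative assumptions, not the app's exact settings:

```python
import random
import time


def retry_with_backoff(func, max_attempts=3, base_delay=1.0, jitter=False):
    """Call func(), retrying on failure with exponentially growing delays.

    The delay before retry N is base_delay * 2**N, optionally with jitter
    to avoid synchronized retries against the same endpoint.
    """
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            delay = base_delay * (2 ** attempt)
            if jitter:
                delay += random.uniform(0, base_delay)
            time.sleep(delay)
```

In the real provider code, the caught exception type would be narrowed to transient network/API errors rather than all exceptions.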
Requirements
- Python 3.8+
- OpenRouter API key (if using OpenRouter) - Get one at openrouter.ai
- LMStudio running locally (if using LMStudio) - Download at lmstudio.ai
Quick Start Guide (For Beginners)
New to coding? Follow these simple steps to get your debate running in 5 minutes:
Step 1: Get an API Key
- Go to openrouter.ai/keys
- Sign up for a free account
- Click "Create Key" and copy your API key (it looks like `sk-or-v1-...`)
- Keep this somewhere safe - you'll need it in a moment!
Step 2: Download and Set Up
- Download this project (green "Code" button → "Download ZIP")
- Unzip the folder anywhere on your computer
- Open Terminal (Mac/Linux) or Command Prompt (Windows)
- Navigate to the folder: `cd path/to/debate-bots`
Step 3: Install Python Dependencies
Option A - Easy Way (Mac/Linux): Just run the included setup script:

```shell
chmod +x run.sh
./run.sh
```
This will automatically install everything and start the app!
Option B - Manual Way (All platforms): Run this command:

```shell
pip install -r requirements.txt
```
This installs all the necessary software the app needs.
Note: If you used Option A, skip to Step 4 - the script will ask you for setup details!
Step 4: Configure Your API Key
The easiest way:
- Create a file called `.env` in the debate-bots folder
- Open it with any text editor (Notepad, TextEdit, etc.)
- Add this line, replacing the value with your actual key: `OPENROUTER_API_KEY=sk-or-v1-your-key-here`
- Save the file
Step 5: Create Your Configuration
- Copy the file `config.example.yaml`
- Rename the copy to `config.yaml`
- Open `config.yaml` in a text editor
- You'll see two agents - you can leave them as-is or change the models/prompts
- Save the file (no need to add your API key here - it's already in `.env`!)
Step 6: Start Your First Debate!
Mac/Linux users:

```shell
./run.sh
```

Windows users (or if you prefer):

```shell
python -m src.main
```
The app will ask you:
- What topic to debate? (e.g., "Pineapple belongs on pizza")
- After each round, you can continue, give instructions, or quit
That's it! You'll see the two AI agents debate in real-time. 🎉
Common Issues
- "Command not found": Make sure Python is installed. Try `python3` instead of `python`, or use `./run.sh` on Mac/Linux
- "No module named...": Run `pip install -r requirements.txt` again, or just use `./run.sh`, which handles this automatically
- "Permission denied" for run.sh: Run `chmod +x run.sh` first to make it executable
- "API key invalid": Double-check you copied the full key from OpenRouter into `.env`
- Nothing streams: That's okay! The debate still works; just disable streaming with `--no-streaming`
Detailed Installation (For Advanced Users)
- Clone or download this repository, then enter it: `cd debate-bots`
- Install dependencies: `pip install -r requirements.txt`
- Configure your agents (see Configuration section below)
Configuration
Option 1: Environment Variables (Recommended for Security)
- Copy the example environment file: `cp .env.example .env`
- Edit `.env` with your API keys:

```env
# .env file
OPENROUTER_API_KEY=your_openrouter_api_key_here
LOG_LEVEL=INFO
LOG_FILE=debates.log  # Optional: enable file logging
```

- Create a minimal `config.yaml` without API keys:

```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku
  system_prompt: You are a logical and evidence-based debater.
agent2:
  provider: openrouter
  model: meta-llama/llama-3.1-8b-instruct
  system_prompt: You are a persuasive and rhetorical debater.
```
The application will automatically use API keys from environment variables, keeping them secure.
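Env-first key resolution like this can be sketched in a few lines; the helper name and the exact precedence are assumptions for illustration, not a copy of the app's config code:

```python
import os
from typing import Optional


def resolve_api_key(config: dict, env_var: str = "OPENROUTER_API_KEY") -> Optional[str]:
    """Prefer the environment variable; fall back to an api_key field in config.

    Returning None signals that no key was found anywhere, so the caller
    can prompt the user or fail with a clear message.
    """
    return os.environ.get(env_var) or config.get("api_key")
```

With this precedence, a key accidentally left in `config.yaml` never overrides the one in your environment.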
Option 2: Config File Only
Copy the example configuration: `cp config.example.yaml config.yaml`

Edit `config.yaml` with your settings:

```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku
  system_prompt: You are a logical and evidence-based debater.
  api_key: your-api-key-here  # Not recommended - use .env instead
agent2:
  provider: openrouter
  model: meta-llama/llama-3.1-8b-instruct
  system_prompt: You are a persuasive and rhetorical debater.
  api_key: your-api-key-here
```
Note: The application will warn you about storing API keys in config files and suggest using environment variables instead.
Option 3: Interactive Setup
Simply run the application without any configuration, and it will prompt you for all necessary information:
```shell
python -m src.main
```
Usage
Basic Usage
Run the application:
```shell
python -m src.main
```
The application will:
- Load or prompt for configuration
- Ask for a debate topic
- Randomly assign positions (for/against) to each agent
- Run exchanges between the agents
- Pause after each round and show statistics
- Automatically save the debate (unless disabled)
Command-Line Options
The application supports extensive CLI arguments:
```shell
python -m src.main [OPTIONS]
```

Options:

- `--config, -c PATH` - Path to configuration file (default: config.yaml)
- `--topic, -t TEXT` - Debate topic (skips interactive prompt)
- `--exchanges, -e NUMBER` - Exchanges per round (default: 10)
- `--no-auto-save` - Disable automatic saving after each round
- `--no-streaming` - Disable streaming responses (show complete responses at once instead of real-time streaming)
- `--log-level LEVEL` - Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
- `--log-file PATH` - Log to file (default: console only)
- `--max-memory-tokens NUMBER` - Maximum tokens to keep in agent memory
Examples:
```shell
# Basic usage with defaults
python -m src.main

# Specify topic and exchanges
python -m src.main --topic "AI is beneficial" --exchanges 5

# Enable debug logging to file
python -m src.main --log-level DEBUG --log-file debug.log

# Disable auto-save for manual control
python -m src.main --no-auto-save

# Disable streaming for slower connections
python -m src.main --no-streaming

# Use custom config and memory limit
python -m src.main --config my_config.yaml --max-memory-tokens 50000

# Quick debate with all options
python -m src.main -t "Climate change" -e 3 --log-level INFO
```
User Options (After Each Round)
After each round, the application displays:
- Brief statistics (time, exchanges, average response time)
- Interactive menu with options:
- Continue - Run another round of exchanges
- Settle - Provide your conclusion to end the debate
- Give instructions - Provide custom instructions to both agents
- Save and quit - End the debate
Statistics Display
After completing the debate, you'll see comprehensive statistics:
- Total Duration: How long the debate lasted
- Total Exchanges: Number of argument exchanges
- Response Times: Average, minimum, and maximum
- Memory Usage: Token count and percentage for each agent
Provider Configuration
OpenRouter
```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku  # or any OpenRouter model
  api_key: your-openrouter-api-key
  system_prompt: Your custom prompt here
```
Popular OpenRouter models:
- `anthropic/claude-3-haiku` (fast and affordable)
- `anthropic/claude-3-sonnet` (balanced)
- `meta-llama/llama-3.1-8b-instruct` (open source)
- `google/gemini-pro` (Google's model)
LMStudio
```yaml
agent1:
  provider: lmstudio
  model: your-loaded-model-name
  base_url: http://localhost:1234/v1  # default LMStudio URL
  system_prompt: Your custom prompt here
```
Before using LMStudio:
- Download and install LMStudio
- Load a model in LMStudio
- Start the local server (usually on port 1234)
- Use the model name as shown in LMStudio
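Before starting a debate, you can confirm the local server is reachable by querying the OpenAI-compatible `/v1/models` endpoint that LMStudio's server exposes. This helper is an illustrative sketch, not part of the project's code:

```python
import json
from urllib.request import urlopen


def extract_model_ids(payload: dict):
    """Parse the OpenAI-style models response: {"data": [{"id": ...}, ...]}."""
    return [entry["id"] for entry in payload.get("data", [])]


def list_local_models(base_url: str = "http://localhost:1234/v1"):
    """Fetch and return the model ids currently served by LMStudio."""
    with urlopen(base_url.rstrip("/") + "/models", timeout=5) as resp:
        return extract_model_ids(json.load(resp))
```

If `list_local_models()` raises a connection error, the server isn't running; if it returns an empty list, no model is loaded yet.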
Example Debate Topics
- "Artificial Intelligence will be net positive for humanity"
- "Remote work is better than office work"
- "Nuclear energy should replace fossil fuels"
- "Universal Basic Income should be implemented"
- "Space exploration is worth the investment"
Project Structure
```
debate-bots/
├── src/
│   ├── __init__.py
│   ├── main.py               # Application entry point with CLI
│   ├── agent.py              # Agent class with memory management
│   ├── debate.py             # Debate orchestrator with statistics
│   ├── config.py             # Configuration with env var support
│   ├── ui.py                 # Terminal UI with statistics display
│   ├── logger.py             # Logging configuration
│   ├── constants.py          # Application constants
│   ├── exceptions.py         # Custom exception classes
│   ├── utils/
│   │   ├── __init__.py
│   │   └── token_counter.py  # Token counting and management
│   └── providers/
│       ├── __init__.py
│       ├── base.py           # Base provider with retry logic
│       ├── openrouter.py     # OpenRouter with error handling
│       └── lmstudio.py       # LMStudio with error handling
├── tests/                    # Test suite
│   ├── __init__.py
│   ├── conftest.py           # Pytest fixtures
│   ├── test_config.py        # Configuration tests
│   ├── test_agent.py         # Agent tests
│   └── test_token_counter.py # Token counter tests
├── debates/                  # Saved debate histories (auto-created)
├── .env                      # Environment variables (gitignored)
├── .env.example              # Example environment variables
├── .gitignore                # Git ignore patterns
├── config.yaml               # Your configuration (gitignored)
├── config.example.yaml       # Example configuration
├── requirements.txt          # Python dependencies
├── README.md                 # This file
└── product-brief.md          # Original product specification
```
Saved Debates
Debates are automatically saved (unless disabled with --no-auto-save) in the debates/ directory as JSON files with comprehensive statistics:
```json
{
  "topic": "Your debate topic",
  "timestamp": "2024-01-15T10:30:00",
  "agents": {
    "agent1": {"name": "Agent 1", "position": "for"},
    "agent2": {"name": "Agent 2", "position": "against"}
  },
  "exchanges": [
    {
      "exchange": 1,
      "agent": "Agent 1",
      "position": "for",
      "content": "Opening argument..."
    }
  ],
  "total_exchanges": 20,
  "statistics": {
    "total_exchanges": 20,
    "elapsed_time_seconds": 245.3,
    "average_response_time_seconds": 12.2,
    "agent1_memory": {
      "message_count": 42,
      "current_tokens": 15234,
      "token_usage_percentage": 15.2
    },
    "agent2_memory": {
      "message_count": 42,
      "current_tokens": 14987,
      "token_usage_percentage": 15.0
    }
  }
}
```
Converting Debates to Markdown
Use the included converter script:
```shell
# Convert a single debate
python json_to_markdown.py debates/debate_topic_20240115.json

# Convert all debates
python json_to_markdown.py --all
```
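If you want to adapt the output format, the core of such a converter is small. This sketch works from the JSON schema documented above; it is not the actual code of `json_to_markdown.py`, and the heading layout is an assumption:

```python
import json
from pathlib import Path


def debate_to_markdown(debate: dict) -> str:
    """Render a saved debate dict (schema shown above) as a Markdown transcript."""
    lines = [f"# {debate['topic']}", ""]
    for ex in debate.get("exchanges", []):
        lines.append(f"## Exchange {ex['exchange']}: {ex['agent']} ({ex['position']})")
        lines.append("")
        lines.append(ex["content"])
        lines.append("")
    return "\n".join(lines)


def convert_file(json_path: str) -> Path:
    """Write a .md file next to the source .json and return its path."""
    src = Path(json_path)
    out = src.with_suffix(".md")
    out.write_text(debate_to_markdown(json.loads(src.read_text())), encoding="utf-8")
    return out
```

Anything else in the saved file (statistics, timestamps) can be appended the same way by reading the corresponding keys.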
Customization
System Prompts
System prompts define your agent's personality and debate style. Examples:
Logical debater:

```
You are a skilled debater who values logic, evidence, and rational argumentation.
You cite sources and use structured reasoning.
```

Emotional debater:

```
You are a persuasive speaker who uses storytelling, analogies, and emotional
appeals to make your points compelling and relatable.
```

Devil's advocate:

```
You are a contrarian thinker who finds flaws in arguments and plays devil's
advocate, always questioning assumptions.
```
Exchanges Per Round
Use the `--exchanges` CLI argument:

```shell
python -m src.main --exchanges 20
```

Or add to your config file:

```yaml
# In config.yaml (if you modify main.py to read this setting)
exchanges_per_round: 20
```
Memory Management
Control agent memory limits:
```shell
# Limit memory to 50,000 tokens per agent
python -m src.main --max-memory-tokens 50000
```
Agents will automatically truncate old messages when approaching the limit while preserving the system message.
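That truncation policy can be pictured with a short sketch. The chars/4 token heuristic and the "system message lives at index 0" convention are assumptions for illustration, not the app's real tokenizer or memory layout:

```python
def truncate_memory(messages, max_tokens, count_tokens=lambda m: len(m["content"]) // 4):
    """Drop the oldest non-system messages until the total fits max_tokens.

    The system message (index 0 by convention here) is always preserved,
    so the agent never loses its persona or debate instructions.
    """
    system, rest = messages[:1], list(messages[1:])
    while rest and sum(count_tokens(m) for m in system + rest) > max_tokens:
        rest.pop(0)  # discard the oldest exchange first
    return system + rest
```

Dropping from the front keeps the most recent arguments in context, which matters most for a coherent rebuttal.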
Testing
The project includes a comprehensive test suite:
```shell
# Install test dependencies (already in requirements.txt)
pip install pytest pytest-cov

# Run all tests
pytest

# Run with coverage report
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/test_config.py

# Run with verbose output
pytest -v
```
Test coverage includes:
- Configuration management and environment variables
- Agent memory management and truncation
- Token counting and limits
- Provider error handling
- Custom exceptions
Troubleshooting
"Cannot connect to OpenRouter"
- Check your API key is correct
- Verify you have credits on your OpenRouter account
- Check your internet connection
"Cannot connect to LMStudio"
- Ensure LMStudio is running
- Verify the local server is started in LMStudio
- Check the base_url in your config (default: http://localhost:1234/v1)
- Confirm a model is loaded in LMStudio
Debates are too short/long
- Use the `--exchanges` CLI argument to adjust exchanges per round
- Use the "continue" option to extend debates
- Use custom instructions to guide the discussion
Memory/Token Issues
- Monitor memory usage in the statistics display
- Adjust `--max-memory-tokens` if agents are truncating too aggressively
- Check logs with `--log-level DEBUG` for detailed token information
Logging Issues
- Set `--log-level DEBUG` for detailed troubleshooting
- Use `--log-file` to save logs for later analysis
- Check the console for real-time error messages
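The two flags map onto Python's standard `logging` module roughly like this; the logger name and format string are generic assumptions, not the project's `logger.py`:

```python
import logging
from typing import Optional


def setup_logging(level: str = "INFO", log_file: Optional[str] = None) -> logging.Logger:
    """Configure a logger mirroring the --log-level / --log-file flags.

    With no log_file, messages go to the console; with one, they go to
    that file instead. Call this once at startup to avoid stacking handlers.
    """
    logger = logging.getLogger("debate-bots")
    logger.setLevel(getattr(logging, level.upper()))
    handler = logging.FileHandler(log_file) if log_file else logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```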
License
This project is open source. Feel free to modify and distribute.
Contributing
Contributions welcome! The codebase now includes:
✅ Already Implemented:
- Comprehensive error handling and logging
- Automatic memory management with token counting
- Environment variable support for security
- CLI arguments for all options
- Auto-save functionality
- Statistics tracking and display
- Test infrastructure with pytest
- Markdown export (json_to_markdown.py)
- Retry logic with exponential backoff
- Response validation
Ideas for Future Enhancements:
- Support for more LLM providers (Anthropic direct API, OpenAI, Hugging Face)
- Web interface (FastAPI/Flask + React)
- Multi-agent debates (3+ participants)
- Judge/arbiter agent that scores arguments
- Debate templates for different formats (Oxford, Lincoln-Douglas, etc.)
- Export to PDF
- Debate replay/playback functionality
- Voice synthesis for debate audio
Development Setup:
- Clone the repository
- Install dependencies: `pip install -r requirements.txt`
- Run tests: `pytest`
- Make your changes
- Add tests for new features
- Run tests again to ensure everything passes
- Submit a pull request
Support
For issues and questions, please open an issue on the GitHub repository.