# Debate Bots

A terminal application that enables two LLMs to engage in structured debates on any topic with intelligent memory management, automatic saving, and comprehensive logging.

## Features

### Core Debate Features

- **Two LLM Agents**: Configure two separate LLM agents with independent memory and system prompts
- **Auto-Named Agents**: Agents automatically named after their models (e.g., "Claude-3-Haiku" vs "Llama-3.1-8B-Instruct")
- **Multiple Providers**: Support for OpenRouter (cloud-based) and LMStudio (local)
- **Structured Debates**: Automatic position assignment (for/against) via coin flip
- **Full Context Awareness**: Agents see the complete debate history, not just the last response
- **Interactive Control**: Pause after configurable rounds for user input
- **Beautiful UI**: Rich terminal interface with side-by-side display, color-coded positions, and formatted output

### Advanced Features

- **Streaming Responses**: Real-time streaming of LLM responses with live side-by-side display and tokens/second metrics
- **Automatic Memory Management**: Token counting and automatic memory truncation to prevent context overflow
- **Auto-Save**: Debates automatically saved after each round (configurable)
- **Response Validation**: Ensures agents provide valid, non-empty responses
- **Statistics Tracking**: Real-time tracking of response times, token usage, memory consumption, and streaming speeds
- **Comprehensive Logging**: Optional file and console logging with configurable levels
- **CLI Arguments**: Control all aspects via command-line flags
- **Environment Variables**: Secure API key management via `.env` files
- **Retry Logic**: Automatic retry with exponential backoff for transient failures
- **Error Handling**: Graceful error handling with user-friendly messages
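The retry behavior described above can be sketched roughly as follows. This is a minimal illustration of exponential backoff, not the project's actual `providers/base.py` implementation; the `send_with_retry` helper and the retry parameters are hypothetical:

```python
import time

def send_with_retry(send, max_retries=3, base_delay=1.0):
    """Call send(); on a transient failure, wait base_delay * 2**attempt and retry."""
    for attempt in range(max_retries + 1):
        try:
            return send()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Example: a flaky call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(send_with_retry(flaky, base_delay=0.01))  # prints "ok" after two retries
```

The key property is that each retry doubles the wait, so short outages are absorbed quickly while persistent failures still surface as errors.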
## Requirements

- Python 3.8+
- OpenRouter API key (if using OpenRouter) - Get one at [openrouter.ai](https://openrouter.ai/keys)
- LMStudio running locally (if using LMStudio) - Download at [lmstudio.ai](https://lmstudio.ai/)
## Quick Start Guide (For Beginners)

**New to coding?** Follow these simple steps to get your debate running in 5 minutes:

### Step 1: Get an API Key

1. Go to [openrouter.ai/keys](https://openrouter.ai/keys)
2. Sign up for a free account
3. Click "Create Key" and copy your API key (it looks like `sk-or-v1-...`)
4. Keep this somewhere safe - you'll need it in a moment!

### Step 2: Download and Set Up

1. Download this project (green "Code" button → "Download ZIP")
2. Unzip the folder anywhere on your computer
3. Open Terminal (Mac/Linux) or Command Prompt (Windows)
4. Navigate to the folder:

```bash
cd path/to/debate-bots
```

### Step 3: Install Python Dependencies

**Option A - Easy Way (Mac/Linux):**

Just run the included setup script:

```bash
chmod +x run.sh
./run.sh
```

*This will automatically install everything and start the app!*

**Option B - Manual Way (All platforms):**

Run this command:

```bash
pip install -r requirements.txt
```

*This installs all the necessary software the app needs.*

**Note:** If you used Option A, skip to Step 4 - the script will ask you for setup details!

### Step 4: Configure Your API Key

The easiest way:

1. Create a file called `.env` in the debate-bots folder
2. Open it with any text editor (Notepad, TextEdit, etc.)
3. Add this line, replacing with your actual key:

```
OPENROUTER_API_KEY=sk-or-v1-your-key-here
```

4. Save the file

### Step 5: Create Your Configuration

1. Copy the file `config.example.yaml`
2. Rename the copy to `config.yaml`
3. Open `config.yaml` in a text editor
4. You'll see two agents - you can leave them as-is or change the models/prompts
5. Save the file (no need to add your API key here - it's already in `.env`!)

### Step 6: Start Your First Debate!

**Mac/Linux users:**

```bash
./run.sh
```

**Windows users (or if you prefer):**

```bash
python -m src.main
```

The app will ask you:

- **What topic to debate?** (e.g., "Pineapple belongs on pizza")
- After each round, you can continue, give instructions, or quit

That's it! You'll see the two AI agents debate in real-time. 🎉

### Common Issues

- **"Command not found"**: Make sure Python is installed. Try `python3` instead of `python`, or use `./run.sh` on Mac/Linux
- **"No module named..."**: Run `pip install -r requirements.txt` again, or just use `./run.sh`, which handles this automatically
- **"Permission denied" for run.sh**: Run `chmod +x run.sh` first to make it executable
- **"API key invalid"**: Double-check that you copied the full key from OpenRouter into `.env`
- **Nothing streams**: The debate still works without streaming; run with `--no-streaming` to show complete responses instead

---
## Detailed Installation (For Advanced Users)

1. Clone or download this repository, then change into the project directory:

```bash
cd debate-bots
```

2. Install dependencies:

```bash
pip install -r requirements.txt
```

3. Configure your agents (see the Configuration section below)
## Configuration

### Option 1: Environment Variables (Recommended for Security)

1. Copy the example environment file:

```bash
cp .env.example .env
```

2. Edit `.env` with your API keys:

```bash
# .env file
OPENROUTER_API_KEY=your_openrouter_api_key_here
LOG_LEVEL=INFO
LOG_FILE=debates.log  # Optional: enable file logging
```

3. Create a minimal `config.yaml` without API keys:

```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku
  system_prompt: You are a logical and evidence-based debater.

agent2:
  provider: openrouter
  model: meta-llama/llama-3.1-8b-instruct
  system_prompt: You are a persuasive and rhetorical debater.
```

The application will automatically use API keys from environment variables, keeping them secure.
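The lookup order is simple: the environment wins, and the config file is only a fallback. A minimal sketch of that resolution logic (illustrative only; the real behavior lives in `src/config.py`, and the `resolve_api_key` helper shown here is hypothetical):

```python
import os

def resolve_api_key(agent_cfg, env_var="OPENROUTER_API_KEY"):
    """Prefer the environment variable; fall back to any config-file value."""
    return os.environ.get(env_var) or agent_cfg.get("api_key")

# With the variable set, the config file needs no key at all:
os.environ["OPENROUTER_API_KEY"] = "sk-or-v1-example"
print(resolve_api_key({"provider": "openrouter",
                       "model": "anthropic/claude-3-haiku"}))
```

In practice the `.env` file is loaded into the process environment first (commonly via `python-dotenv`'s `load_dotenv()`), after which a lookup like this sees the keys.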
### Option 2: Config File Only

Copy the example configuration:

```bash
cp config.example.yaml config.yaml
```

Edit `config.yaml` with your settings:

```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku
  system_prompt: You are a logical and evidence-based debater.
  api_key: your-api-key-here  # Not recommended - use .env instead

agent2:
  provider: openrouter
  model: meta-llama/llama-3.1-8b-instruct
  system_prompt: You are a persuasive and rhetorical debater.
  api_key: your-api-key-here
```

**Note**: The application will warn you about storing API keys in config files and suggest using environment variables instead.

### Option 3: Interactive Setup

Simply run the application without any configuration, and it will prompt you for all necessary information:

```bash
python -m src.main
```
## Usage

### Basic Usage

Run the application:

```bash
python -m src.main
```

The application will:

1. Load or prompt for configuration
2. Ask for a debate topic
3. Randomly assign positions (for/against) to each agent
4. Run exchanges between the agents
5. Pause after each round and show statistics
6. Automatically save the debate (unless disabled)

### Command-Line Options

The application supports extensive CLI arguments:

```bash
python -m src.main [OPTIONS]
```

**Options:**

- `--config, -c PATH` - Path to configuration file (default: config.yaml)
- `--topic, -t TEXT` - Debate topic (skips interactive prompt)
- `--exchanges, -e NUMBER` - Exchanges per round (default: 10)
- `--no-auto-save` - Disable automatic saving after each round
- `--no-streaming` - Disable streaming responses (show complete responses at once instead of streaming in real time)
- `--log-level LEVEL` - Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
- `--log-file PATH` - Log to file (default: console only)
- `--max-memory-tokens NUMBER` - Maximum tokens to keep in agent memory
**Examples:**

```bash
# Basic usage with defaults
python -m src.main

# Specify topic and exchanges
python -m src.main --topic "AI is beneficial" --exchanges 5

# Enable debug logging to file
python -m src.main --log-level DEBUG --log-file debug.log

# Disable auto-save for manual control
python -m src.main --no-auto-save

# Disable streaming for slower connections
python -m src.main --no-streaming

# Use custom config and memory limit
python -m src.main --config my_config.yaml --max-memory-tokens 50000

# Quick debate with all options
python -m src.main -t "Climate change" -e 3 --log-level INFO
```
### User Options (After Each Round)

After each round, the application displays:

- Brief statistics (time, exchanges, average response time)
- An interactive menu with options:

1. **Continue** - Run another round of exchanges
2. **Settle** - Provide your conclusion to end the debate
3. **Give instructions** - Provide custom instructions to both agents
4. **Save and quit** - End the debate

### Statistics Display

After completing the debate, you'll see comprehensive statistics:

- **Total Duration**: How long the debate lasted
- **Total Exchanges**: Number of argument exchanges
- **Response Times**: Average, minimum, and maximum
- **Memory Usage**: Token count and percentage for each agent
### Provider Configuration

#### OpenRouter

```yaml
agent1:
  provider: openrouter
  model: anthropic/claude-3-haiku  # or any OpenRouter model
  api_key: your-openrouter-api-key
  system_prompt: Your custom prompt here
```

Popular OpenRouter models:

- `anthropic/claude-3-haiku` (fast and affordable)
- `anthropic/claude-3-sonnet` (balanced)
- `meta-llama/llama-3.1-8b-instruct` (open source)
- `google/gemini-pro` (Google's model)

#### LMStudio

```yaml
agent1:
  provider: lmstudio
  model: your-loaded-model-name
  base_url: http://localhost:1234/v1  # default LMStudio URL
  system_prompt: Your custom prompt here
```

Before using LMStudio:

1. Download and install LMStudio
2. Load a model in LMStudio
3. Start the local server (usually on port 1234)
4. Use the model name as shown in LMStudio
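Because LMStudio exposes an OpenAI-compatible API, a request to `http://localhost:1234/v1/chat/completions` carries a standard chat payload. A sketch of the payload construction only (no network call; the `build_chat_request` helper is hypothetical, not part of this project):

```python
import json

def build_chat_request(model, system_prompt, user_message):
    """Payload for an OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": True,  # streaming is on by default; --no-streaming turns it off
    }

payload = build_chat_request("your-loaded-model-name",
                             "You are a debater.",
                             "Open the debate.")
print(json.dumps(payload, indent=2))
```

The same payload shape works against OpenRouter, which is why the two providers can share most of their request logic.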
### Example Debate Topics

- "Artificial Intelligence will be net positive for humanity"
- "Remote work is better than office work"
- "Nuclear energy should replace fossil fuels"
- "Universal Basic Income should be implemented"
- "Space exploration is worth the investment"
## Project Structure

```
debate-bots/
├── src/
│   ├── __init__.py
│   ├── main.py                  # Application entry point with CLI
│   ├── agent.py                 # Agent class with memory management
│   ├── debate.py                # Debate orchestrator with statistics
│   ├── config.py                # Configuration with env var support
│   ├── ui.py                    # Terminal UI with statistics display
│   ├── logger.py                # Logging configuration
│   ├── constants.py             # Application constants
│   ├── exceptions.py            # Custom exception classes
│   ├── utils/
│   │   ├── __init__.py
│   │   └── token_counter.py     # Token counting and management
│   └── providers/
│       ├── __init__.py
│       ├── base.py              # Base provider with retry logic
│       ├── openrouter.py        # OpenRouter with error handling
│       └── lmstudio.py          # LMStudio with error handling
├── tests/                       # Test suite
│   ├── __init__.py
│   ├── conftest.py              # Pytest fixtures
│   ├── test_config.py           # Configuration tests
│   ├── test_agent.py            # Agent tests
│   └── test_token_counter.py    # Token counter tests
├── debates/                     # Saved debate histories (auto-created)
├── .env                         # Environment variables (gitignored)
├── .env.example                 # Example environment variables
├── .gitignore                   # Git ignore patterns
├── config.yaml                  # Your configuration (gitignored)
├── config.example.yaml          # Example configuration
├── requirements.txt             # Python dependencies
├── README.md                    # This file
└── product-brief.md             # Original product specification
```
## Saved Debates

Debates are automatically saved (unless disabled with `--no-auto-save`) in the `debates/` directory as JSON files with comprehensive statistics:

```json
{
  "topic": "Your debate topic",
  "timestamp": "2024-01-15T10:30:00",
  "agents": {
    "agent1": {"name": "Agent 1", "position": "for"},
    "agent2": {"name": "Agent 2", "position": "against"}
  },
  "exchanges": [
    {
      "exchange": 1,
      "agent": "Agent 1",
      "position": "for",
      "content": "Opening argument..."
    }
  ],
  "total_exchanges": 20,
  "statistics": {
    "total_exchanges": 20,
    "elapsed_time_seconds": 245.3,
    "average_response_time_seconds": 12.2,
    "agent1_memory": {
      "message_count": 42,
      "current_tokens": 15234,
      "token_usage_percentage": 15.2
    },
    "agent2_memory": {
      "message_count": 42,
      "current_tokens": 14987,
      "token_usage_percentage": 15.0
    }
  }
}
```
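Since each saved file is plain JSON in the shape shown above, post-processing it with your own scripts is straightforward. A minimal sketch (the `summarize` helper is hypothetical, not part of the project):

```python
import json

def summarize(debate):
    """One-line summary of a saved debate dict."""
    stats = debate["statistics"]
    return (f"{debate['topic']}: {debate['total_exchanges']} exchanges "
            f"in {stats['elapsed_time_seconds']:.0f}s")

# An inline stand-in for json.load(open("debates/<file>.json"))
raw = """{"topic": "Example topic", "total_exchanges": 20,
          "statistics": {"elapsed_time_seconds": 245.3}}"""
print(summarize(json.loads(raw)))  # Example topic: 20 exchanges in 245s
```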
### Converting Debates to Markdown

Use the included converter script:

```bash
# Convert a single debate
python json_to_markdown.py debates/debate_topic_20240115.json

# Convert all debates
python json_to_markdown.py --all
```
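The conversion walks the same `exchanges` array shown in the JSON schema above. A simplified sketch of the idea (illustrative only, not the actual `json_to_markdown.py` logic):

```python
def exchanges_to_markdown(debate):
    """Render a saved debate dict as simple Markdown."""
    lines = [f"# {debate['topic']}", ""]
    for ex in debate["exchanges"]:
        lines.append(f"## Exchange {ex['exchange']}: {ex['agent']} ({ex['position']})")
        lines.append(ex["content"])
        lines.append("")  # blank line between exchanges
    return "\n".join(lines)

debate = {"topic": "Example", "exchanges": [
    {"exchange": 1, "agent": "Agent 1", "position": "for",
     "content": "Opening argument..."}]}
print(exchanges_to_markdown(debate))
```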
## Customization

### System Prompts

System prompts define your agent's personality and debate style. Examples:

**Logical debater:**
```
You are a skilled debater who values logic, evidence, and rational argumentation.
You cite sources and use structured reasoning.
```

**Emotional debater:**
```
You are a persuasive speaker who uses storytelling, analogies, and emotional
appeals to make your points compelling and relatable.
```

**Devil's advocate:**
```
You are a contrarian thinker who finds flaws in arguments and plays devil's
advocate, always questioning assumptions.
```

### Exchanges Per Round

Use the `--exchanges` CLI argument:

```bash
python -m src.main --exchanges 20
```

Or add to your config file:

```yaml
# In config.yaml (if you modify main.py to read this setting)
exchanges_per_round: 20
```

### Memory Management

Control agent memory limits:

```bash
# Limit memory to 50,000 tokens per agent
python -m src.main --max-memory-tokens 50000
```

Agents will automatically truncate old messages when approaching the limit while preserving the system message.
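The truncation strategy can be sketched as follows: keep the system message, drop the oldest exchanges first. This is illustrative only; the real implementation in `src/agent.py` uses proper token counting rather than the crude character estimate here:

```python
def truncate_memory(messages, max_tokens,
                    count_tokens=lambda m: len(m["content"]) // 4):
    """Drop oldest non-system messages until the total fits in max_tokens.
    The 4-chars-per-token estimate stands in for a real tokenizer."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(count_tokens(m) for m in system + rest) > max_tokens:
        rest.pop(0)  # the oldest exchange goes first
    return system + rest

history = [{"role": "system", "content": "You are a debater."},
           {"role": "user", "content": "x" * 400},
           {"role": "assistant", "content": "y" * 400},
           {"role": "user", "content": "z" * 40}]
trimmed = truncate_memory(history, max_tokens=120)
print([m["role"] for m in trimmed])  # the oldest user message was dropped
```

Because the system message is filtered out before trimming, it survives no matter how aggressively the history is cut.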
## Testing

The project includes a comprehensive test suite:

```bash
# Install test dependencies (already in requirements.txt)
pip install pytest pytest-cov

# Run all tests
pytest

# Run with coverage report
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/test_config.py

# Run with verbose output
pytest -v
```

Test coverage includes:

- Configuration management and environment variables
- Agent memory management and truncation
- Token counting and limits
- Provider error handling
- Custom exceptions
## Troubleshooting

### "Cannot connect to OpenRouter"

- Check that your API key is correct
- Verify you have credits on your OpenRouter account
- Check your internet connection

### "Cannot connect to LMStudio"

- Ensure LMStudio is running
- Verify the local server is started in LMStudio
- Check the `base_url` in your config (default: http://localhost:1234/v1)
- Confirm a model is loaded in LMStudio

### Debates are too short/long

- Use the `--exchanges` CLI argument to adjust exchanges per round
- Use the "continue" option to extend debates
- Use custom instructions to guide the discussion

### Memory/Token Issues

- Monitor memory usage in the statistics display
- Adjust `--max-memory-tokens` if agents are truncating too aggressively
- Check logs with `--log-level DEBUG` for detailed token information

### Logging Issues

- Set `--log-level DEBUG` for detailed troubleshooting
- Use `--log-file` to save logs for later analysis
- Check the console for real-time error messages
## License

This project is open source. Feel free to modify and distribute.

## Contributing

Contributions welcome! The codebase already includes:

✅ **Already Implemented:**
- Comprehensive error handling and logging
- Automatic memory management with token counting
- Environment variable support for security
- CLI arguments for all options
- Auto-save functionality
- Statistics tracking and display
- Test infrastructure with pytest
- Markdown export (json_to_markdown.py)
- Retry logic with exponential backoff
- Response validation

**Ideas for Future Enhancements:**
- Support for more LLM providers (Anthropic direct API, OpenAI, Hugging Face)
- Web interface (FastAPI/Flask + React)
- Multi-agent debates (3+ participants)
- Judge/arbiter agent that scores arguments
- Debate templates for different formats (Oxford, Lincoln-Douglas, etc.)
- Export to PDF
- Debate replay/playback functionality
- Voice synthesis for debate audio

**Development Setup:**
1. Clone the repository
2. Install dependencies: `pip install -r requirements.txt`
3. Run tests: `pytest`
4. Make your changes
5. Add tests for new features
6. Run tests again to ensure everything passes
7. Submit a pull request

## Support

For issues and questions, please open an issue on the GitHub repository.