Ready-to-use customizable multi-agent AI system that combines plug-and-play simplicity with framework-level flexibility
🚀 Quick Start • 🤖 Try Demo • 🔧 Configuration • 🎯 Features • 💡 Use Cases
evi-run is a powerful, production-ready multi-agent AI system that bridges the gap between out-of-the-box solutions and custom AI frameworks. Built on Python with OpenAI Agents SDK integration, it delivers enterprise-grade AI capabilities through an intuitive Telegram bot interface.
| Component | Technology |
|---|---|
| Core Language | Python 3.9+ |
| AI Framework | OpenAI Agents SDK |
| Communication | MCP (Model Context Protocol) |
| Blockchain | Solana RPC |
| Interface | Telegram Bot API |
| Database | PostgreSQL |
| Cache | Redis |
| Deployment | Docker & Docker Compose |
Get evi-run running in under 5 minutes with our streamlined Docker setup:
System Requirements:
Required API Keys & Tokens:
⚠️ Important for Image Generation: To use protected OpenAI models (especially for image generation), you need to complete organization verification at OpenAI Organization Settings. This is a simple verification process required by OpenAI.
Download and prepare the project:
# Navigate to installation directory
cd /opt
# Clone the project from GitHub
git clone https://github.com/pipedude/evi-run.git
# Set proper permissions
sudo chown -R $USER:$USER evi-run
cd evi-run
Configure environment variables:
# Copy example configuration
cp .env.example .env
# Edit configuration files
nano .env # Add your API keys and tokens
nano config.py # Set your Telegram ID and preferences
Run automated Docker setup:
# Make setup script executable
chmod +x docker_setup_en.sh
# Run Docker installation
./docker_setup_en.sh
Launch the system:
# Build and start containers
docker compose up --build -d
Verify installation:
# Check running containers
docker compose ps
# View logs
docker compose logs -f
🎉 That's it! Your evi-run system is now live. Open your Telegram bot and start chatting!
# Start the system
docker compose up -d
# View logs (follow mode)
docker compose logs -f bot
# Check running containers
docker compose ps
# Stop the system
docker compose down
# Restart specific service
docker compose restart bot
# Update and rebuild
docker compose up --build -d
# View database logs
docker compose logs postgres_agent_db
# Check system resources
docker stats
.env - Environment Variables:
# REQUIRED: Telegram Bot Token from @BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here
# REQUIRED: OpenAI API Key
API_KEY_OPENAI=your_openai_api_key
# Payment Integration (for 'pay' mode)
TOKEN_BURN_ADDRESS=your_burn_address
MINT_TOKEN_ADDRESS=your_token_address
TON_ADDRESS=your_ton_address
API_KEY_TON=your_tonapi_key
ADDRESS_SOL=your_sol_address
config.py - System Settings:
# REQUIRED: Your Telegram User ID
ADMIN_ID = 123456789
# Usage Mode: 'private', 'free', or 'pay'
TYPE_USAGE = 'private'
# Credit System (for 'pay' mode)
CREDITS_USER_DAILY = 20
CREDITS_ADMIN_DAILY = 50
# Language Support
AVAILABLE_LANGUAGES = ['en', 'ru']
DEFAULT_LANGUAGE = 'en'
| Mode | Description | Best For |
|---|---|---|
| Private | Bot owner only | Personal use, development, testing |
| Free | Public access with limits | Community projects, demos |
| Pay | Monetized with balance system | Commercial applications, SaaS |
⚠️ Important for Pay mode: Pay mode enables monetization features and requires activation through project token economics. You can use your own token (created on the Solana blockchain) for monetization.
To activate Pay mode at this time, please contact the project developer (@playa3000), who will guide you through the process.
Note: In future releases, project tokens will be publicly available for purchase, and the activation process will be fully automated through the bot interface.
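The mode and credit settings above can be sketched as a simple access check. This is an illustrative sketch only: the function name `check_access` and the exact credit logic are hypothetical, not evi-run's shipped implementation.

```python
# Illustrative sketch of how TYPE_USAGE and daily credits *could* gate access.
# Names and logic here are hypothetical, not evi-run's actual code.
TYPE_USAGE = 'pay'          # 'private', 'free', or 'pay'
ADMIN_ID = 123456789
CREDITS_USER_DAILY = 20

def check_access(user_id: int, credits_used_today: int) -> bool:
    """Return True if the user may send a request under the current mode."""
    if TYPE_USAGE == 'private':
        return user_id == ADMIN_ID       # bot owner only
    if TYPE_USAGE == 'free':
        return True                      # open access (rate limits aside)
    # 'pay' mode: enforce the daily credit allowance
    return credits_used_today < CREDITS_USER_DAILY

print(check_access(123456789, 0))   # True
print(check_access(555, 20))        # False: daily credits exhausted
```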
Create engaging AI personalities for entertainment, education, or brand representation. Perfect for gaming, educational platforms, content creation, and brand engagement.
Deploy intelligent support bots that understand context and provide helpful solutions. Ideal for e-commerce, SaaS platforms, and service-based businesses.
Build your own AI companion for productivity, research, and daily tasks. Great for professionals, researchers, and anyone seeking AI-powered productivity.
Automate data processing, generate insights, and create reports from complex datasets. Excellent for business intelligence, research teams, and data-driven organizations.
Develop sophisticated trading bots for decentralized exchanges with real-time analytics. Suitable for crypto traders, DeFi enthusiasts, and financial institutions.
Leverage the framework to build specialized AI agents for any domain or industry. Unlimited possibilities for healthcare, finance, education, and enterprise applications.
By default, the system is configured for a good balance of capability and low running cost. For professional and specialized use cases, choosing the right model is crucial for both output quality and cost efficiency.
For Deep Research and Complex Analysis:
- o3-deep-research - Most powerful deep research model for complex multi-step research tasks
- o4-mini-deep-research - Faster, more affordable deep research model

For maximum research capabilities using specialized deep research models:
Use o3-deep-research for the most powerful analysis in bot/agents_tools/agents_.py:
deep_agent = Agent(
name="Deep Agent",
model="o3-deep-research", # Most powerful deep research model
# ... instructions
)
Alternative: Use o4-mini-deep-research for cost-effective deep research:
deep_agent = Agent(
name="Deep Agent",
model="o4-mini-deep-research", # Faster, more affordable deep research
# ... instructions
)
Update Main Agent instructions to prevent summarization:
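For example, the Main Agent's system prompt could include an explicit pass-through rule. The wording below is illustrative, not the shipped prompt; `NO_SUMMARY_RULE` is a hypothetical name for the added text.

```python
# Hypothetical addition to the Main Agent instructions in
# bot/agents_tools/agents_.py -- the wording is illustrative.
NO_SUMMARY_RULE = (
    "When the deep research tool returns a report, relay the report to the "
    "user in full. Do not summarize, shorten, or paraphrase its contents."
)

# It would be appended to the existing instructions string, e.g.:
# main_agent = Agent(name="Evi",
#                    instructions=BASE_INSTRUCTIONS + NO_SUMMARY_RULE, ...)
print("Do not summarize" in NO_SUMMARY_RULE)  # True
```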
For the complete list of available models, capabilities, and pricing, see the OpenAI Models Documentation.
evi-run uses the Agents library with a multi-agent architecture where specialized agents are integrated as tools into the main agent. All agent configuration is centralized in:
bot/agents_tools/agents_.py
1. Create the Agent
# Add after existing agents
custom_agent = Agent(
name="Custom Agent",
instructions="Your specialized agent instructions here...",
model="gpt-4o-mini",
tools=[WebSearchTool(search_context_size="medium")] # Optional tools
)
2. Register as Tool in Main Agent
# In create_main_agent function, add to main_agent.tools list:
main_agent = Agent(
# ... existing configuration
tools=[
# ... existing tools
custom_agent.as_tool(
tool_name="custom_function",
tool_description="Description of what this agent does"
),
]
)
Main Agent (Evi) Personality:
Edit the detailed instructions in main_agent creation (lines 58-102):
Agent Parameters:
- name: Agent identifier
- instructions: System prompt and behavior
- model: OpenAI model (gpt-4o, gpt-4o-mini, etc.)
- tools: Available tools (WebSearchTool, FileSearchTool, etc.)
- mcp_servers: MCP server connections

Example Customization:
# Modify deep_agent for specialized research
deep_agent = Agent(
name="Deep Research Agent",
instructions="""You are a specialized research agent focused on [YOUR DOMAIN].
Provide comprehensive analysis with:
- Multiple perspectives
- Data-driven insights
- Actionable recommendations
Always cite sources when available.""",
model="gpt-4o",
tools=[WebSearchTool(search_context_size="high")]
)
As Tool Integration:
# Agents become tools via .as_tool() method
dynamic_agent.as_tool(
tool_name="descriptive_name",
tool_description="Clear description for main agent"
)
evi-run supports non-OpenAI models through the Agents library. There are several ways to integrate other LLM providers:
Method 1: LiteLLM Integration (Recommended)
Install the LiteLLM dependency:
pip install "openai-agents[litellm]"
Use models with the litellm/ prefix:
# Claude via LiteLLM
claude_agent = Agent(
name="Claude Agent",
instructions="Your instructions here...",
model="litellm/anthropic/claude-3-5-sonnet-20240620",
# ... other parameters
)
# Gemini via LiteLLM
gemini_agent = Agent(
name="Gemini Agent",
instructions="Your instructions here...",
model="litellm/gemini/gemini-2.5-flash-preview-04-17",
# ... other parameters
)
Method 2: LitellmModel Class
from agents.extensions.models.litellm_model import LitellmModel
custom_agent = Agent(
name="Custom Agent",
instructions="Your instructions here...",
model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="your-api-key"),
# ... other parameters
)
Method 3: Global OpenAI Client
from agents.models._openai_shared import set_default_openai_client
from openai import AsyncOpenAI
# For providers with OpenAI-compatible API
set_default_openai_client(AsyncOpenAI(
base_url="https://api.provider.com/v1",
api_key="your-api-key"
))
Documentation & Resources:
Important Notes:
- Tracing data is sent to OpenAI by default; when using non-OpenAI providers, you can turn it off with set_tracing_disabled()

Customizing Bot Interface Messages:
All bot messages and interface text are stored in the I18N directory and can be fully customized to match your needs:
I18N/
├── factory.py # Translation loader
├── en/
│ └── txt.ftl # English messages
└── ru/
└── txt.ftl # Russian messages
Message Files Format:
The bot uses Fluent localization format (.ftl files) for multi-language support:
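A minimal txt.ftl fragment might look like this. The message keys and placeholders shown are illustrative; use the keys already present in your txt.ftl files.

```ftl
# I18N/en/txt.ftl -- illustrative keys, not necessarily those shipped with evi-run
welcome-message = Hi, { $username }! I'm Evi, your AI assistant.
balance-info = Your current balance: { $credits } credits
```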
To customize messages:
- Edit the corresponding .ftl file in I18N/en/ or I18N/ru/
- Restart the bot to reload the updated txt.ftl files

evi-run includes comprehensive tracing and analytics capabilities through the OpenAI Agents SDK. The system automatically tracks all agent operations and provides detailed insights into performance and usage.
Automatic Tracking:
evi-run supports integration with 20+ monitoring and analytics platforms:
Popular Integrations:
Enterprise Solutions:
Docker Container Logs:
# View all logs
docker compose logs
# Follow specific service
docker compose logs -f bot
# Database logs
docker compose logs postgres_agent_db
# Filter by time
docker compose logs --since 1h bot
Bot not responding:
# Check bot container status
docker compose ps
docker compose logs bot
Database connection errors:
# Restart database
docker compose restart postgres_agent_db
docker compose logs postgres_agent_db
Memory issues:
# Check system resources
docker stats
This project is licensed under the MIT License - see the LICENSE file for details.
We welcome contributions! Please see our Contributing Guidelines for details.
Made with ❤️ by the evi-run team
⭐ Star this repository if evi-run helped you build amazing AI experiences! ⭐