Ready-to-use customizable multi-agent AI system that combines plug-and-play simplicity with framework-level flexibility
🚀 Quick Start • 🤖 Try Demo • 🔧 Configuration • 🎯 Features • 💡 Use Cases
Connect with fellow developers and AI enthusiasts!
evi-run is a powerful, production-ready multi-agent AI system that bridges the gap between out-of-the-box solutions and custom AI frameworks. Built in Python on the OpenAI Agents SDK, the system offers an intuitive Telegram bot interface and delivers enterprise-grade AI capabilities.
⚠️ Important for Token Swap: The token swap function is only active in private mode. Your private key is stored in the database base64-encoded.
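Keep in mind what base64 storage means in practice: base64 is an encoding, not encryption, so anyone with database access can recover the key. The snippet below is purely illustrative (it is not the project's actual storage code):

```python
import base64

# Illustrative only: base64 round-trips data, it does not protect it.
private_key = b"example-private-key-bytes"       # hypothetical value
stored = base64.b64encode(private_key).decode()  # what lands in the DB
recovered = base64.b64decode(stored)             # trivially reversible
assert recovered == private_key
```

If you run a shared deployment, restrict database access accordingly.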
| Component | Technology |
|---|---|
| Core Language | Python 3.11 |
| AI Framework | OpenAI Agents SDK |
| Communication | Model Context Protocol |
| Blockchain | Solana RPC API |
| Interface | Telegram Bot API |
| Database | PostgreSQL |
| Cache | Redis |
| Deployment | Docker & Docker Compose |
Get evi-run running in under 5 minutes with our streamlined Docker setup:
System Requirements:
Required API Keys & Tokens:
⚠️ Important for Image Generation: To use protected OpenAI models (especially for image generation), you need to complete organization verification at OpenAI Organization Settings. This is a simple verification process required by OpenAI.
Download and prepare the project:
# Navigate to installation directory
cd /opt
# Clone the project from GitHub
git clone https://github.com/pipedude/evi-run.git
# Set proper permissions
sudo chown -R $USER:$USER evi-run
cd evi-run
Configure environment variables:
# Copy example configuration
cp .env.example .env
# Edit configuration files
nano .env # Add your API keys and tokens
nano config.py # Set your Telegram ID and preferences
Run automated Docker setup:
# Make setup script executable
chmod +x docker_setup_en.sh
# Run Docker installation
./docker_setup_en.sh
Launch the system:
# Build and start containers
docker compose up --build -d
Verify installation:
# Check running containers
docker compose ps
# View logs
docker compose logs -f
🎉 That's it! Your evi-run system is now live. Open your Telegram bot and start chatting!
.env - Environment Variables
# REQUIRED: Telegram Bot Token from @BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here
# REQUIRED: OpenAI API Key
API_KEY_OPENAI=your_openai_api_key
config.py - System Settings
# REQUIRED: Your Telegram User ID
ADMIN_ID = 123456789
# Usage Mode: 'private', 'free', or 'pay'
TYPE_USAGE = 'private'
| Mode | Description | Best For |
|---|---|---|
| Private | Bot owner only | Personal use, development, testing |
| Free | Public access with limits | Community projects, demos |
| Pay | Monetized with balance system | Commercial applications, SaaS |
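To make the three modes concrete, here is a minimal sketch of how access gating by TYPE_USAGE could work. The helper below is hypothetical; the project's actual checks live in its own handlers and may differ:

```python
# Hypothetical access check keyed on TYPE_USAGE (illustrative only).
def is_allowed(user_id: int, admin_id: int, type_usage: str) -> bool:
    if type_usage == "private":
        return user_id == admin_id  # bot owner only
    if type_usage == "free":
        return True                 # open access; rate limits applied elsewhere
    if type_usage == "pay":
        return True                 # access granted, balance checked per request
    raise ValueError(f"unknown usage mode: {type_usage}")
```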
⚠️ Important for Pay mode: Pay mode enables monetization features. To activate this mode, the owner must burn a certain amount of $EVI tokens. The platform supports custom tokens created on the Solana blockchain for monetization purposes.
Create engaging AI personalities for entertainment, education, or brand representation.
Deploy intelligent support bots that understand context and provide helpful solutions.
Build your own AI companion for productivity, research, and daily tasks.
Automate data processing, generate insights, and create reports from complex datasets.
Launch trading agents for DEX with real-time analytics.
Leverage the framework to build specialized AI agents for any domain or industry.
By default, the system is configured for good performance at low cost. For professional and specialized use cases, choosing the right model is crucial for both performance and cost efficiency.
For Deep Research and Complex Analysis:
o3-deep-research - Most powerful deep research model for complex multi-step research tasks
o4-mini-deep-research - Faster, more affordable deep research model

For maximum research capabilities using specialized deep research models:
Use o3-deep-research for the most powerful analysis in bot/agents_tools/agents_.py:
deep_agent = Agent(
name="Deep Agent",
model="o3-deep-research", # Most powerful deep research model
# ... instructions
)
Alternative: Use o4-mini-deep-research for cost-effective deep research:
deep_agent = Agent(
name="Deep Agent",
model="o4-mini-deep-research", # Faster, more affordable deep research
# ... instructions
)
Update Main Agent instructions to prevent summarization:
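The exact wording is up to you; one hypothetical instruction fragment that could be appended to the Main Agent's prompt (not taken from the project):

```python
# Hypothetical instruction fragment to keep research reports intact:
NO_SUMMARY_NOTE = (
    "When the Deep Agent tool returns a research report, relay it to the "
    "user in full. Do not summarize, shorten, or paraphrase its output."
)
```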
For the complete list of available models, capabilities, and pricing, see the OpenAI Models Documentation.
evi-run uses the Agents library with a multi-agent architecture where specialized agents are integrated as tools into the main agent. All agent configuration is centralized in:
bot/agents_tools/agents_.py
1. Create the Agent
# Add after existing agents
custom_agent = Agent(
name="Custom Agent",
instructions="Your specialized agent instructions here...",
model="gpt-5-mini",
model_settings=ModelSettings(
reasoning=Reasoning(effort="low"),
extra_body={"text": {"verbosity": "medium"}}
),
tools=[WebSearchTool(search_context_size="medium")] # Optional tools
)
2. Register as Tool in Main Agent
# In create_main_agent function, add to main_agent.tools list:
main_agent = Agent(
# ... existing configuration
tools=[
# ... existing tools
custom_agent.as_tool(
tool_name="custom_function",
tool_description="Description of what this agent does"
),
]
)
Main Agent (Evi) Personality:
Edit the detailed instructions in the main_agent instructions block:
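For example, a personality tweak might look like this. The structure and wording below are assumptions for illustration; see the actual instructions block in bot/agents_tools/agents_.py:

```python
# Hypothetical excerpt of a customized Main Agent personality prompt:
MAIN_AGENT_INSTRUCTIONS = """\
You are Evi, a friendly and precise AI assistant.
- Keep answers concise; expand only when the user asks for detail.
- Delegate research-heavy questions to your specialized agent tools.
"""
```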
Agent Parameters:
name: Agent identifier
instructions: System prompt and behavior
model: OpenAI model (gpt-5, gpt-5-mini, etc.)
model_settings: Model settings (Reasoning, extra_body, etc.)
tools: Available tools (WebSearchTool, FileSearchTool, etc.)
mcp_servers: MCP server connections

evi-run supports non-OpenAI models through the Agents library. There are several ways to integrate other LLM providers:
Method 1: LiteLLM Integration (Recommended)
Install the LiteLLM dependency:
pip install "openai-agents[litellm]"
Use models with the litellm/ prefix:
# Claude via LiteLLM
claude_agent = Agent(
name="Claude Agent",
instructions="Your instructions here...",
model="litellm/anthropic/claude-3-5-sonnet-20240620",
# ... other parameters
)
# Gemini via LiteLLM
gemini_agent = Agent(
name="Gemini Agent",
instructions="Your instructions here...",
model="litellm/gemini/gemini-2.5-flash-preview-04-17",
# ... other parameters
)
Method 2: LitellmModel Class
from agents.extensions.models.litellm_model import LitellmModel
custom_agent = Agent(
name="Custom Agent",
instructions="Your instructions here...",
model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="your-api-key"),
# ... other parameters
)
Method 3: Global OpenAI Client
from agents import set_default_openai_client
from openai import AsyncOpenAI
# For providers with OpenAI-compatible API
set_default_openai_client(AsyncOpenAI(
base_url="https://api.provider.com/v1",
api_key="your-api-key"
))
Documentation & Resources:
Important Notes:
Tracing can be disabled with set_tracing_disabled()

Customizing Bot Interface Messages:
All bot messages and interface text are stored in the I18N directory and can be fully customized to match your needs:
I18N/
├── factory.py # Translation loader
├── en/
│ └── txt.ftl # English messages
└── ru/
└── txt.ftl # Russian messages
Message Files Format:
The bot uses Fluent localization format (.ftl files) for multi-language support:
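A minimal illustration of the Fluent syntax (the message IDs here are hypothetical; check the actual txt.ftl files for the real keys):

```ftl
# Hypothetical entries in I18N/en/txt.ftl
welcome-message = Hello! I'm Evi, your AI assistant.
balance-info = Your balance: { $amount } credits.
```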
To customize messages:
Edit the corresponding txt.ftl file in I18N/en/ or I18N/ru/ and restart the bot to apply the changes.

evi-run includes comprehensive tracing and analytics capabilities through the OpenAI Agents SDK. The system automatically tracks all agent operations and provides detailed insights into performance and usage.
Automatic Tracking:
⚠️ Important when Tracing is enabled: The OpenAI Agents SDK tracing system records all user requests for performance monitoring. Although the data is anonymized, this still raises privacy concerns.
For ethical reasons, owners of public bots should either explicitly inform users about this or disable tracing.
# Disable tracing in bot/agents_tools/agents_.py
set_tracing_disabled(True)
evi-run supports integration with 20+ monitoring and analytics platforms:
Popular Integrations:
Enterprise Solutions:
Docker Container Logs:
# View all logs
docker compose logs
# Follow specific service
docker compose logs -f bot
# Database logs
docker compose logs postgres_agent_db
# Filter by time
docker compose logs --since 1h bot
Bot not responding:
# Check bot container status
docker compose ps
docker compose logs bot
Database connection errors:
# Restart database
docker compose restart postgres_agent_db
docker compose logs postgres_agent_db
Memory issues:
# Check system resources
docker stats
This project is licensed under the MIT License - see the LICENSE file for details.
We welcome contributions! Please see our Contributing Guidelines for details.
Made with ❤️ by the evi-run team
⭐ Star this repository if evi-run helped you build amazing AI experiences! ⭐