CLI Tool¶
RapidAI includes a powerful CLI tool for scaffolding projects, running development servers, and deploying to cloud platforms.
Installation¶
The CLI is installed automatically with RapidAI:
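Assuming the package is published on PyPI under the name rapidai (the same name used by the dependency examples later in this page):

```bash
pip install rapidai
```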
Verify installation:
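A version flag is assumed here; `--help` should also work if the flag differs:

```bash
rapidai --version
```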
Quick Start¶
Create a new chatbot project:
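Using the new and dev commands documented under Commands below:

```bash
rapidai new my-bot
cd my-bot
pip install -r requirements.txt
rapidai dev
```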
Your app is now running at http://localhost:8000!
Commands¶
rapidai new¶
Create a new RapidAI project from a template.
Arguments:
- project-name - Name of the project to create
Options:
- -t, --template - Template to use (chatbot, rag, agent, api) [default: chatbot]
- -d, --directory - Directory to create project in [default: .]
Examples:
```bash
# Create a chatbot project
rapidai new my-bot

# Create a RAG project
rapidai new doc-qa --template rag

# Create in a specific directory
rapidai new my-api --template api --directory ~/projects
```
Templates:
- chatbot - Simple chatbot with conversation memory
- rag - RAG application with document upload and Q&A
- agent - AI agent with analysis and generation endpoints
- api - REST API with authentication and CORS
rapidai dev¶
Run the development server with hot reload.
Options:
- -p, --port - Port to run on [default: 8000]
- -h, --host - Host to bind to [default: 127.0.0.1]
- --reload/--no-reload - Enable/disable auto-reload [default: reload]
- -a, --app - Application module path [default: app:app]
Examples:
```bash
# Run on default port 8000
rapidai dev

# Run on custom port
rapidai dev --port 3000

# Disable hot reload
rapidai dev --no-reload

# Custom app module
rapidai dev --app myapp:application
```
Features:
- Auto-reload on file changes
- Colored console output
- Clear error messages
- Powered by Uvicorn
rapidai deploy¶
Deploy your application to a cloud platform.
Arguments:
- platform - Cloud platform (fly, heroku, vercel, aws)
Options:
- -n, --app-name - Application name for deployment
- -r, --region - Region to deploy to
Examples:
```bash
# Deploy to Fly.io
rapidai deploy fly

# Deploy to Heroku with app name
rapidai deploy heroku --app-name my-app

# Deploy to Vercel
rapidai deploy vercel

# Deploy to AWS (shows instructions)
rapidai deploy aws
```
Supported Platforms:
- Fly.io - Automatic deployment with generated fly.toml
- Heroku - Git-based deployment with Procfile
- Vercel - Serverless deployment with vercel.json
- AWS - Manual deployment instructions
rapidai test¶
Run tests for your application.
Options:
- --coverage/--no-coverage - Run with coverage [default: coverage]
- -v, --verbose - Verbose output
Examples:
```bash
# Run tests with coverage
rapidai test

# Run without coverage
rapidai test --no-coverage

# Verbose output
rapidai test --verbose
```
Requirements:
- pytest must be installed
- Tests should be in the tests/ directory
rapidai docs¶
Generate and serve project documentation.
Options:
- --serve/--build - Serve locally or build for production [default: serve]
- -p, --port - Port to serve docs on [default: 8001]
Examples:
```bash
# Serve docs locally
rapidai docs

# Build for production
rapidai docs --build

# Serve on custom port
rapidai docs --port 3000
```
Requirements:
- mkdocs and mkdocs-material must be installed
- Docs should be in the docs/ directory
Project Templates¶
Chatbot Template¶
A simple chatbot with conversation memory.
Features:
- Conversation memory per user
- Clear history endpoint
- Environment-based configuration
- Claude or GPT support
Files created:
```
my-chatbot/
├── app.py              # Main application
├── .env                # Environment variables
├── requirements.txt    # Dependencies
├── README.md           # Project documentation
└── tests/              # Test directory
```
Endpoints:
- POST /chat - Chat with the bot
- POST /clear - Clear conversation history
RAG Template¶
RAG application with document upload and Q&A.
Features:
- Upload PDFs, DOCX, TXT, HTML, Markdown
- Semantic search with embeddings
- Context-aware answers
- ChromaDB vector storage
Files created:
```
my-rag/
├── app.py              # Main application
├── .env                # Environment variables
├── requirements.txt    # Dependencies (includes RAG extras)
├── README.md           # Project documentation
├── tests/              # Test directory
└── docs/               # Document storage
```
Endpoints:
- POST /upload - Upload documents
- POST /ask - Ask questions
- POST /search - Search documents
Agent Template¶
AI agent with analysis and content generation.
Features:
- Text analysis with caching
- Content generation
- Interactive agent chat
- Configurable styles
Files created:
```
my-agent/
├── app.py              # Main application
├── .env                # Environment variables
├── requirements.txt    # Dependencies
├── README.md           # Project documentation
└── tests/              # Test directory
```
Endpoints:
- POST /analyze - Analyze text
- POST /generate - Generate content
- POST /chat - Interactive chat
API Template¶
REST API with authentication and CORS.
Features:
- API key authentication
- CORS enabled
- Health check endpoint
- Clean REST design
Files created:
```
my-api/
├── app.py              # Main application
├── .env                # Environment variables
├── requirements.txt    # Dependencies
├── README.md           # Project documentation
└── tests/              # Test directory
```
Endpoints:
- GET / - API information
- POST /complete - Complete prompts
- POST /chat - Chat endpoint
- GET /health - Health check
Deployment Guide¶
Fly.io Deployment¶
- Install flyctl:
- Login:
- Deploy:
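The three steps above, sketched as shell commands (the installer script is Fly.io's official one):

```bash
# Install flyctl
curl -L https://fly.io/install.sh | sh

# Log in to Fly.io
flyctl auth login

# Deploy with RapidAI
rapidai deploy fly
```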
What happens:
- Generates fly.toml configuration
- Creates or updates Fly.io app
- Deploys using Paketo buildpacks
- Configures health checks
- Provides deployment URL
Environment variables:
Set secrets with:
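Using flyctl's secrets command; the variable name is an example:

```bash
flyctl secrets set OPENAI_API_KEY=your-key-here
```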
Heroku Deployment¶
- Install Heroku CLI:
```bash
# macOS
brew install heroku/brew/heroku

# Or download from https://devcenter.heroku.com/articles/heroku-cli
```
- Login:
- Deploy:
- Push code:
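The login, deploy, and push steps as shell commands (assuming your default branch is main):

```bash
heroku login
rapidai deploy heroku --app-name my-app
git push heroku main
```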
What happens:
- Generates Procfile
- Creates Heroku app
- Sets Python buildpack
- Provides deployment instructions
Environment variables:
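Heroku reads environment variables from config vars; the variable name is an example:

```bash
heroku config:set OPENAI_API_KEY=your-key-here
```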
Vercel Deployment¶
- Install Vercel CLI:
- Login:
- Deploy:
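The three steps as shell commands (the Vercel CLI is distributed via npm):

```bash
npm install -g vercel
vercel login
rapidai deploy vercel
```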
What happens:
- Generates vercel.json configuration
- Deploys as serverless function
- Provides deployment URL
Environment variables:
Set in Vercel dashboard or:
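Using the Vercel CLI; the variable name is an example:

```bash
vercel env add OPENAI_API_KEY
```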
AWS Deployment¶
AWS deployment is manual. Use:
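```bash
rapidai deploy aws
```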
This provides instructions for:
- AWS Lambda + API Gateway (serverless)
- AWS ECS/Fargate (containers)
- AWS Elastic Beanstalk (platform)
Development Workflow¶
1. Create Project¶
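Scaffold a project with the new command (my-project is a placeholder name):

```bash
rapidai new my-project
cd my-project
```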
2. Install Dependencies¶
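From inside the project directory:

```bash
pip install -r requirements.txt
```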
3. Configure Environment¶
Edit .env:
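A minimal .env, assuming a Claude or GPT backend (the variable names are illustrative):

```
# Set one of these, depending on your provider
ANTHROPIC_API_KEY=your-key-here
OPENAI_API_KEY=your-key-here
```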
4. Run Development Server¶
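Start the server with hot reload:

```bash
rapidai dev
```

The app is served at http://localhost:8000 by default.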
5. Test¶
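Run the test suite (requires pytest):

```bash
rapidai test
```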
6. Deploy¶
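Deploy to your platform of choice, for example:

```bash
rapidai deploy fly
```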
Best Practices¶
Project Structure¶
```
my-project/
├── app.py              # Main application
├── .env                # Environment variables (gitignored)
├── .env.example        # Example environment variables
├── requirements.txt    # Dependencies
├── README.md           # Documentation
├── tests/              # Tests
│   ├── __init__.py
│   └── test_app.py
├── prompts/            # Prompt templates (optional)
└── docs/               # Documents for RAG (optional)
```
Environment Variables¶
Always use .env for secrets:
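For example (the variable name is illustrative):

```
OPENAI_API_KEY=your-key-here
```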
Add .env.example for documentation:
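The example file lists the same variables with placeholder values, so it is safe to commit:

```
OPENAI_API_KEY=
```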
Testing¶
Create tests in tests/:
```python
# tests/test_app.py
import pytest
from rapidai.testing import TestClient


def test_chat():
    from app import app

    client = TestClient(app)
    response = client.post("/chat", json={"user_id": "test", "message": "hi"})
    assert response.status_code == 200
```
Run tests:
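```bash
rapidai test
```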
Documentation¶
Keep README.md updated:
```markdown
# My Project

## Setup

1. Install dependencies
2. Set environment variables
3. Run the dev server

## Endpoints

- POST /chat - Description
- GET /health - Description
```
Troubleshooting¶
Port Already in Use¶
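If another process holds port 8000, either free it or pick a different port (lsof is available on macOS and Linux):

```bash
# See what is using port 8000
lsof -i :8000

# Or run on another port
rapidai dev --port 8001
```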
Module Not Found¶
```bash
# Make sure you're in the project directory
cd my-project

# Specify the module path
rapidai dev --app app:app
```
Deployment Fails¶
```bash
# Check requirements.txt
cat requirements.txt

# Ensure all files are committed
git status
git add .
git commit -m "Deploy"
```
Missing Dependencies¶
```bash
# Install all dependencies
pip install -r requirements.txt

# For development (quoted so the brackets survive zsh globbing)
pip install "rapidai[dev]"
```
Next Steps¶
- See API Reference for complete CLI API documentation
- Check Configuration for environment variables
- Learn about Deployment best practices