RapidAI ⚡

The Python Framework for Lightning-Fast AI Prototypes

Build production-ready AI applications in under an hour. Zero-config LLM integration, streaming by default, batteries included.

✨ Features

🚀 Zero-Config LLM Integration

Built-in clients for OpenAI, Anthropic, Cohere, and local models with a unified interface. Swap providers with one line of code.
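To make the "swap providers with one line" claim concrete, here is a minimal pure-Python sketch of how a unified interface over multiple providers can work. The class and function names (`ChatClient`, `OpenAIClient`, `AnthropicClient`, `make_llm`) are hypothetical illustrations, not RapidAI's actual internals; real code would call each provider's SDK where the comments indicate.

```python
# Sketch of a unified chat interface with provider routing.
# All names here are hypothetical -- not RapidAI's real internals.
from dataclasses import dataclass
from typing import Protocol


class ChatClient(Protocol):
    def chat(self, message: str) -> str: ...


@dataclass
class OpenAIClient:
    model: str

    def chat(self, message: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[{self.model}] echo: {message}"


@dataclass
class AnthropicClient:
    model: str

    def chat(self, message: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[{self.model}] echo: {message}"


def make_llm(model: str) -> ChatClient:
    """Route a model name to the matching provider client."""
    if model.startswith("claude"):
        return AnthropicClient(model)
    return OpenAIClient(model)


# Swapping providers is just changing the model string:
llm = make_llm("claude-3-haiku-20240307")
```

Because every client satisfies the same `ChatClient` protocol, calling code never needs to know which provider is behind it.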

📡 Streaming by Default

Server-Sent Events (SSE) and WebSocket streaming built into routes, not bolted on. Real-time AI responses out of the box.

🧠 Smart Memory

Per-user conversation tracking with pluggable backends: Redis, PostgreSQL, or in-memory.
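The idea behind a pluggable memory backend can be sketched in a few lines of plain Python. `InMemoryBackend` below is a hypothetical illustration (not RapidAI's API); a Redis or PostgreSQL backend would expose the same `append`/`history` methods against a different store.

```python
# Toy per-user conversation memory with a swappable backend.
# Hypothetical names -- illustrates the pattern, not RapidAI's API.
from collections import defaultdict


class InMemoryBackend:
    """Dict-backed store; Redis/PostgreSQL backends would share this interface."""

    def __init__(self):
        self._store = defaultdict(list)

    def append(self, user_id: str, role: str, content: str) -> None:
        self._store[user_id].append({"role": role, "content": content})

    def history(self, user_id: str) -> list:
        return list(self._store[user_id])


memory = InMemoryBackend()
memory.append("alice", "user", "Hello!")
memory.append("alice", "assistant", "Hi, how can I help?")
memory.append("bob", "user", "Unrelated conversation")
```

Each user's history stays isolated, so concurrent conversations never bleed into one another.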

💾 Intelligent Caching

LLM response caching that understands semantic similarity. Save money and improve response times automatically.
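"Understands semantic similarity" means the cache returns a stored answer when a new prompt is close enough to one already seen, not only on an exact match. The toy sketch below illustrates that idea; production systems use embedding models for similarity, and bag-of-words cosine similarity stands in here. `SemanticCache` and its threshold are hypothetical, not RapidAI's API.

```python
# Toy semantic cache: reuse an answer when a new prompt is
# "close enough" to a cached one. Bag-of-words cosine similarity
# stands in for real embeddings.
import math
from collections import Counter
from typing import Optional


def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (vector, answer) pairs

    def get(self, prompt: str) -> Optional[str]:
        vec = vectorize(prompt)
        for cached_vec, answer in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return answer  # cache hit: skip the LLM call entirely
        return None

    def put(self, prompt: str, answer: str) -> None:
        self.entries.append((vectorize(prompt), answer))


cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
```

A rephrased prompt that shares most of its tokens with a cached one scores above the threshold and is served from cache, which is where the cost and latency savings come from.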

📝 Prompt Management

Version, test, and swap prompts without code changes. Jinja2 templating with hot reloading in development.
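The versioning-and-swapping idea can be shown without any framework code. The sketch below uses the standard library's `string.Template` in place of Jinja2 (which the docs name) so it stays dependency-free; the `summarize@v1` naming scheme and `render` helper are hypothetical illustrations.

```python
# Sketch of versioned, swappable prompts. RapidAI uses Jinja2 per
# the docs above; string.Template stands in to keep this stdlib-only.
from string import Template

PROMPTS = {
    "summarize@v1": Template("Summarize the following text: $text"),
    "summarize@v2": Template("Summarize in $n bullet points: $text"),
}


def render(name: str, **params: str) -> str:
    """Look up a prompt by name/version and fill in its variables."""
    return PROMPTS[name].substitute(**params)


# Switching prompt versions is a string change, not a code change:
prompt = render("summarize@v2", n="3", text="RapidAI is a Python framework.")
```

Because prompts live in data rather than code, they can be A/B tested or rolled back without redeploying the application.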

🔍 RAG in Minutes

Document parsing, vector storage, and retrieval with a two-line setup. Built-in support for PDFs, DOCX, and more.
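At its core, the retrieval step in RAG scores stored document chunks against a query and feeds the best matches to the model. The toy sketch below shows that step with simple token overlap standing in for a vector database and embeddings; `score` and `retrieve` are hypothetical helpers, not RapidAI functions.

```python
# Toy sketch of the retrieval step in RAG: rank stored chunks by
# similarity to the query. Real setups use embeddings + a vector DB;
# token overlap stands in here.
def score(query: str, chunk: str) -> int:
    return len(set(query.lower().split()) & set(chunk.lower().split()))


def retrieve(query: str, chunks: list, k: int = 2) -> list:
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]


docs = [
    "RapidAI supports PDF and DOCX parsing out of the box.",
    "Streaming responses use Server-Sent Events.",
    "The cat sat on the mat.",
]
top = retrieve("how does rapidai parse pdf documents", docs, k=1)
```

The retrieved chunks would then be inserted into the prompt as context, which is what turns plain chat into document Q&A.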

🎯 Quick Example

app.py
from rapidai import App, LLM, stream

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
@stream
async def chat(message: str):
    """Stream a chat response."""
    response = await llm.chat(message, stream=True)
    async for chunk in response:
        yield chunk

if __name__ == "__main__":
    app.run()
Run it

python app.py

Test it

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, AI!"}'

🌟 Why RapidAI?

Convention over Configuration

Sensible defaults everywhere. Get started in minutes, not hours.

Provider Agnostic

Never get locked into a single LLM provider. Switch between OpenAI, Anthropic, or local models with ease.

Async-First

Built from the ground up with async/await for maximum performance.

Type-Safe

Full type hints for excellent IDE support and fewer runtime errors.

Production Ready

Error handling, rate limiting, monitoring, and deployment templates included.
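Of those guardrails, rate limiting is the easiest to show in miniature. Below is a standard token-bucket limiter in plain Python, the kind of mechanism a production AI endpoint needs to protect upstream LLM quotas; `TokenBucket` is a hypothetical illustration, not RapidAI's API.

```python
# Toy token-bucket rate limiter: allow short bursts, then throttle
# to a steady refill rate. Hypothetical names, not RapidAI's API.
import time


class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Allow bursts of 2 requests, refilling one token per second.
bucket = TokenBucket(rate=1.0, capacity=2)
```

Requests beyond the burst size are rejected until tokens refill, smoothing traffic to whatever rate the downstream provider tolerates.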

📊 Perfect For

  • 🚀 Rapid POCs - Test AI features in minutes
  • 🏢 Internal Tools - Build dashboards and automation
  • 💬 Chat Interfaces - Customer support and assistants
  • 📚 RAG Applications - Document Q&A systems
  • 🔄 Document Processing - Automated pipelines
  • 🌐 AI-Powered APIs - Production-grade endpoints

🎓 Learn More

📘 Tutorial

Step-by-step guide from simple chatbot to production deployment.

Start Learning →

📖 API Reference

Complete documentation of all classes, methods, and decorators.

Browse Reference →

🚀 Deployment

Deploy to Docker, AWS, GCP, Azure, and more with confidence.

Deploy Now →

🤝 Community

📄 License

RapidAI is licensed under the MIT License. See LICENSE for details.


Ready to build something amazing?

Get Started Now →