Complete guide to building production-ready LLM applications with intelligent routing, enterprise analytics, and multi-provider management.
Get started with JustLLMs in seconds. Choose the installation option that fits your needs.
```bash
pip install justllms

# Includes PDF export, Redis caching, and advanced analytics
pip install justllms[pdf]

pip install justllms[all]
```

Get your first LLM response in under 30 seconds with automatic provider routing.
```python
from justllms import JustLLM

# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-openai-key"},
        "google": {"api_key": "your-google-key"},
        "anthropic": {"api_key": "your-anthropic-key"}
    }
})

# Simple completion - automatically routes to best provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)

print(response.content)
print(f"Used provider: {response.provider}")
print(f"Cost: ${response.cost:.4f}")
```

JustLLMs automatically chose the best provider based on cost, availability, and performance. No manual provider switching required.
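Because every response exposes `provider` and `cost` (as printed above), per-provider spend is easy to aggregate in plain Python. A minimal sketch using stand-in values rather than live responses:

```python
from collections import defaultdict

def record_usage(totals, provider, cost):
    """Accumulate per-provider spend; totals maps provider name -> dollars."""
    totals[provider] += cost
    return totals

# Stand-in values; in practice these would come from
# response.provider and response.cost after each call.
totals = defaultdict(float)
record_usage(totals, "openai", 0.0042)
record_usage(totals, "google", 0.0011)
record_usage(totals, "openai", 0.0030)

for provider, spend in totals.items():
    print(f"{provider}: ${spend:.4f}")
```

The same accumulator can feed dashboards or budget alerts; only the two response fields shown above are assumed.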
Connect to all major LLM providers with a single, consistent interface.
```python
from justllms import JustLLM

client = JustLLM({
    "providers": {
        "openai": {
            "api_key": "your-openai-key",
        },
        "anthropic": {
            "api_key": "your-anthropic-key",
        },
        "google": {
            "api_key": "your-google-key",
        },
        "ollama": {
            "base_url": "http://localhost:11434"
        }
    },
    "default_provider": "openai",  # Fallback if routing fails
    "timeout": 30  # Request timeout in seconds
})
```

Define tools once using a simple Python decorator and use them seamlessly across OpenAI, Anthropic, or Google without learning each provider's native API.
```python
from justllms import JustLLM, tool

@tool
def get_weather(location: str) -> dict:
    """Get weather for a location."""
    return {"temperature": 22, "condition": "sunny"}

# Works with OpenAI, Anthropic, Google - same code!
response = client.completion.create(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather],
    provider="openai",  # or "anthropic", "google"
    execute_tools=True
)

print(response.content)  # "The weather in Paris is sunny and 22 degrees."
```

Compare multiple LLM providers and models simultaneously via an interactive CLI tool. Perfect for evaluating prompt performance and cost differences.
```bash
# Launch the interactive menu
justllms sxs
```

Configure fallback providers and models for reliability. If an API outage occurs, your requests are cleanly and automatically rerouted.
```python
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"}
    },
    "routing": {
        "fallback_provider": "anthropic",
        "fallback_model": "claude-3-5-sonnet-20241022"
    }
})

# If no model is specified or the primary model fails, the fallback is used seamlessly
response = client.completion.create(
    messages=[{"role": "user", "content": "Hello"}]
)
```

Out-of-the-box support for server-side Google Search and Python code execution, making it well suited to intelligent autonomous agents.
```python
from justllms import GoogleSearch, GoogleCodeExecution

# Server-side Google Search and Python execution out-of-the-box
response = client.completion.create(
    messages=[{"role": "user", "content": "Latest AI news and calculate 2^10"}],
    tools=[GoogleSearch(), GoogleCodeExecution()],
    provider="google"
)
```
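The fallback routing configured earlier can be pictured as a simple try-in-order loop. The following is a minimal sketch of that idea in plain Python, not JustLLMs' actual implementation; the provider callables here are hypothetical stand-ins for per-provider completion calls:

```python
def complete_with_fallback(provider_calls, messages):
    """Try each provider callable in order; return the first success.

    provider_calls: list of (name, callable) pairs, tried in order.
    Raises the last error if every provider fails.
    """
    last_error = None
    for name, call in provider_calls:
        try:
            return name, call(messages)
        except Exception as err:  # in practice, catch provider-specific errors
            last_error = err
    raise last_error

# Hypothetical stand-ins: the primary provider is down, the fallback works.
def flaky_primary(messages):
    raise RuntimeError("provider outage")

def healthy_fallback(messages):
    return "Hello!"

name, reply = complete_with_fallback(
    [("openai", flaky_primary), ("anthropic", healthy_fallback)],
    [{"role": "user", "content": "Hello"}],
)
print(name, reply)  # anthropic Hello!
```

The library handles this rerouting internally; the sketch only illustrates why a request succeeds even when the primary provider errors out.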