🦍 The API and AI Gateway
Updated Jan 19, 2026 · Lua
Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.
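An "OpenAI-compatible" endpoint means a self-hosted model can be called with the same `/v1/chat/completions` request shape the OpenAI API uses. A minimal stdlib-only sketch, assuming a hypothetical local server at `localhost:8000` and a placeholder API key:

```python
import json
import urllib.request

# Hypothetical self-hosted endpoint; any OpenAI-compatible server
# (e.g. one serving DeepSeek or Llama) exposes the same route.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload and return the first choice's message text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-local",  # placeholder key, not real
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request and response schemas match, existing OpenAI client code usually only needs its base URL changed to target the self-hosted model.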
Deploy serverless AI workflows at scale: Firebase for AI agents.
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry
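The RAG pattern the frameworks above implement is: retrieve the documents most similar to the query, then prepend them to the prompt so the LLM answers from that context. A toy, stdlib-only illustration (not any specific framework's API) using bag-of-words cosine similarity over a hardcoded document store:

```python
import math
from collections import Counter

# Toy in-memory document store; a production RAG system would use a
# vector database and learned embeddings instead of word counts.
DOCS = [
    "Airflow schedules data pipelines as DAGs.",
    "Kubernetes orchestrates containers across clusters.",
    "Retrieval-augmented generation grounds LLM answers in documents.",
]

def _vec(text: str) -> Counter:
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = _vec(query)
    ranked = sorted(DOCS, key=lambda d: _cosine(qv, _vec(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context + question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The frameworks listed here layer chunking, embedding models, and evaluation on top of this same retrieve-then-generate loop.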
The open LLM Ops platform - Traces, Analytics, Evaluations, Datasets and Prompt Optimization ✨
The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real-time. Work together seamlessly to build and iterate on AI applications.
AIConfig is a config-based framework to build generative AI applications.
Your autonomous engineering team in a CLI. Point Zeroshot at an issue, walk away, and return to production-grade code. Supports Claude Code, OpenAI Codex, OpenCode, and Gemini CLI.
Python SDK for running evaluations on LLM generated responses
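At its core, an LLM evaluation SDK scores each (prediction, reference) pair with a metric and aggregates the scores. A minimal sketch with an invented exact-match metric; real eval SDKs add datasets, async batching, and model-graded scoring:

```python
from typing import Callable, Iterable, Tuple

def exact_match(pred: str, ref: str) -> float:
    """Score 1.0 when prediction equals reference, ignoring case/whitespace."""
    return 1.0 if pred.strip().lower() == ref.strip().lower() else 0.0

def evaluate(
    cases: Iterable[Tuple[str, str]],
    metric: Callable[[str, str], float],
) -> float:
    """Mean metric score over (prediction, reference) pairs."""
    scores = [metric(pred, ref) for pred, ref in cases]
    return sum(scores) / len(scores)

cases = [("Paris", "paris"), ("Berlin ", "Munich")]
print(evaluate(cases, exact_match))  # 0.5
```

Swapping in a semantic-similarity or LLM-as-judge metric is a matter of passing a different `metric` callable.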
An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
[⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI
Quality Control for AI Artifact Management
The context API for AI agents
Miscellaneous code and writing on MLOps.
🔍 AI observability skill for Claude Code. Debug LangChain/LangGraph agents by fetching execution traces from LangSmith Studio directly in your terminal.
Deployment of RAG + LLM model serving on multiple K8s cloud clusters