Overview
Introduction
mcp-openai-gemini-llama-example is a basic example project demonstrating how to build an AI agent using the Model Context Protocol (MCP) together with large language models such as Meta Llama 3 (an open model), OpenAI models, or Google Gemini, backed by a SQLite database. It serves as an educational tool rather than a production-ready framework.
How to Use
To use mcp-openai-gemini-llama-example, you need Docker installed and running, a Hugging Face account with an access token that grants access to the Llama 3 model, and a Google API key for the Gemini model. After cloning the repository and installing the required packages, you can run the agent in interactive mode to query the SQLite database.
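The setup described above can be sketched as the following shell session. Note that the repository path placeholder, the requirements file name, and the environment-variable names (HF_TOKEN, GOOGLE_API_KEY) are assumptions; check the project's own README for the exact commands.

```shell
# Hypothetical setup steps; exact names may differ in the repository.
git clone https://github.com/<owner>/mcp-openai-gemini-llama-example.git
cd mcp-openai-gemini-llama-example
pip install -r requirements.txt

# Credentials for the hosted models (variable names are assumptions):
export HF_TOKEN="hf_..."        # Hugging Face token with Llama 3 access
export GOOGLE_API_KEY="..."     # Google AI Studio key for Gemini
```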
Key Features
Key features include connecting to an MCP server, loading tools and resources from it, converting MCP tool definitions into LLM-compatible function-call schemas, and calling the LLM through the openai SDK or the google-genai SDK.
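The tool-conversion step can be illustrated with a minimal sketch. This assumes MCP tools expose a name, a description, and a JSON-schema inputSchema (as in the MCP specification), and maps them to the OpenAI chat-completions "tools" format; the sample read_query tool below is hypothetical, standing in for whatever the MCP server actually advertises.

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Map an MCP tool (name/description/inputSchema) to an OpenAI function tool."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("inputSchema",
                                   {"type": "object", "properties": {}}),
        },
    }

# Example MCP tool as a server might advertise it (hypothetical):
read_query = {
    "name": "read_query",
    "description": "Run a SELECT query against the SQLite database",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

openai_tool = mcp_tool_to_openai(read_query)
print(openai_tool["function"]["name"])  # → read_query
```

The converted list would then be passed as the tools argument of a chat-completions call, so the model can emit structured function calls that the agent routes back to the MCP server.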
Where to Use
mcp-openai-gemini-llama-example can be used in educational contexts, research, and development environments where users want to explore AI agents and their interaction with databases using open LLMs.
Use Cases
Use cases include creating interactive database agents for querying information, educational demonstrations of AI capabilities, and prototyping AI applications that leverage LLMs for data interaction.
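To make the interactive-database use case concrete, here is a self-contained sketch using only the Python standard library. The table schema and the read_query tool name are invented for illustration; in the real project the tool would be served over MCP and invoked by the LLM's function calls rather than called directly.

```python
import sqlite3

# Toy in-memory database standing in for the project's SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("widget", 9.99), ("gadget", 19.99)])

def read_query(query: str) -> list[tuple]:
    """Tool the agent would expose to the LLM for read-only SQL."""
    if not query.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")
    return conn.execute(query).fetchall()

# The agent loop would pass the model-generated SQL here:
rows = read_query("SELECT name FROM products WHERE price < 10")
print(rows)  # → [('widget',)]
```

Restricting the tool to SELECT statements is a common guardrail when letting a model generate SQL, since the model's output is untrusted input.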