Open Source Featured

AI Bridge MCP Server

Impact Summary

Built a secure Model Context Protocol (MCP) server that lets Claude Code talk to OpenAI and Google Gemini through a single hardened gateway, adding multi-layer security, robust logging, and flexible configuration.

Role

Creator & Maintainer

Timeline

2025–Present

Scale

  • Multi-model
  • MCP ecosystem
  • Security-focused

Links

Internal / Confidential

Problem

Most developers wire Claude Code or other MCP clients directly to provider APIs such as OpenAI and Gemini. That works for experiments, but it doesn’t scale well for secure, multi-model usage:

  • Every environment must manage its own credentials and config.
  • There’s usually no consistent security layer for prompt injection detection, content filtering, or rate limiting.
  • Error handling and logging are often ad hoc, making production debugging painful.

The result is a tangle of one-off configs and half-baked security controls wrapped around extremely powerful APIs.

Approach

I built AI Bridge, a dedicated MCP server that sits between Claude Code and major LLM providers, enforcing a consistent, hardened surface area for AI access.

At its core, AI Bridge exposes tools like ask_openai and ask_gemini over the Model Context Protocol, while encapsulating all the messy bits of configuration, validation, and safety behind a single server process.
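The pattern described above can be sketched as a small dispatch layer. This is an illustrative sketch, not the actual AI Bridge source: tool names match the ones mentioned here, but the handler bodies are stubs standing in for real provider API calls, and the default model names are placeholders.

```typescript
// Sketch: expose ask_openai / ask_gemini as named tools behind one
// dispatch gate, the single place where validation, filtering, and
// rate limiting would hook in. Handlers are stubs for illustration.

type ToolArgs = { prompt: string; model?: string };
type ToolHandler = (args: ToolArgs) => string;

// Central registry: one entry per provider-facing tool.
const tools = new Map<string, ToolHandler>();

tools.set("ask_openai", ({ prompt, model = "gpt-4o" }) => {
  // A real implementation would call the OpenAI API here.
  return `[openai:${model}] ${prompt}`;
});

tools.set("ask_gemini", ({ prompt, model = "gemini-1.5-pro" }) => {
  // A real implementation would call the Gemini API here.
  return `[gemini:${model}] ${prompt}`;
});

// Single entry point: every tool call passes through the same gate.
function handleToolCall(name: string, args: ToolArgs): string {
  const handler = tools.get(name);
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

Because every call funnels through one function, security checks and logging live in exactly one place instead of being duplicated per provider.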

Key Design Elements

  • Multi-provider model access

    • OpenAI: GPT-4o family, reasoning models (o-series), and other compatible endpoints.
    • Google Gemini: 1.5 Pro / 1.5 Flash and vision-capable models.
  • Multi-layer security

    • Input validation and sanitization for incoming prompts.
    • Configurable content filtering to block explicit, violent, or illegal content.
    • Prompt injection detection to catch “ignore previous instructions” and similar attacks.
    • Rate limiting to prevent API abuse, with tunable thresholds.
  • Strong configuration story

    • Multiple options for secrets: a global ~/.env, a project-local .env, or process environment variables.
    • Single configuration surface controlling logging level, server identity, security levels, and performance options (e.g., pattern caching, scan depth).
  • Operational robustness

    • Winston-based structured logging for observability.
    • Specific error types and messages that avoid leaking sensitive details.
    • Jest test suite and GitHub workflows to keep the server reliable.
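Two of the security layers above can be sketched concisely. This is an illustrative example, not the actual AI Bridge code: the injection patterns and rate-limit thresholds shown here are hypothetical, and a production pattern set would be larger and configurable.

```typescript
// Sketch: pattern-based prompt injection detection plus a
// sliding-window rate limiter, both configurable primitives.

// Hypothetical example patterns; a real deployment would load a
// configurable, cached pattern set (see "pattern caching" above).
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now (in )?developer mode/i,
];

function detectPromptInjection(prompt: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(prompt));
}

// Sliding-window limiter: at most `limit` calls per `windowMs` per key.
class RateLimiter {
  private calls = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    // Keep only timestamps still inside the window.
    const recent = (this.calls.get(key) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    if (recent.length >= this.limit) {
      this.calls.set(key, recent);
      return false; // over the threshold: reject the call
    }
    recent.push(now);
    this.calls.set(key, recent);
    return true;
  }
}
```

Keeping both checks as standalone, parameterized primitives is what lets them be tuned per deployment rather than hard-coded into each tool handler.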

The goal is simple: centralize the risk and make it easier to build safe AI workflows on top of multiple model providers.

Outcomes

  • Developers can add a single MCP server to Claude Code and immediately query both OpenAI and Gemini under a consistent security policy.
  • Security posture improves by default: rate limits, prompt injection detection, and content filtering are all first-class configuration options rather than afterthoughts.
  • Teams get a clear separation of concerns:
    • MCP client config focuses on “what tools do I have?”
    • AI Bridge focuses on “how do we safely talk to these providers?”
  • The project is listed in MCP registries and directories, which makes it easy to discover and signals real-world adoption.
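The separation of concerns above is visible in the client config itself. A minimal sketch of what an MCP client entry for AI Bridge might look like, following the common mcpServers layout used by MCP clients; the server name, launch path, and env var names here are illustrative, not taken from the project's docs:

```json
{
  "mcpServers": {
    "ai-bridge": {
      "command": "node",
      "args": ["path/to/ai-bridge/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "GEMINI_API_KEY": "..."
      }
    }
  }
}
```

The client side only declares which server to launch and how; all provider policy, filtering, and rate limiting stay inside AI Bridge.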

Key Contributions

  • Designed and implemented a secure MCP server that unifies access to OpenAI and Gemini.
  • Implemented layered prompt validation, content filtering, and rate limiting as configurable primitives instead of hard-coded checks.
  • Built a clean configuration model (global and local .env, Claude Code integration) that reduces setup friction.
  • Added structured logging, targeted error messages, and test coverage to support real-world usage and debugging.
  • Documented installation and MCP configuration flows so that developers can go from clone → configured → usable in minutes.