
Announcing the OpenCRAVAT MCP Server

We are pleased to announce the release of the OpenCRAVAT MCP server, a new service that makes OpenCRAVAT’s variant annotation capabilities available to AI systems and other tools through the Model Context Protocol (MCP).

MCP is an emerging standard that allows language models and other clients to call structured “tools” exposed by external services. By providing an MCP interface to OpenCRAVAT, we enable AI assistants and automated workflows to retrieve high-quality variant annotations directly, using the same annotation engine and modules that many users already rely on.
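
Under the hood, an MCP client and server exchange JSON-RPC 2.0 messages; invoking a tool is just a “tools/call” request that names a tool and passes its arguments. The sketch below shows that shape as a Python literal. The tool name and argument key are illustrative assumptions, not the OpenCRAVAT server’s actual schema.

    # Rough shape of an MCP tool invocation (a JSON-RPC 2.0 request), written
    # as a Python literal. "annotate_variant" and "rsid" are placeholders;
    # the server's tool listing reports the real names and schemas.
    tool_call_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "annotate_variant",
            "arguments": {"rsid": "rs113488022"},  # BRAF V600E, for illustration
        },
    }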

What it can do

The OpenCRAVAT MCP server isn’t just a “lookup” service. It’s a variant-interpretation engine that can power genuinely interactive conversations about genetics. Hand it a variant (rsID, HGVS, or genomic coordinates) and it pulls together clinical assertions (e.g., ClinVar/ClinGen, CIViC), population frequencies (e.g., gnomAD and multiple cohorts), and a deep bench of effect predictors (missense, splice, conservation, constraint), plus specialized resources like BRCA1 functional assay scores. That breadth means users can do more than read a score: they can triage risk and reason about conflicting evidence. In short, it turns raw variants into evidence-backed, explainable, and queryable results that support both automation and thoughtful review (with the usual caveat: it’s a research/decision-support layer, not a substitute for clinical confirmation).

What tools it provides

The OpenCRAVAT MCP server exposes a set of tools that allow clients to:

  • Annotate variants by genomic coordinates (chromosome, position, reference allele, alternate allele)
  • Annotate dbSNP identifiers (rsIDs)
  • Annotate ClinGen Allele Registry identifiers (CAids)
  • Annotate HGVS expressions at the genomic, coding, or protein level
  • Convert protein missense notation (for example, BRAF V600E) into candidate genomic HGVS changes using SynVar
  • Discover which annotators are being run and inspect the output fields they provide

All annotations are generated using the GRCh38 (hg38) reference assembly and are executed against the public OpenCRAVAT annotation service.
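
To make the list above concrete, here is a sketch of the kinds of inputs these tools accept, using BRAF V600E (mentioned above) as the example variant. The tool and parameter names are assumptions made for illustration; the discovery tool reports the actual names and output fields.

    # Illustrative inputs, all describing BRAF V600E. Tool and parameter names
    # are hypothetical; consult the server's tool listing for the real schema.
    example_inputs = {
        "genomic_coordinates": {"chrom": "chr7", "pos": 140753336,  # GRCh38/hg38
                                "ref": "A", "alt": "T"},
        "dbsnp_rsid": {"rsid": "rs113488022"},
        "clingen_caid": {"caid": "CA123456"},  # placeholder CAid, format only
        "hgvs_coding": {"hgvs": "NM_004333.4:c.1799T>A"},
        "protein_notation": {"gene": "BRAF", "change": "V600E"},
    }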

Get started

We provide a hosted MCP endpoint at:

https://mcp.opencravat.org/sse

Don’t try to open the link in a web browser; it isn’t an HTML page and won’t render anything. Instead, use it with an MCP client such as Claude or ChatGPT, as described below.
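
If you would rather script against the endpoint directly, here is a minimal sketch using the official MCP Python SDK (the “mcp” package). It connects over SSE, prints the available tools, and then calls one of them. The tool name and argument in the final call are placeholders; the listing printed just above it shows the real names.

    import asyncio

    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async def main():
        # Connect to the hosted OpenCRAVAT MCP endpoint over SSE.
        async with sse_client("https://mcp.opencravat.org/sse") as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Discover the tools the server exposes.
                tools = await session.list_tools()
                for tool in tools.tools:
                    print(tool.name, "-", tool.description)

                # Hypothetical call; the tool name and argument are placeholders.
                # Check the listing printed above for the real schema.
                result = await session.call_tool(
                    "annotate_rsid", {"rsid": "rs113488022"}
                )
                print(result.content)

    asyncio.run(main())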

Claude

To connect the server to Claude, follow these instructions. Use the URL above and do not set up auth; no user-specific data is needed for the MCP to work. Remember to enable the MCP for your chat.

ChatGPT

To connect with ChatGPT, you must enable developer mode, then add a custom app:

  • Open Settings
  • Go to Apps
  • In Advanced Settings, use the toggle to turn on “Developer Mode”
  • Click Create App
  • Use the URL above as the “MCP Server URL”
  • Set Authentication to “No Auth”
  • Add a name to the MCP App that you’ll easily recognize, such as “OpenCRAVAT”

You will have to add the MCP to a new chat using the + icon in the lower left of the chat box.

Local use

The MCP can also be run locally!

The OpenCRAVAT MCP is fully open source, and detailed instructions for running it yourself are available in the project repository, including how to connect the server to clients such as Claude Desktop or other MCP-compatible tools.
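
As a rough sketch of what the Claude Desktop side of that looks like: Claude Desktop reads local MCP servers from the mcpServers section of its claude_desktop_config.json file (whose location is OS-specific). The launch command below is a placeholder; the actual command is given in the repository instructions.

    import json
    from pathlib import Path

    # Sketch: register a locally running OpenCRAVAT MCP server with Claude
    # Desktop by adding an "mcpServers" entry to claude_desktop_config.json.
    # Both the path and the command/args values are placeholders; the real
    # launch command is documented in the project repository.
    config_path = Path("claude_desktop_config.json")  # actual location is OS-specific
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["opencravat"] = {
        "command": "see-the-repository-readme",  # placeholder
        "args": [],
    }
    config_path.write_text(json.dumps(config, indent=2))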

Why this matters

By exposing OpenCRAVAT through MCP:

  • AI tools can retrieve authoritative annotations on demand rather than relying on model training data.
  • Developers can integrate variant annotation into agent workflows without building custom APIs.
  • Researchers can prototype AI-assisted analysis pipelines that combine natural language reasoning with established bioinformatics infrastructure.

We welcome feedback, issues, and contributions from the community. This release is an early step toward making variant annotation more accessible in AI-driven workflows, and we expect to expand both tool coverage and configuration options over time.

If you have questions or suggestions, please reach out by posting in the Issues section of our GitHub repository.