MCP Server for AI Coding Agents
27 SEO tools for Claude Code, Cursor, VS Code, Windsurf, and ChatGPT. Crawl sites, score pages across 37 AI-readiness factors, and fix issues — all from your IDE via the Model Context Protocol.
Quick Start
1. Generate an API token at Dashboard → Settings → API Tokens.
2. Add the MCP server to your IDE.
Claude Code
claude mcp add llm-boost \
--env LLM_BOOST_API_TOKEN=llmb_xxx \
-- npx -y @llmrank.app/mcp
Cursor
~/.cursor/mcp.json
{
"mcpServers": {
"llm-boost": {
"command": "npx",
"args": [
"-y",
"@llmrank.app/mcp"
],
"env": {
"LLM_BOOST_API_TOKEN": "llmb_xxx"
}
}
}
}
Claude Desktop
claude_desktop_config.json
{
"mcpServers": {
"llm-boost": {
"command": "npx",
"args": [
"-y",
"@llmrank.app/mcp"
],
"env": {
"LLM_BOOST_API_TOKEN": "llmb_xxx"
}
}
}
}
VS Code
.vscode/mcp.json
{
"servers": {
"llm-boost": {
"command": "npx",
"args": [
"-y",
"@llmrank.app/mcp"
],
"env": {
"LLM_BOOST_API_TOKEN": "llmb_xxx"
}
}
}
}
Windsurf
~/.codeium/windsurf/mcp_config.json
{
"mcpServers": {
"llm-boost": {
"command": "npx",
"args": [
"-y",
"@llmrank.app/mcp"
],
"env": {
"LLM_BOOST_API_TOKEN": "llmb_xxx"
}
}
}
}
ChatGPT / HTTP
Endpoint: https://mcp.llmrank.app/v1/mcp
Auth: OAuth 2.1 with PKCE (auto-discovered via /.well-known/oauth-authorization-server)
Environment variables: LLM_BOOST_API_TOKEN (required, starts with llmb_) and LLM_BOOST_API_URL (optional, defaults to https://api.llmrank.app).
27 Available Tools
Every tool includes MCP annotations (readOnlyHint, destructiveHint) so your agent knows which tools are safe to call without confirmation.
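For example, a read-only tool's entry in a tools/list response carries these hints (the exact schema and description text here are illustrative, not the server's literal output):

```json
{
  "name": "list_projects",
  "description": "List all projects with domains and latest scores",
  "inputSchema": { "type": "object", "properties": {} },
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false
  }
}
```

An agent that sees readOnlyHint: true can call the tool without prompting for confirmation, while state-changing tools such as create_project would be annotated accordingly.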
Projects
3 tools
list_projects: List all projects with domains and latest scores
get_project: Get project details including latest crawl score
create_project: Create a new project by providing a domain
Crawling
3 tools
start_crawl: Start a crawl scoring pages across 37 AI-readiness factors
get_crawl_status: Check crawl progress in real time
list_crawls: Get crawl history with scores and timestamps
Scores & Analysis
5 tools
get_site_score: Overall AI-readiness score with category breakdown
compare_scores: Compare two crawls to see improvements or regressions
get_score_history: Score trends over time across crawls
list_pages: List crawled pages with scores, sortable by grade
get_page_details: Full page analysis with per-category scores, issues, and fixes
Issues & Fixes
3 tools
list_issues: All issues grouped by severity and category
get_fix_recommendation: AI-generated fix steps with code examples
generate_fix: Generate code snippets to resolve a specific issue
AI Visibility
3 tools
check_visibility: Check if your brand appears in AI search results across 6 platforms
list_visibility_history: Track AI search presence over time
suggest_queries: AI-suggested queries to monitor based on your content
Content & Technical
4 tools
analyze_content: Evaluate content across 37 AI-readiness factors
suggest_meta_tags: Generate optimized title, description, and OG tags
check_llms_txt: Validate your llms.txt for AI crawler permissions
validate_schema: Check structured data (JSON-LD, Schema.org)
Strategy
5 tools
get_recommendations: Prioritized action plan ranked by effort and impact
get_content_gaps: Topics competitors cover that you don't
discover_keywords: AI-powered keyword discovery with search volume
list_competitors: Competitors with AI-readiness score comparison
compare_competitor: Side-by-side comparison with a specific competitor
Reports
1 tool
generate_report: Comprehensive Markdown report with scores, issues, and recommendations
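All of these are invoked through the standard MCP tools/call request. A sketch of what the agent sends under the hood; the argument name project_id and its value are assumptions here, so check each tool's input schema for the actual parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_site_score",
    "arguments": { "project_id": "proj_123" }
  }
}
```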
Pre-built Prompts
Common workflows packaged as MCP prompts your agent can invoke directly.
site-audit: Full AI-readiness audit with scores, critical issues, and a prioritized action plan
fix-plan: Generate specific code and content fixes for your top issues
competitor-analysis: Compare your site against competitors and identify gaps
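Prompts are fetched with the standard MCP prompts/get method. A minimal request, assuming site-audit requires no arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": { "name": "site-audit", "arguments": {} }
}
```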
HTTP Transport & OAuth
For cloud-based agents like ChatGPT, the MCP server is available over Streamable HTTP with OAuth 2.1 authentication (PKCE, RFC 8414, RFC 9728).
MCP Endpoint: https://mcp.llmrank.app/v1/mcp
OAuth Discovery: /.well-known/oauth-authorization-server
Resource Metadata: /.well-known/oauth-protected-resource
Supports the MCP Authorization Spec: Dynamic Client Registration (RFC 7591), PKCE with S256, WWW-Authenticate with resource_metadata, and token refresh.
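The PKCE piece works the same as in any OAuth 2.1 flow (RFC 7636): the client generates a random code_verifier and sends its S256 code_challenge in the authorization request. A minimal Node.js sketch, independent of any LLM Rank API:

```typescript
// PKCE S256 sketch (RFC 7636): the client keeps codeVerifier secret,
// sends codeChallenge with the authorization request, then proves
// possession of the verifier at the token endpoint.
import { createHash, randomBytes } from "node:crypto";

// Base64url without padding, as RFC 7636 requires.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

const codeVerifier = base64url(randomBytes(32)); // 43-char random secret
const codeChallenge = base64url(
  createHash("sha256").update(codeVerifier).digest()
);

// codeChallenge goes in the authorization request together with
// code_challenge_method=S256; codeVerifier is sent later in the
// token request so the server can verify the pair.
console.log(codeChallenge);
```

MCP clients that implement the Authorization Spec run this exchange for you; the snippet only shows what "PKCE with S256" means on the wire.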
Frequently Asked Questions
What is the Model Context Protocol (MCP)?
MCP is an open standard created by Anthropic that lets AI coding agents connect to external tools and data sources. It gives clients such as Claude Code, Cursor, VS Code, and ChatGPT a unified way to call tools; in this case, the 27 SEO tools from LLM Rank.
Which AI coding agents support MCP?
Claude Code, Cursor, Claude Desktop, VS Code (via Copilot), Windsurf, ChatGPT, and Perplexity all support MCP servers. LLM Rank works with all of them — use stdio transport for local IDEs or the HTTP endpoint for cloud-based agents.
Do I need a paid plan to use the MCP server?
The MCP server works with any LLM Rank plan, including the free tier. Free accounts can crawl up to 10 pages per crawl with 2 crawls per month. Paid plans unlock higher limits and additional features like competitor analysis.
Is the MCP server open source?
Yes, the @llmrank.app/mcp package is MIT licensed and published on npm. The source code is available on GitHub.
How does the HTTP transport work for ChatGPT?
ChatGPT and other cloud-based clients use the Streamable HTTP transport at mcp.llmrank.app/v1/mcp. Authentication uses OAuth 2.1 with PKCE — the client discovers the authorization server via the /.well-known/oauth-authorization-server endpoint and completes the flow automatically.
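The discovery document is standard RFC 8414 authorization server metadata. A trimmed, illustrative response; the endpoint paths shown here are assumptions, not the live server's actual values:

```json
{
  "issuer": "https://mcp.llmrank.app",
  "authorization_endpoint": "https://mcp.llmrank.app/oauth/authorize",
  "token_endpoint": "https://mcp.llmrank.app/oauth/token",
  "registration_endpoint": "https://mcp.llmrank.app/oauth/register",
  "code_challenge_methods_supported": ["S256"]
}
```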
Start using LLM Rank from your IDE
Create a free account, generate an API token, and add the MCP server to your IDE in under 2 minutes.