
Qdrant Vector Search MCP Server

Implement semantic search and RAG memory layers using Qdrant vector database through MCP. Create collections, index embeddings, run similarity queries, and build retrieval-augmented generation pipelines.

MCP · Community · database · v1.0.0 · Apache-2.0

MCP Server Configuration

Add to .claude/settings.json:

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "npx",
      "args": ["-y", "@qdrant/mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "QDRANT_API_KEY": "${QDRANT_API_KEY}",
        "COLLECTION_NAME": "default",
        "EMBEDDING_MODEL": "text-embedding-3-small"
      }
    }
  }
}
```

Available Tools

| Tool | Description |
|------|-------------|
| `store` | Store a text snippet with metadata as a vector point in the collection |
| `find` | Semantic search: find the most similar stored items to a query |
| `create-collection` | Create a new collection with specified vector dimensions |
| `delete-collection` | Delete an existing collection |
| `list-collections` | List all available collections |

Use Cases

Semantic Code Search

1. Index code documentation and function signatures
2. Query: "function that handles user authentication"
3. Returns the most semantically similar code snippets, even when they don't contain the exact words

RAG Memory Layer

1. Store conversation history, project documentation, and decisions
2. On each new query, retrieve relevant context from the vector store
3. Include retrieved context in the prompt for more informed responses
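A minimal stdlib sketch of steps 2 and 3, with a plain Python list standing in for the Qdrant collection and a toy `embed()` function (all names here are hypothetical):

```python
# Toy RAG memory layer: rank stored context by cosine similarity to the
# query, then splice the best matches into the prompt. A plain list stands
# in for the vector store; embed() is a stand-in for a real model.
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-letters "embedding"; a real system would call a model.
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

memory = [
    "Decision: we chose PostgreSQL over MongoDB for transactional safety.",
    "The deploy pipeline runs on GitHub Actions with a staging gate.",
]

def build_prompt(query: str, top_k: int = 1) -> str:
    qv = embed(query)
    ranked = sorted(memory, key=lambda doc: cosine(qv, embed(doc)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Why did we pick PostgreSQL?"))
```

In a real deployment the `memory` list and similarity ranking are replaced by the `store` and `find` tools above; only the prompt assembly stays on the client side.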

Knowledge Base

1. Index internal documentation, runbooks, and ADRs
2. Search with natural language questions
3. Get relevant documentation sections ranked by similarity

Collection Configuration

```json
{
  "collection_name": "project-docs",
  "vectors": { "size": 1536, "distance": "Cosine" },
  "optimizers_config": {
    "default_segment_number": 2,
    "indexing_threshold": 20000
  },
  "replication_factor": 1
}
```

Distance Metrics

| Metric | Best For | Range |
|--------|----------|-------|
| Cosine | Text similarity (most common) | -1 to 1 |
| Euclid | When magnitude matters | 0 to inf |
| Dot | Pre-normalized vectors, high performance | -inf to inf |
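The differences between the metrics are easy to see on toy vectors: cosine ignores magnitude, while dot product and Euclidean distance do not. A plain-Python illustration (not how Qdrant computes them internally):

```python
# Compare the three distance metrics on two vectors that point in the same
# direction but differ in magnitude.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

v = [1.0, 2.0, 3.0]
w = [2.0, 4.0, 6.0]  # same direction, double the magnitude

print(cosine(v, w))  # 1.0: identical direction, magnitude ignored
print(dot(v, w))     # 28.0: grows with magnitude
print(euclid(v, w))  # ~3.74: nonzero despite identical direction
```

This is why cosine is the default for text embeddings, and why dot product is only safe when vectors are pre-normalized (at which point it equals cosine).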

Embedding Model Dimensions

| Model | Dimensions | Provider |
|-------|------------|----------|
| text-embedding-3-small | 1536 | OpenAI |
| text-embedding-3-large | 3072 | OpenAI |
| voyage-code-3 | 1024 | Voyage AI |
| nomic-embed-text | 768 | Nomic (open source) |
| bge-large-en-v1.5 | 1024 | BAAI (open source) |
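The collection's vector `size` must match the chosen model's output dimension, or every upsert will fail. An illustrative lookup helper built from the table above (not part of the MCP server):

```python
# Map embedding model names to their output dimensions so the collection's
# vector size can be derived instead of hard-coded.
MODEL_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "voyage-code-3": 1024,
    "nomic-embed-text": 768,
    "bge-large-en-v1.5": 1024,
}

def vector_size(model: str) -> int:
    # A dimension mismatch breaks indexing silently later, so fail fast here.
    if model not in MODEL_DIMS:
        raise ValueError(f"unknown embedding model: {model}")
    return MODEL_DIMS[model]

print(vector_size("text-embedding-3-small"))  # 1536
```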

Indexing Workflow

```python
# Example: Index project documentation
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, VectorParams, Distance

client = QdrantClient(url="http://localhost:6333")

# Create collection
client.create_collection(
    collection_name="project-docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# Index documents
points = []
for i, doc in enumerate(documents):
    embedding = get_embedding(doc["text"])  # Your embedding function
    points.append(PointStruct(
        id=i,
        vector=embedding,
        payload={
            "text": doc["text"],
            "source": doc["file_path"],
            "type": doc["type"],  # "code", "docs", or "adr"
        },
    ))

client.upsert(collection_name="project-docs", points=points)
```

Setup

  1. Start Qdrant locally with Docker:

```shell
docker run -d --name qdrant \
  -p 6333:6333 -p 6334:6334 \
  -v qdrant_storage:/qdrant/storage \
  qdrant/qdrant:latest
```
  2. Or use Qdrant Cloud: https://cloud.qdrant.io
  3. Set QDRANT_API_KEY environment variable (required for Qdrant Cloud)
  4. Add the MCP configuration to .claude/settings.json

Security Notes

  • For local development, Qdrant runs without authentication by default
  • Always use API key authentication in production (Qdrant Cloud requires it)
  • The MCP server only connects to the configured Qdrant instance
  • Be mindful of what you index: vector stores can inadvertently retain sensitive content in payloads
  • Use collection-level access control in multi-tenant deployments