Architect Semantic Kernel .NET
An agent for building AI applications and agents using Semantic Kernel for .NET, covering plugin development, planner orchestration, memory integration, and prompt template management within the Microsoft AI ecosystem.
When to Use This Agent
Choose Semantic Kernel .NET when:
- Building AI agents using Semantic Kernel's .NET SDK
- Creating kernel plugins for tool integration in AI workflows
- Implementing planners for multi-step AI task orchestration
- Integrating vector memory stores with semantic search
- Developing prompt templates with Semantic Kernel's templating engine
Consider alternatives when:
- Using Microsoft Agent Framework (the newer unified framework)
- Building with Python-based Semantic Kernel (use Python-specific guidance)
- Working with non-Microsoft AI frameworks like LangChain
Quick Start
# .claude/agents/architect-semantic-kernel-dotnet.yml
name: Semantic Kernel .NET
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are a Semantic Kernel .NET expert. Build AI applications using the
  Semantic Kernel SDK. Always reference the latest Microsoft documentation.
  Create plugins with proper function descriptions for accurate AI tool
  selection.
Example invocation:
claude --agent architect-semantic-kernel-dotnet "Create a Semantic Kernel plugin that retrieves customer data from our API, with functions for lookup by ID, search by name, and recent orders, all with proper descriptions for AI function calling"
Core Concepts
Semantic Kernel Architecture
Application
    │
Kernel (orchestrator)
    ├── AI Services (OpenAI, Azure OpenAI, Hugging Face)
    ├── Plugins (native functions + prompt functions)
    ├── Memory (vector stores for RAG)
    ├── Planners (multi-step orchestration)
    └── Filters (pre/post execution hooks)
Plugin Development
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class CustomerPlugin
{
    private readonly ICustomerService _customerService;

    public CustomerPlugin(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    [KernelFunction("get_customer")]
    [Description("Retrieves customer details by their unique ID")]
    public async Task<Customer> GetCustomerAsync(
        [Description("The unique customer identifier")] string customerId)
    {
        return await _customerService.GetByIdAsync(customerId);
    }

    [KernelFunction("search_customers")]
    [Description("Searches customers by name, returns top matches")]
    public async Task<List<Customer>> SearchAsync(
        [Description("Name to search for (partial match)")] string name,
        [Description("Max results to return")] int limit = 10)
    {
        return await _customerService.SearchAsync(name, limit);
    }
}
Kernel Configuration
var builder = Kernel.CreateBuilder();

// Add AI service
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: config["AzureOpenAI:Endpoint"],
    apiKey: config["AzureOpenAI:ApiKey"]
);

// Add plugins
builder.Plugins.AddFromType<CustomerPlugin>();
builder.Plugins.AddFromType<OrderPlugin>();

// Add memory
builder.AddAzureAISearchVectorStore();

var kernel = builder.Build();
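With plugins registered, the model can be allowed to call them automatically. A minimal sketch, assuming a recent Microsoft.SemanticKernel release with the Azure OpenAI connector (where `FunctionChoiceBehavior.Auto()` is the way to enable automatic function calling; older versions used `ToolCallBehavior` instead):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Let the model choose and invoke registered kernel functions on its own.
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

// The kernel sends the prompt plus all function descriptions to the model.
// If the model requests get_customer, the kernel invokes it and feeds the
// result back until the model produces a final answer.
var result = await kernel.InvokePromptAsync(
    "Summarize the recent orders for customer 42",
    new KernelArguments(settings));

Console.WriteLine(result);
```

This loop is where function descriptions pay off: the model sees only the descriptions, never the implementations, when deciding which function to call.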
Configuration
| Parameter | Description | Default |
|---|---|---|
| ai_service | LLM provider | Azure OpenAI |
| model_deployment | Model deployment name | gpt-4o |
| vector_store | Vector memory store | Azure AI Search |
| planner | Orchestration strategy | Function Calling |
| auto_invoke | Auto-invoke kernel functions | true |
| max_auto_invoke | Maximum auto-invoke iterations | 5 |
| prompt_template_format | Template syntax | Handlebars |
Best Practices
- Write detailed descriptions on every kernel function and parameter. The AI model selects which function to call based on descriptions, not code. A function named GetData with no description will be called unpredictably. A function described as "Retrieves customer order history for the last N days, sorted by date descending" will be called precisely when needed. Invest as much thought in descriptions as in implementation.
- Use dependency injection to register plugins and services. Register plugins through the DI container rather than creating them inline. This enables proper lifecycle management, constructor injection of services, and testability. Plugins registered through DI can receive database connections, HTTP clients, and configuration through standard .NET patterns.
- Implement filters for cross-cutting concerns. Semantic Kernel filters run before and after function execution, similar to middleware. Use them for logging, authentication checks, input validation, and output sanitization. A pre-execution filter that logs every AI function call with parameters is invaluable for debugging and auditing. Post-execution filters can validate outputs before returning them to the model.
- Limit auto-invocation depth to prevent runaway costs. When auto-invoke is enabled, the kernel automatically calls functions the model requests. Without limits, a confused model can loop indefinitely, racking up API costs. Set MaxAutoInvokeAttempts to a reasonable number (3-5 for most scenarios). Monitor auto-invoke depth in production and alert when it consistently hits the limit.
- Use prompt templates for reusable AI interactions. Store prompts as template files rather than inline strings. Semantic Kernel supports Handlebars and Liquid template syntax with variable substitution, conditional sections, and helper functions. Template files can be versioned, tested, and updated without code changes. Organize templates in a dedicated folder structure by domain.
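The filter practice above can be sketched with Semantic Kernel's function-invocation filter interface. A minimal logging filter, assuming a recent Microsoft.SemanticKernel release that exposes `IFunctionInvocationFilter`:

```csharp
using Microsoft.SemanticKernel;

// Runs around every kernel function invocation, like middleware.
public sealed class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        // Pre-execution: log which function the model is calling.
        Console.WriteLine(
            $"Calling {context.Function.PluginName}.{context.Function.Name}");

        await next(context); // run the function (and any later filters)

        // Post-execution: inspect or validate the result before it
        // is returned to the model.
        Console.WriteLine($"Result: {context.Result}");
    }
}

// Register through the kernel builder's service collection:
// builder.Services.AddSingleton<IFunctionInvocationFilter, LoggingFilter>();
```

Because filters see both the arguments and the result, they are a natural place to enforce output sanitization or reject disallowed parameter values by throwing before `next` is called.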
Common Issues
AI model doesn't call the right plugin function. This almost always means the function description doesn't clearly communicate when to use it. Improve descriptions to distinguish similar functions: "Search by customer name for fuzzy matching" vs "Get by exact customer ID for precise lookup." Test by asking the model to explain when it would use each function; if it can't distinguish them, neither can the AI at runtime.
Memory/vector search returns irrelevant results. Vector similarity search depends heavily on embedding quality and chunk size. Ensure you're using the same embedding model for indexing and querying. Experiment with chunk sizes: too large and relevant details get diluted, too small and context is lost. Add metadata filters (date range, category) to narrow results before vector similarity ranking.
Kernel execution hits token limits with many plugins. Each registered plugin's functions and descriptions consume tokens in the model context. With 20 plugins and 100 functions, the function descriptions alone may use thousands of tokens. Only register plugins relevant to the current conversation context. Use dynamic plugin loading based on user intent, or split functionality into focused kernel instances with specific plugin sets.
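Dynamic plugin loading from the last issue can be sketched as a per-request kernel built with only the plugins the detected intent needs. This is a hypothetical helper, not a Semantic Kernel API; the intent strings and the `CustomerPlugin`/`OrderPlugin` types are illustrative:

```csharp
using Microsoft.SemanticKernel;

// Build a kernel containing only the plugins relevant to this request,
// so unused function descriptions don't consume context tokens.
Kernel BuildKernelForIntent(
    string intent, string deployment, string endpoint, string apiKey)
{
    var b = Kernel.CreateBuilder();
    b.AddAzureOpenAIChatCompletion(deployment, endpoint, apiKey);

    switch (intent)
    {
        case "customers":
            b.Plugins.AddFromType<CustomerPlugin>();
            break;
        case "orders":
            b.Plugins.AddFromType<OrderPlugin>();
            break;
        // For small talk, register nothing: the model answers directly.
    }

    return b.Build();
}
```

How the intent is detected (a cheap classifier call, keyword routing, or conversation state) is left open; the point is that plugin registration happens per request rather than once globally.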