
Architect Semantic Kernel .NET

A battle-tested agent for create, update, refactor, and explain tasks. Includes structured workflows, validation checks, and reusable patterns for data/AI work.

Agent · Cliptics · data-ai · v1.0.0 · MIT


An agent for building AI applications and agents using Semantic Kernel for .NET, covering plugin development, planner orchestration, memory integration, and prompt template management within the Microsoft AI ecosystem.

When to Use This Agent

Choose Semantic Kernel .NET when:

  • Building AI agents using Semantic Kernel's .NET SDK
  • Creating kernel plugins for tool integration in AI workflows
  • Implementing planners for multi-step AI task orchestration
  • Integrating vector memory stores with semantic search
  • Developing prompt templates with Semantic Kernel's templating engine

Consider alternatives when:

  • Using Microsoft Agent Framework (the newer unified framework)
  • Building with Python-based Semantic Kernel (use Python-specific guidance)
  • Working with non-Microsoft AI frameworks like LangChain

Quick Start

```yaml
# .claude/agents/architect-semantic-kernel-dotnet.yml
name: Semantic Kernel .NET
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are a Semantic Kernel .NET expert. Build AI applications using the
  Semantic Kernel SDK. Always reference the latest Microsoft documentation.
  Create plugins with proper function descriptions for accurate AI tool selection.
```

Example invocation:

```
claude --agent architect-semantic-kernel-dotnet "Create a Semantic Kernel plugin that retrieves customer data from our API, with functions for lookup by ID, search by name, and recent orders, all with proper descriptions for AI function calling"
```

Core Concepts

Semantic Kernel Architecture

```
Application
    ↓
Kernel (orchestrator)
├── AI Services (OpenAI, Azure OpenAI, Hugging Face)
├── Plugins (native functions + prompt functions)
├── Memory (vector stores for RAG)
├── Planners (multi-step orchestration)
└── Filters (pre/post execution hooks)
```

Plugin Development

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class CustomerPlugin
{
    private readonly ICustomerService _customerService;

    // The service is injected via the DI container (see Best Practices).
    public CustomerPlugin(ICustomerService customerService) =>
        _customerService = customerService;

    [KernelFunction("get_customer")]
    [Description("Retrieves customer details by their unique ID")]
    public async Task<Customer> GetCustomerAsync(
        [Description("The unique customer identifier")] string customerId)
    {
        return await _customerService.GetByIdAsync(customerId);
    }

    [KernelFunction("search_customers")]
    [Description("Searches customers by name, returns top matches")]
    public async Task<List<Customer>> SearchAsync(
        [Description("Name to search for (partial match)")] string name,
        [Description("Max results to return")] int limit = 10)
    {
        return await _customerService.SearchAsync(name, limit);
    }
}
```

Kernel Configuration

```csharp
var builder = Kernel.CreateBuilder();

// Add AI service
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: config["AzureOpenAI:Endpoint"],
    apiKey: config["AzureOpenAI:ApiKey"]);

// Add plugins
builder.Plugins.AddFromType<CustomerPlugin>();
builder.Plugins.AddFromType<OrderPlugin>();

// Add memory
builder.AddAzureAISearchVectorStore();

var kernel = builder.Build();
```
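With the kernel built, the plugin above can be driven by automatic function calling. A minimal sketch, assuming the Azure OpenAI connector package and the `kernel` from the previous snippet; the prompt and customer ID are illustrative:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Let the model decide when to call CustomerPlugin/OrderPlugin functions.
OpenAIPromptExecutionSettings settings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "Summarize recent activity for customer 42.",
    new KernelArguments(settings));

Console.WriteLine(result);
```

The model sees each registered function's name and description, picks the relevant ones, and the kernel invokes them before composing the final answer.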

Configuration

| Parameter | Description | Default |
|---|---|---|
| `ai_service` | LLM provider | Azure OpenAI |
| `model_deployment` | Model deployment name | gpt-4o |
| `vector_store` | Vector memory store | Azure AI Search |
| `planner` | Orchestration strategy | Function Calling |
| `auto_invoke` | Auto-invoke kernel functions | true |
| `max_auto_invoke` | Maximum auto-invoke iterations | 5 |
| `prompt_template_format` | Template syntax | Handlebars |

Best Practices

  1. Write detailed descriptions on every kernel function and parameter. The AI model selects which function to call based on descriptions, not code. A function named GetData with no description will be called unpredictably, if at all. A function described as "Retrieves customer order history for the last N days, sorted by date descending" will be called precisely when needed. Invest as much thought in descriptions as in implementation.

  2. Use dependency injection to register plugins and services. Register plugins through the DI container rather than creating them inline. This enables proper lifecycle management, constructor injection of services, and testability. Plugins registered through DI can receive database connections, HTTP clients, and configuration through standard .NET patterns.

  3. Implement filters for cross-cutting concerns. Semantic Kernel filters run before and after function execution, similar to middleware. Use them for logging, authentication checks, input validation, and output sanitization. A pre-execution filter that logs every AI function call with parameters is invaluable for debugging and auditing. Post-execution filters can validate outputs before returning them to the model.

  4. Limit auto-invocation depth to prevent runaway costs. When auto-invoke is enabled, the kernel automatically calls functions the model requests. Without limits, a confused model can loop indefinitely, racking up API costs. Set MaxAutoInvokeAttempts to a reasonable number (3-5 for most scenarios). Monitor auto-invoke depth in production and alert when it consistently hits the limit.

  5. Use prompt templates for reusable AI interactions. Store prompts as template files rather than inline strings. Semantic Kernel supports Handlebars and Liquid template syntax with variable substitution, conditional sections, and helper functions. Template files can be versioned, tested, and updated without code changes. Organize templates in a dedicated folder structure by domain.
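The filter guidance in practice 3 can be sketched with Semantic Kernel's `IFunctionInvocationFilter`. A minimal logging filter; the output format is illustrative, and `builder` is the kernel builder from the configuration snippet earlier:

```csharp
using System.Linq;
using Microsoft.SemanticKernel;

// Logs every kernel function call with its arguments and result.
public sealed class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        var args = string.Join(", ",
            context.Arguments.Select(a => $"{a.Key}={a.Value}"));
        Console.WriteLine(
            $"Invoking {context.Function.PluginName}.{context.Function.Name}({args})");

        await next(context);  // run the function (and any later filters)

        Console.WriteLine($"Result: {context.Result}");
    }
}

// Registration: filters resolved from DI are picked up by the kernel.
builder.Services.AddSingleton<IFunctionInvocationFilter, LoggingFilter>();
```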
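Practice 5's Handlebars templating can be sketched as follows; the template is inline for brevity (store it as a versioned file in practice), and `customerName`/`items` are made-up variables:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

var template = """
    Summarize the following order for {{customerName}}:
    {{#each items}}
    - {{this}}
    {{/each}}
    """;

// Create a reusable prompt function from the Handlebars template.
var summarize = kernel.CreateFunctionFromPrompt(
    template,
    templateFormat: "handlebars",
    promptTemplateFactory: new HandlebarsPromptTemplateFactory());

var result = await summarize.InvokeAsync(kernel, new KernelArguments
{
    ["customerName"] = "Ada",
    ["items"] = new[] { "2x coffee", "1x grinder" }
});
```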

Common Issues

AI model doesn't call the right plugin function. This almost always means the function description doesn't clearly communicate when to use it. Improve descriptions to distinguish similar functions: "Search by customer name for fuzzy matching" vs. "Get by exact customer ID for precise lookup." Test by asking the model to explain when it would use each function; if it can't distinguish them, neither can it at runtime.

Memory/vector search returns irrelevant results. Vector similarity search depends heavily on embedding quality and chunk size. Ensure you're using the same embedding model for indexing and querying. Experiment with chunk sizes: too large and relevant details get diluted, too small and context is lost. Add metadata filters (date range, category) to narrow results before vector similarity ranking.

Kernel execution hits token limits with many plugins. Each registered plugin's functions and descriptions consume tokens in the model context. With 20 plugins and 100 functions, the function descriptions alone may use thousands of tokens. Only register plugins relevant to the current conversation context. Use dynamic plugin loading based on user intent, or split functionality into focused kernel instances with specific plugin sets.
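One mitigation for the token-limit issue is per-request plugin selection. A sketch, where `ClassifyIntent` is a hypothetical helper you would supply and `config` is as in the configuration section:

```csharp
using Microsoft.SemanticKernel;

// Build a kernel with only the plugins relevant to the detected intent,
// keeping unused function descriptions out of the model context entirely.
Kernel BuildKernelFor(string userMessage)
{
    var builder = Kernel.CreateBuilder();
    builder.AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",
        endpoint: config["AzureOpenAI:Endpoint"],
        apiKey: config["AzureOpenAI:ApiKey"]);

    switch (ClassifyIntent(userMessage))  // hypothetical classifier
    {
        case "customers":
            builder.Plugins.AddFromType<CustomerPlugin>();
            break;
        case "orders":
            builder.Plugins.AddFromType<OrderPlugin>();
            break;
        // No default plugins: an empty plugin collection costs zero tokens.
    }

    return builder.Build();
}
```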
