# Microsoft Agent Framework Guru
An agent that guides development of AI applications using Microsoft Agent Framework for .NET, the unified successor to Semantic Kernel and AutoGen that combines their capabilities with enhanced multi-agent orchestration and tool integration.
## When to Use This Agent
Choose Microsoft Agent Framework Guru when:
- Building AI agents using Microsoft Agent Framework for .NET
- Migrating from Semantic Kernel or AutoGen to the unified framework
- Implementing multi-agent orchestration patterns with .NET
- Integrating Azure OpenAI or other LLM providers through the framework
- Designing agentic workflows with tool use and structured outputs
Consider alternatives when:
- Using Python-based agent frameworks like LangChain or CrewAI
- Building directly with the OpenAI API without a framework
- Working with Semantic Kernel specifically (if not migrating)
## Quick Start
```yaml
# .claude/agents/microsoft-agent-framework-guru.yml
name: Microsoft Agent Framework Guru
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are an expert in Microsoft Agent Framework for .NET. Guide development
  of AI agents using the framework's unified API for LLM integration, tool
  use, multi-agent orchestration, and structured outputs. Always use the
  latest framework patterns.
```
Example invocation:
```bash
claude --agent microsoft-agent-framework-guru "Create a multi-agent system with a planner agent that delegates research and coding tasks to specialized agents using Microsoft Agent Framework"
```
## Core Concepts

### Framework Architecture
```
Microsoft Agent Framework (.NET)
├── Agent Definitions (ChatCompletionAgent, OpenAIAssistantAgent)
├── Kernel Integration (plugins, functions, filters)
├── Orchestration (AgentGroupChat, handoffs, termination)
├── Tool Use (function calling, code interpreter)
└── Channels (Azure OpenAI, OpenAI, local models)
```
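The Channels layer corresponds to connectors registered on the kernel. A minimal Azure OpenAI setup might look like the sketch below; the endpoint, deployment name, and environment variable are placeholders, not values the framework mandates.

```csharp
using Microsoft.SemanticKernel;

// Build a kernel wired to an Azure OpenAI chat deployment.
// Deployment name, endpoint, and API-key variable are placeholders.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!);
var kernel = builder.Build();
```

The same `kernel` instance is then shared by the agents defined below; swapping the connector registration (for example to `AddOpenAIChatCompletion`) changes the channel without touching agent code.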
### Agent Types
| Agent Type | Use Case | Features |
|---|---|---|
| ChatCompletionAgent | Standard conversational agent | Streaming, tool use, history |
| OpenAIAssistantAgent | OpenAI Assistants API wrapper | Code interpreter, file search |
| AggregatorAgent | Coordinate multiple agents | Routing, consensus, sequential |
| Custom Agent | Specialized behavior | Override base class |
### Multi-Agent Pattern
```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

// Define specialized agents
var researcher = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = "Research topics thoroughly using provided tools",
    Kernel = kernel
};

var writer = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = "Write clear, engaging content based on research",
    Kernel = kernel
};

// Orchestrate with group chat
var chat = new AgentGroupChat(researcher, writer)
{
    ExecutionSettings = new()
    {
        TerminationStrategy = new MaxIterationTermination(10),
        SelectionStrategy = new SequentialSelectionStrategy()
    }
};

await foreach (var message in chat.InvokeAsync())
{
    Console.WriteLine($"{message.AuthorName}: {message.Content}");
}
```
## Configuration

| Parameter | Description | Default |
|---|---|---|
| llm_provider | LLM service provider | Azure OpenAI |
| model_deployment | Model deployment name | gpt-4o |
| max_tokens | Maximum generation tokens | 4096 |
| temperature | Generation temperature | 0.7 |
| orchestration | Multi-agent orchestration pattern | Sequential |
| termination | Chat termination strategy | MaxIteration(10) |
| tool_calling | Enable function/tool calling | true |
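These parameters map onto the connector's prompt execution settings. A sketch, assuming the Azure OpenAI/OpenAI connector from Semantic Kernel; exact property availability can vary by connector version:

```csharp
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Sketch: mirror the table's max_tokens, temperature, and tool_calling
// defaults onto the connector's execution settings.
var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 4096,
    Temperature = 0.7,
    // tool_calling: true — let the kernel invoke functions automatically
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
```

The settings object is passed per invocation (or attached to an agent's arguments), so different agents in one orchestration can run with different temperatures or token budgets.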
## Best Practices
- **Define agent responsibilities narrowly.** Each agent should have a clear, focused role described in its instructions. A "researcher" agent that also writes and reviews creates confusion in multi-agent orchestration. Narrow agents produce better results because the LLM can focus on one task, and the orchestrator can route work effectively based on clear role boundaries.

- **Use kernel plugins for tool integration.** Register tools as kernel plugins with clear function descriptions and typed parameters. The framework handles serialization, function calling, and result injection automatically. Well-described functions with XML documentation comments improve the LLM's tool selection accuracy significantly compared to terse descriptions.

- **Implement proper termination strategies.** Multi-agent chats without termination conditions run indefinitely or until token limits are hit. Combine strategies: MaxIteration for safety limits, content-based termination for detecting completion signals, and approval-based termination for human-in-the-loop workflows. Always set a max iteration ceiling even when using other strategies.

- **Handle LLM failures gracefully with retry policies.** Configure HTTP retry policies for transient failures (429 rate limits, 503 service unavailable). Use the built-in retry handlers with exponential backoff. For multi-agent scenarios, isolate failures so one agent's error doesn't crash the entire orchestration. Log failed attempts with enough context to debug without reproducing the full conversation.

- **Test agents with recorded conversations.** Save representative conversation histories and replay them in tests to verify agent behavior. This approach is more reliable than testing with live LLM calls, which produce non-deterministic outputs. Use recorded conversations for regression testing and live calls for integration testing during CI.
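As a concrete sketch of the plugin practice above: a kernel plugin with typed parameters and `[Description]` attributes, which the framework surfaces to the model for tool selection. `WeatherPlugin`, its function name, and the returned strings are hypothetical examples, not framework APIs.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Sketch: a plugin whose descriptions tell the LLM when and how to call it.
public sealed class WeatherPlugin
{
    [KernelFunction("get_forecast")]
    [Description("Gets the weather forecast for a city on a given date.")]
    public string GetForecast(
        [Description("City name, e.g. 'Seattle'")] string city,
        [Description("ISO 8601 date, e.g. '2025-01-15'")] string date)
    {
        // Placeholder body; a real plugin would call a weather API here.
        return $"Forecast for {city} on {date}: mild, partly cloudy.";
    }
}

// Registration on the kernel used by the agents:
// kernel.Plugins.AddFromType<WeatherPlugin>("Weather");
```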
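The combined-termination practice can be sketched as a custom strategy that pairs a content check with a hard iteration ceiling. `CompletionSignalTermination` is a hypothetical name, and the override signature follows recent Semantic Kernel Agents releases, so it may differ in your framework version.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

// Sketch: terminate when the last message signals completion, with a
// max-iteration ceiling as the safety net.
public sealed class CompletionSignalTermination : TerminationStrategy
{
    public CompletionSignalTermination() => MaximumIterations = 10; // safety ceiling

    protected override Task<bool> ShouldAgentTerminateAsync(
        Agent agent,
        IReadOnlyList<ChatMessageContent> history,
        CancellationToken cancellationToken)
    {
        // Agents are instructed to emit "TASK COMPLETE" when done.
        var last = history.Count > 0 ? history[^1].Content : null;
        return Task.FromResult(
            last?.Contains("TASK COMPLETE", StringComparison.OrdinalIgnoreCase) ?? false);
    }
}
```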
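For the retry practice, here is a minimal backoff sketch using only BCL types; a production setup would more likely use the framework's built-in retry handlers or `Microsoft.Extensions.Http.Resilience`, but the underlying logic looks like this. `RetryHandler` is a hypothetical helper to plug into the `HttpClient` passed to the connector.

```csharp
using System.Net;

// Sketch: retry transient LLM-service failures (429, 503) with
// exponential backoff before giving up.
public sealed class RetryHandler : DelegatingHandler
{
    private const int MaxRetries = 3;

    public RetryHandler(HttpMessageHandler inner) : base(inner) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken ct)
    {
        for (var attempt = 0; ; attempt++)
        {
            var response = await base.SendAsync(request, ct);
            var transient = response.StatusCode is HttpStatusCode.TooManyRequests
                or HttpStatusCode.ServiceUnavailable;
            if (!transient || attempt >= MaxRetries)
                return response;

            response.Dispose();
            // Exponential backoff: 1s, 2s, 4s between attempts.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)), ct);
        }
    }
}
```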
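The recorded-conversation practice can be sketched with a stub chat service that replays canned replies deterministically. `RecordedChatService` is a hypothetical test double, written against the `IChatCompletionService` interface shape in recent Semantic Kernel releases; verify the member signatures against the version you target.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Sketch: replay recorded assistant replies so agent plumbing can be
// asserted without live, non-deterministic LLM calls.
public sealed class RecordedChatService : IChatCompletionService
{
    private readonly Queue<string> _replies;

    public RecordedChatService(IEnumerable<string> recordedReplies) =>
        _replies = new Queue<string>(recordedReplies);

    public IReadOnlyDictionary<string, object?> Attributes { get; } =
        new Dictionary<string, object?>();

    public Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default) =>
        Task.FromResult<IReadOnlyList<ChatMessageContent>>(
            new[] { new ChatMessageContent(AuthorRole.Assistant, _replies.Dequeue()) });

    public IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default) =>
        throw new NotSupportedException("Streaming is not needed for replay tests.");
}
```

Register the stub in place of the real connector when building the test kernel, and assert on the messages the agent produces.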
## Common Issues

**Agents enter infinite loops in group chat.** This happens when agents keep responding to each other without making progress toward the goal. Implement a MaxIterationTermination strategy as a safety net and add a content-based termination check that detects completion keywords. Design agent instructions to include explicit completion signals like "TASK COMPLETE" that the termination strategy can detect.

**Function calling fails or agents ignore available tools.** Ensure kernel functions have descriptive names and detailed XML documentation comments. The LLM uses these descriptions to decide when to call functions. Vague names like "DoStuff" or missing descriptions cause the model to skip tools. Test tool descriptions by asking the model to explain what each tool does; if it can't, neither can the agent.

**Token context limits exceeded in long conversations.** Multi-agent conversations accumulate tokens quickly as each agent's response becomes input for the next. Implement conversation summarization at regular intervals, using a dedicated summarizer agent or the built-in history reduction. Set per-agent max history lengths and use the framework's chat history management to truncate older messages automatically.
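The built-in history reduction mentioned above might be wired up as follows. This assumes the `ChatHistoryTruncationReducer` type and the agent's `HistoryReducer` property available in recent Semantic Kernel releases; both are version-dependent, so treat this as a sketch rather than a guaranteed API.

```csharp
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// Sketch: keep only the most recent messages for this agent, letting the
// framework truncate older history automatically.
var reducer = new ChatHistoryTruncationReducer(targetCount: 20);

var researcher = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = "Research topics thoroughly using provided tools",
    Kernel = kernel,
    HistoryReducer = reducer // per-agent max history length
};
```

A summarization-based reducer (or a dedicated summarizer agent) trades the truncation's hard cutoff for a compressed memory of earlier turns.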