
Microsoft Agent Framework Guru

A battle-tested agent for creating, updating, refactoring, and explaining code. Includes structured workflows, validation checks, and reusable patterns for data and AI work.

Cliptics · data/ai · v1.0.0 · MIT


An agent that guides development of AI applications using Microsoft Agent Framework for .NET, the unified successor to Semantic Kernel and AutoGen that combines their capabilities with enhanced multi-agent orchestration and tool integration.

When to Use This Agent

Choose Microsoft Agent Framework Guru when:

  • Building AI agents using Microsoft Agent Framework for .NET
  • Migrating from Semantic Kernel or AutoGen to the unified framework
  • Implementing multi-agent orchestration patterns with .NET
  • Integrating Azure OpenAI or other LLM providers through the framework
  • Designing agentic workflows with tool use and structured outputs

Consider alternatives when:

  • Using Python-based agent frameworks like LangChain or CrewAI
  • Building directly with the OpenAI API without a framework
  • Working with Semantic Kernel specifically (if not migrating)

Quick Start

```yaml
# .claude/agents/microsoft-agent-framework-guru.yml
name: Microsoft Agent Framework Guru
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are an expert in Microsoft Agent Framework for .NET. Guide development
  of AI agents using the framework's unified API for LLM integration, tool
  use, multi-agent orchestration, and structured outputs. Always use the
  latest framework patterns.
```

Example invocation:

```bash
claude --agent microsoft-agent-framework-guru "Create a multi-agent system with a planner agent that delegates research and coding tasks to specialized agents using Microsoft Agent Framework"
```

Core Concepts

Framework Architecture

```
Microsoft Agent Framework (.NET)
β”œβ”€β”€ Agent Definitions (ChatCompletionAgent, OpenAIAssistantAgent)
β”œβ”€β”€ Kernel Integration (plugins, functions, filters)
β”œβ”€β”€ Orchestration (AgentGroupChat, handoffs, termination)
β”œβ”€β”€ Tool Use (function calling, code interpreter)
└── Channels (Azure OpenAI, OpenAI, local models)
```

Agent Types

| Agent Type | Use Case | Features |
|---|---|---|
| `ChatCompletionAgent` | Standard conversational agent | Streaming, tool use, history |
| `OpenAIAssistantAgent` | OpenAI Assistants API wrapper | Code interpreter, file search |
| `AggregatorAgent` | Coordinate multiple agents | Routing, consensus, sequential |
| Custom agent | Specialized behavior | Override the base class |
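As a starting point, a single `ChatCompletionAgent` can be defined in a few lines. The sketch below assumes an Azure OpenAI chat deployment; the endpoint, deployment name, and environment-variable name are placeholders, not values from this template.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

// Build a kernel backed by an Azure OpenAI chat deployment.
// Endpoint, deployment, and key are placeholders.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://your-resource.openai.azure.com",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
var kernel = builder.Build();

// A single agent with one narrow, clearly described role.
var summarizer = new ChatCompletionAgent
{
    Name = "Summarizer",
    Instructions = "Summarize the user's text in three bullet points.",
    Kernel = kernel
};
```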

Multi-Agent Pattern

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

// Define specialized agents
var researcher = new ChatCompletionAgent
{
    Name = "Researcher",
    Instructions = "Research topics thoroughly using provided tools",
    Kernel = kernel
};

var writer = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = "Write clear, engaging content based on research",
    Kernel = kernel
};

// Orchestrate with group chat
var chat = new AgentGroupChat(researcher, writer)
{
    ExecutionSettings = new()
    {
        TerminationStrategy = new MaxIterationTermination(10),
        SelectionStrategy = new SequentialSelectionStrategy()
    }
};

await foreach (var message in chat.InvokeAsync())
{
    Console.WriteLine($"{message.AuthorName}: {message.Content}");
}
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| `llm_provider` | LLM service provider | Azure OpenAI |
| `model_deployment` | Model deployment name | `gpt-4o` |
| `max_tokens` | Maximum generation tokens | 4096 |
| `temperature` | Generation temperature | 0.7 |
| `orchestration` | Multi-agent orchestration pattern | Sequential |
| `termination` | Chat termination strategy | `MaxIteration(10)` |
| `tool_calling` | Enable function/tool calling | `true` |
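These parameter names are the template's own, not framework APIs. One plausible mapping onto Semantic Kernel's prompt execution settings looks like this sketch:

```csharp
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Hypothetical mapping of the template's configuration parameters
// onto OpenAI prompt execution settings.
var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 4096,       // max_tokens
    Temperature = 0.7,      // temperature
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions  // tool_calling
};
```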

Best Practices

  1. Define agent responsibilities narrowly. Each agent should have a clear, focused role described in its instructions. A "researcher" agent that also writes and reviews creates confusion in multi-agent orchestration. Narrow agents produce better results because the LLM can focus on one task, and the orchestrator can route work effectively based on clear role boundaries.

  2. Use kernel plugins for tool integration. Register tools as kernel plugins with clear function descriptions and typed parameters. The framework handles serialization, function calling, and result injection automatically. Well-described functions with XML documentation comments improve the LLM's tool selection accuracy significantly compared to terse descriptions.
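A minimal plugin sketch follows. The `WeatherPlugin` class and its return value are illustrative; the point is the `[KernelFunction]` and `[Description]` attributes, which the framework serializes into the tool schema the LLM sees.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A plugin with one clearly described, typed function. The descriptions
// are what the LLM reads when deciding whether (and how) to call the tool.
public sealed class WeatherPlugin
{
    [KernelFunction, Description("Gets the current temperature in Celsius for a city.")]
    public double GetTemperature(
        [Description("City name, e.g. 'Seattle'")] string city)
    {
        // Placeholder: a real plugin would call a weather API here.
        return 21.5;
    }
}

// Registration against an existing kernel instance:
// kernel.Plugins.AddFromType<WeatherPlugin>("Weather");
```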

  3. Implement proper termination strategies. Multi-agent chats without termination conditions run indefinitely or until token limits are hit. Combine strategies: MaxIteration for safety limits, content-based termination for detecting completion signals, and approval-based termination for human-in-the-loop workflows. Always set a max iteration ceiling even when using other strategies.
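As one way to combine strategies, the sketch below pairs a content-based check with a hard iteration ceiling; it assumes the regex-based termination strategy from the framework's agent chat package behaves as described.

```csharp
using Microsoft.SemanticKernel.Agents.Chat;

// Content-based termination with a safety ceiling: the chat ends when an
// agent emits the completion signal, or after 10 iterations regardless.
var termination = new RegexTerminationStrategy("TASK COMPLETE")
{
    MaximumIterations = 10
};
```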

  4. Handle LLM failures gracefully with retry policies. Configure HTTP retry policies for transient failures (429 rate limits, 503 service unavailable). Use the built-in retry handlers with exponential backoff. For multi-agent scenarios, isolate failures so one agent's error doesn't crash the entire orchestration. Log failed attempts with enough context to debug without reproducing the full conversation.
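Where the built-in retry handlers don't fit, a generic wrapper can isolate a single agent invocation. This is a sketch of the backoff idea, not a framework API; the exception filter is deliberately broad and should be narrowed to the transient error types your connector throws.

```csharp
// Generic exponential-backoff retry around one agent invocation, so a
// transient failure (429, 503) doesn't crash the whole orchestration.
static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try { return await operation(); }
        catch (Exception ex) when (attempt < maxAttempts)
        {
            Console.Error.WriteLine($"Attempt {attempt} failed: {ex.Message}");
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}
```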

  5. Test agents with recorded conversations. Save representative conversation histories and replay them in tests to verify agent behavior. This approach is more reliable than testing with live LLM calls, which produce non-deterministic outputs. Use recorded conversations for regression testing and live calls for integration testing during CI.
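A recorded conversation is just a `ChatHistory` built from saved turns, as in this sketch; how you persist and load the recording is up to you.

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Replay a recorded exchange as deterministic test input
// instead of calling a live LLM.
var history = new ChatHistory();
history.AddUserMessage("Summarize the Q3 report.");
history.AddAssistantMessage("Q3 revenue grew 12%; costs were flat.");

// Tests can then assert on stable properties of the replayed history,
// e.g. turn count or message roles, without non-deterministic outputs.
Console.WriteLine(history.Count);
```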

Common Issues

Agents enter infinite loops in group chat. This happens when agents keep responding to each other without making progress toward the goal. Implement a MaxIterationTermination strategy as a safety net and add a content-based termination check that detects completion keywords. Design agent instructions to include explicit completion signals like "TASK COMPLETE" that the termination strategy can detect.

Function calling fails or agents ignore available tools. Ensure kernel functions have descriptive names and detailed XML documentation comments. The LLM uses these descriptions to decide when to call functions. Vague names like "DoStuff" or missing descriptions cause the model to skip tools. Test tool descriptions by asking the model to explain what each tool doesβ€”if it can't, neither can the agent.

Token context limits exceeded in long conversations. Multi-agent conversations accumulate tokens quickly as each agent's response becomes input for the next. Implement conversation summarization at regular intervals, using a dedicated summarizer agent or the built-in history reduction. Set per-agent max history lengths and use the framework's chat history management to truncate older messages automatically.
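A manual truncation sketch illustrates the idea; Semantic Kernel also ships history-reducer types for this, and the hand-rolled version below simply avoids depending on their exact names. It assumes index 0 holds the system prompt.

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Keep the system message plus the most recent N messages so the
// accumulated context stays under the model's token limit.
static void TruncateHistory(ChatHistory history, int keepRecent)
{
    while (history.Count > keepRecent + 1)
        history.RemoveAt(1); // drop the oldest non-system message
}
```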
