Welcome back to the building blocks for AI in .NET series! In part one, we explored Microsoft Extensions for AI (MEAI) and how it provides a unified interface for working with large language models. In part two, we dove into Microsoft.Extensions.VectorData and how it brings semantic search and RAG patterns to .NET. Today, we’re exploring the third building block: the Microsoft Agent Framework.

Up to this point, we’ve been building the foundation. MEAI gave us a universal way to talk to models, and VectorData gave us the ability to store and search knowledge. But what if you want an AI that can do things? Not just answer questions, but take actions, use tools, remember context across conversations, and coordinate with other agents to solve complex problems? That’s where agents come in.

What is an AI agent?

An AI agent is more than a chatbot. A chatbot receives input, passes it to a model, and returns the output. An agent, on the other hand, has autonomy. It can reason about a task, decide which tools to use, call those tools, evaluate the results, and decide what to do next. It can accomplish all this without forcing you to write explicit step-by-step instructions for every scenario.

Think of it this way: if MEAI is like having a conversation with a colleague, an agent is like handing that colleague a to-do list and letting them figure out how to get it done. They might search for information, run calculations, check the weather, or query a database, using whatever tools you’ve made available to them.

The Microsoft Agent Framework provides a production-ready SDK for building these intelligent agents in .NET (and other languages like Python, though we’ll focus on C# here). It achieved its 1.0 release in April 2026 and supports everything from simple single-agent scenarios to complex multi-agent workflows with graph-based orchestration.

Your first agent

Let’s start simple. If you’ve used MEAI or read Part 1, the Agent Framework will feel familiar because it builds directly on top of IChatClient. Create a console app, then install the agent framework:

```
dotnet add package Microsoft.Agents.AI
```

Here’s all it takes to create an agent (01_hello_agent):

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME")
    ?? "gpt-5.4-mini";

AIAgent agent = new AzureOpenAIClient(
        new Uri(endpoint),
        new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .AsAIAgent(
        instructions: "You are good at telling jokes.",
        name: "Joker");

Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
```

Notice the .AsAIAgent() extension method. Just like .AsIChatClient() bridges a provider’s SDK to the MEAI abstraction, .AsAIAgent() takes that a step further and wraps it in an agent that can manage sessions, tools, and memory. This works with multiple providers, including Azure OpenAI, OpenAI, GitHub Models, Microsoft Foundry, or even local models via Foundry Local or Ollama.

The agent also supports streaming out of the box:

```csharp
await foreach (var update in agent.RunStreamingAsync("Tell me a joke about a pirate."))
{
    Console.Write(update);
}
```
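Because the agent simply wraps an IChatClient, switching providers is mostly a matter of swapping the client setup. Here is a minimal sketch of the same agent backed by the public OpenAI service (the client setup and model name are illustrative assumptions, not part of the post's samples):

```csharp
using Microsoft.Agents.AI;
using OpenAI;

// Hypothetical variation of 01_hello_agent: same agent shape, different provider.
// OPENAI_API_KEY and the model name are assumptions for illustration.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException("OPENAI_API_KEY is not set.");

AIAgent openAIAgent = new OpenAIClient(apiKey)
    .GetChatClient("gpt-4o-mini") // any chat-capable model you have access to
    .AsAIAgent(
        instructions: "You are good at telling jokes.",
        name: "Joker");

Console.WriteLine(await openAIAgent.RunAsync("Tell me a joke about a pirate."));
```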
Giving your agent tools

A joke-telling agent is fun, but agents become truly powerful when you give them tools. Tools are simply functions that the model can decide to call based on the user’s request. The Agent Framework uses the same AIFunctionFactory from MEAI, so if you’ve already defined tools for your chat client, they work here too.

Here’s an agent with a weather tool (02_add_tools):

```csharp
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

[Description("Get the weather for a given location.")]
static string GetWeather(
    [Description("The location to get the weather for.")] string location) =>
    $"The weather in {location} is cloudy with a high of 15°C.";

AIAgent agent = new AzureOpenAIClient(
        new Uri(endpoint),
        new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .AsAIAgent(
        instructions: "You are a helpful assistant",
        tools: [AIFunctionFactory.Create(GetWeather)]);

Console.WriteLine(await agent.RunAsync("What is the weather like in Amsterdam?"));
```

When the user asks about the weather, the agent doesn’t just guess. It recognizes that it has a GetWeather tool available, calls it with the appropriate parameter, and uses the result to formulate its response. You didn’t have to write any “if the user asks about weather, call this function” logic. The model figures it out.

The Description attributes are important. They tell the model what the tool does and what each parameter means, which helps it decide when and how to use the tool. Think of them as the tool’s instruction manual for the AI.

Multi-turn conversations with sessions

Real conversations don’t happen in a single exchange. Users ask follow-up questions, provide additional context, and expect the agent to remember what was discussed. The Agent Framework handles this with AgentSession (03_multi_turn):

```csharp
AgentSession session = await agent.CreateSessionAsync();

Console.WriteLine(
    await agent.RunAsync("Tell me a joke about a pirate.", session));
Console.WriteLine(
    await agent.RunAsync(
        "Now add some emojis to the joke and tell it in the voice of a pirate's parrot.",
        session));
```

The session preserves the conversation history between calls. When the user asks to “add some emojis to the joke,” the agent knows which joke is being referenced because the session maintains that context.

Sessions can also be serialized and deserialized, which is essential for production scenarios where your agent runs in a stateless service:

```csharp
// Save the session state
JsonElement sessionState = await agent.SerializeSessionAsync(session);

// Later, restore it
var restoredSession = await agent.DeserializeSessionAsync(sessionState);
Console.WriteLine(
    await agent.RunAsync("What were we just talking about?", restoredSession));
```
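Where you stash that JsonElement between requests is up to you: a database row, a distributed cache, or blob storage keyed by conversation ID all work. As a minimal sketch (the file path and conversationId are hypothetical, not from the sample), persisting to disk might look like this:

```csharp
using System.Text.Json;

// Hypothetical persistence: stash the serialized session between requests.
// conversationId is an assumed identifier your app would already track.
var path = Path.Combine(Path.GetTempPath(), $"session-{conversationId}.json");
await File.WriteAllTextAsync(path, JsonSerializer.Serialize(sessionState));

// On a later request, rehydrate the session and continue the conversation.
var savedState = JsonSerializer.Deserialize<JsonElement>(
    await File.ReadAllTextAsync(path));
var restored = await agent.DeserializeSessionAsync(savedState);
```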
Teaching your agent to remember

Sessions preserve conversation history, but what about longer-term memory? What if you want your agent to remember facts about the user across sessions — their name, preferences, or previous interactions?

The Agent Framework provides AIContextProvider, a mechanism for injecting contextual information into the agent’s workflow. Here’s a simplified version of the memory sample (04_memory) that extracts and remembers user information:

```csharp
internal sealed class UserInfoMemory : AIContextProvider
{
    private readonly ProviderSessionState<UserInfo> _sessionState;
    private readonly IChatClient _chatClient;

    public UserInfoMemory(IChatClient chatClient)
    {
        _sessionState = new ProviderSessionState<UserInfo>(
            _ => new UserInfo(), GetType().Name);
        _chatClient = chatClient;
    }

    protected override async ValueTask StoreAIContextAsync(
        InvokedContext context,
        CancellationToken cancellationToken = default)
    {
        var userInfo = _sessionState.GetOrInitializeState(context.Session);

        if (userInfo.UserName is null &&
            context.RequestMessages.Any(x => x.Role == ChatRole.User))
        {
            var result = await _chatClient.GetResponseAsync<UserInfo>(
                context.RequestMessages,
                new ChatOptions()
                {
                    Instructions = "Extract the user's name from the message if present."
                },
                cancellationToken: cancellationToken);

            userInfo.UserName ??= result.Result.UserName;
        }

        _sessionState.SaveState(context.Session, userInfo);
    }

    protected override ValueTask<AIContext> ProvideAIContextAsync(
        InvokingContext context,
        CancellationToken cancellationToken = default)
    {
        var userInfo = _sessionState.GetOrInitializeState(context.Session);

        var instructions = userInfo.UserName is null
            ? "Ask the user for their name."
            : $"The user's name is {userInfo.UserName}.";

        return new ValueTask<AIContext>(
            new AIContext { Instructions = instructions });
    }
}
```

The AIContextProvider has two key methods:

- StoreAIContextAsync runs after each interaction and is the agent’s opportunity to learn from what just happened — in this case, extracting the user’s name from the conversation.
- ProvideAIContextAsync runs before each interaction and supplies additional context to the agent — here, either telling the agent the user’s name or instructing it to ask for one.

Wire it up when creating the agent:

```csharp
AIAgent agent = chatClient.AsAIAgent(new ChatClientAgentOptions()
{
    ChatOptions = new()
    {
        Instructions = "You are a friendly assistant. Always address the user by their name."
    },
    AIContextProviders = [new UserInfoMemory(chatClient.AsIChatClient())]
});
```

This pattern is powerful because it separates what the agent remembers from how the agent converses. You can stack multiple context providers — one for user preferences, another for recent interactions, and a third that pulls relevant documents from your VectorData store.

Workflows: orchestrating multiple agents

Single agents are useful, but many real-world problems benefit from breaking the work across multiple specialized agents. The Agent Framework provides a graph-based workflow system where you connect executors (processing units) with edges (data flow paths).

Here’s a simple workflow that chains two text processors together (05_first_workflow):

```csharp
using Microsoft.Agents.AI.Workflows;

Func<string, string> uppercaseFunc = s => s.ToUpperInvariant();
var uppercase = uppercaseFunc.BindAsExecutor("UppercaseExecutor");
var reverse = new ReverseTextExecutor();

WorkflowBuilder builder = new(uppercase);
builder.AddEdge(uppercase, reverse).WithOutputFrom(reverse);
var workflow = builder.Build();

await using Run run = await InProcessExecution.RunAsync(
    workflow, "Hello, World!");

foreach (WorkflowEvent evt in run.NewEvents)
{
    if (evt is ExecutorCompletedEvent executorComplete)
    {
        Console.WriteLine(
            $"{executorComplete.ExecutorId}: {executorComplete.Data}");
    }
}
```
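The sample above uses a ReverseTextExecutor that isn't shown. Based on the framework's executor samples, a minimal sketch might look like the following; treat the ReflectingExecutor base class and the IMessageHandler signature as assumptions about the current API shape rather than the definitive implementation:

```csharp
// A sketch of a custom executor that reverses its input text.
// ReflectingExecutor<T> and IMessageHandler<TIn, TOut> are assumed here;
// check the Workflows samples for the exact base types in your version.
internal sealed class ReverseTextExecutor()
    : ReflectingExecutor<ReverseTextExecutor>("ReverseTextExecutor"),
      IMessageHandler<string, string>
{
    public ValueTask<string> HandleAsync(string message, IWorkflowContext context)
    {
        // Reverse the incoming string and pass it along the outgoing edge.
        char[] chars = message.ToCharArray();
        Array.Reverse(chars);
        return ValueTask.FromResult(new string(chars));
    }
}
```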
The text processing example keeps things simple, but the real power of workflows shines when you use agents as executors. Here are some of the patterns the framework supports:

- Sequential workflows — agents process one after another, where each agent’s output feeds the next
- Concurrent workflows — fan-out to multiple agents in parallel, then fan-in the results
- Conditional routing (“hand-off”) — dynamically route work to different agents based on the output of a previous step
- Feedback loops — a writer-critic pattern where one agent produces content and another evaluates it, looping until quality criteria are met
- Sub-workflows — compose workflows hierarchically by embedding one workflow inside another

A writer-critic example

One of the most practical patterns is the writer-critic workflow. Imagine you have one agent that writes marketing copy and another that reviews it for quality:

```csharp
WorkflowBuilder builder = new(writerAgent);
builder
    .AddEdge(writerAgent, criticAgent)
    .AddEdge(criticAgent, writerAgent, condition: result => !result.IsApproved)
    .WithOutputFrom(criticAgent, condition: result => result.IsApproved);
var workflow = builder.Build();
```

The writer produces a draft, the critic evaluates it, and if it isn’t approved, the draft goes back to the writer for revision. This loop continues until the critic is satisfied. Of course, for safety, you probably want to enforce a maximum iteration count.

Human-in-the-loop

AI doesn’t replace humans, and agents often need human input. Think of agents as specialized workers that are directed by humans through code. The Agent Framework supports tool approval workflows where the agent proposes a tool call and waits for human approval before executing it. This is critical for production scenarios involving sensitive operations like database writes, financial transactions, or sending communications.

The approval mechanism is built on FunctionApprovalRequestContent and FunctionApprovalResponseContent, content types from the MEAI content model we introduced in Part 1. When the agent wants to call a tool that requires approval, it yields a request and waits. Your application code can present this to the user, and the response determines whether the tool call proceeds.

Bringing it all together

The beauty of the building blocks approach is that each piece composes naturally with the others. Here’s how they fit:

- MEAI (IChatClient) provides the foundation — the universal interface for talking to any model.
- VectorData enables RAG patterns — your agents can search through your organization’s knowledge base using semantic search to ground their responses in your data.
- Agent Framework orchestrates everything — agents use IChatClient under the hood, can incorporate vector search through context providers, and coordinate through workflows.

For example, you could build an AIContextProvider that searches your VectorData store before each agent invocation, providing relevant documents as additional context — exactly like the RAG pattern from Part 2, but now running automatically as part of every agent interaction. A sketch of that idea follows below.
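As a rough sketch of that composition (the VectorStoreCollection type and its SearchAsync method follow the Part 2 shape; the Document record, its properties, and the InvokingContext members used here are assumptions), such a provider might look like this:

```csharp
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;

// Minimal stand-in for the Part 2 document record (assumed shape).
internal sealed record Document(Guid Key, string Text);

// A hypothetical RAG context provider. Before each invocation, it searches a
// vector store for passages related to the latest user message and injects
// them as grounding instructions, mirroring the memory sample above.
internal sealed class VectorSearchContextProvider(
    VectorStoreCollection<Guid, Document> documents) : AIContextProvider
{
    protected override async ValueTask<AIContext> ProvideAIContextAsync(
        InvokingContext context,
        CancellationToken cancellationToken = default)
    {
        var query = context.RequestMessages
            .LastOrDefault(m => m.Role == ChatRole.User)?.Text;

        if (string.IsNullOrWhiteSpace(query))
        {
            return new AIContext();
        }

        // Semantic search over the collection, keeping the top three matches.
        var snippets = new List<string>();
        await foreach (var match in documents.SearchAsync(
            query, top: 3, cancellationToken: cancellationToken))
        {
            snippets.Add(match.Record.Text);
        }

        return new AIContext
        {
            Instructions =
                "Ground your answer in the following passages:\n" +
                string.Join("\n---\n", snippets)
        };
    }
}
```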
Summary

The Microsoft Agent Framework transforms the primitives from Parts 1 and 2 into autonomous, tool-using, memory-aware agents that can work alone or together in sophisticated workflows. We covered:

- Creating agents with AsAIAgent() and running them with RunAsync()
- Equipping agents with tools using AIFunctionFactory and Description attributes
- Managing conversations across turns with AgentSession
- Building memory with AIContextProvider for persistent, cross-session knowledge
- Orchestrating workflows with executors, edges, and patterns like writer-critic loops
- Human-in-the-loop approval for sensitive operations

In the next and final post, we’ll explore the Model Context Protocol (MCP) and how it provides a standardized way for agents to discover and use external tools and resources — making your agents interoperable with the broader AI ecosystem.

Until then, here are some resources to help you get started:

Learn by code

- Agent Framework repository
- Agent Framework samples

Learn by following tutorials

- Agent Framework documentation
- Quick start guide
- Migration from Semantic Kernel

Learn by watching videos

- Agent Framework introduction (30 min)
- DevUI in action

Happy coding!