Building AI features into .NET applications often means stitching together models, vector databases, ingestion pipelines, and agent frameworks from different ecosystems. Each one has its own patterns, its own client libraries, and its own breaking changes when the next version ships. We've been working on a set of composable, extensible building blocks that give you stable abstractions across all of these concerns, and we're excited to walk you through how we used them together.

For a session at MVP Summit, we built an interactive conference assistant called ConferencePulse. It runs live polls, answers audience questions in real time, generates insights from engagement data, and summarizes the session when it wraps up. We built the app using the exact technologies we were there to present: Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, Model Context Protocol (MCP), and Microsoft Agent Framework.

This post walks through the app and shows how each building block fits.

## What we built

ConferencePulse is a Blazor Server app for live conference sessions. Attendees scan a QR code, join the session, and interact with the presenter through polls and Q&A. On the backend, AI powers several features:

- **Live polls** that the AI generates based on session content. Attendees vote and results appear in real time.
- **Audience Q&A** where AI answers questions using a RAG pipeline that pulls from the session knowledge base, Microsoft Learn docs, and GitHub wiki content.
- **Auto-generated insights** that surface patterns in poll results and audience questions as they come in.
- **Session summary** that runs when the presenter ends the session. Multiple AI agents analyze polls, questions, and insights concurrently, then merge their findings.

We wanted an interactive session, not a slide deck. We wanted polls and audience insights. And we wanted to automate the preparation: point the app at a GitHub repo, and it downloads the markdown, processes it through a pipeline, and builds a searchable knowledge base. Polls, talking points, and Q&A answers are all grounded in that content.

The app runs on .NET 10, Blazor Server, and Aspire. Six projects cover the stack:

```
src/
├── ConferenceAssistant.Web/        ← Blazor Server (UI + orchestration)
├── ConferenceAssistant.Core/       ← Models, interfaces, session state
├── ConferenceAssistant.Ingestion/  ← Data ingestion pipeline + vector search
├── ConferenceAssistant.Agents/     ← AI agents, workflows, tools
├── ConferenceAssistant.Mcp/        ← MCP server tools + MCP client
└── ConferenceAssistant.AppHost/    ← .NET Aspire (Qdrant, PostgreSQL, Azure OpenAI)
```

Now let's walk through the building blocks.

## Microsoft.Extensions.AI: one interface, any provider

Microsoft.Extensions.AI gives you `IChatClient`, a unified abstraction that works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and other providers. Every AI call in ConferencePulse goes through a single middleware pipeline.

```csharp
var openaiBuilder = builder.AddAzureOpenAIClient("openai");
openaiBuilder.AddChatClient("chat")
    .UseFunctionInvocation()
    .UseOpenTelemetry()
    .UseLogging();
openaiBuilder.AddEmbeddingGenerator("embedding");
```

That's it. Six lines. If you've worked with ASP.NET Core middleware, this pattern will feel familiar. Each `.Use*()` call wraps the inner client with additional behavior: `UseFunctionInvocation()` handles tool-call loops, `UseOpenTelemetry()` traces every call, and `UseLogging()` captures request/response pairs.

Want to swap Azure OpenAI for Ollama? Change the inner client. The middleware stays the same.

This matters because `IChatClient` shows up everywhere in the app. Poll generation, Q&A, insights, ingestion enrichment, and multi-agent workflows all share this pipeline. You register it once and use it throughout.

## DataIngestion + VectorData: the knowledge layer

AI models need context to give useful answers.
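Before diving into the pipeline, it helps to make "context" concrete. At its simplest, grounding means retrieving relevant chunks and stuffing them into the prompt before calling `IChatClient`. The sketch below is illustrative only, not ConferencePulse code: the `searchAsync` delegate and method name are assumptions standing in for whatever retrieval you have.

```csharp
// Minimal grounding sketch (illustrative; searchAsync is a hypothetical
// retrieval delegate, not part of ConferencePulse or any library shown here).
using Microsoft.Extensions.AI;

async Task<string> AnswerWithContextAsync(
    IChatClient chatClient,
    Func<string, Task<IReadOnlyList<string>>> searchAsync,
    string question)
{
    // 1. Retrieve the chunks most relevant to the question.
    var chunks = await searchAsync(question);
    var context = string.Join("\n\n---\n\n", chunks);

    // 2. Ground the model's answer in the retrieved context.
    var response = await chatClient.GetResponseAsync(
    [
        new(ChatRole.System, "Answer using only the provided context."),
        new(ChatRole.User, $"Context:\n{context}\n\nQuestion: {question}")
    ]);

    return response.Text;
}
```

Everything in this section is about producing good inputs for that second step: well-chunked, enriched, searchable content.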
Microsoft.Extensions.DataIngestion provides a pipeline for processing documents into searchable chunks, and Microsoft.Extensions.VectorData provides a provider-agnostic abstraction over vector stores.

When ConferencePulse imports content from a GitHub repo, it runs the files through an ingestion pipeline:

```csharp
IngestionDocumentReader reader = new MarkdownReader();

var tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o");
var chunkerOptions = new IngestionChunkerOptions(tokenizer)
{
    MaxTokensPerChunk = 500,
    OverlapTokens = 50
};
IngestionChunker chunker = new HeaderChunker(chunkerOptions);

var enricherOptions = new EnricherOptions(_chatClient) { LoggerFactory = _loggerFactory };

using var writer = new VectorStoreWriter(
    _searchService.VectorStore,
    dimensionCount: 1536,
    new VectorStoreWriterOptions
    {
        CollectionName = "conference_knowledge",
        IncrementalIngestion = true
    });

using IngestionPipeline pipeline = new(
    reader, chunker, writer,
    new IngestionPipelineOptions(), _loggerFactory)
{
    ChunkProcessors =
    {
        new SummaryEnricher(enricherOptions),
        new KeywordEnricher(enricherOptions, ReadOnlySpan<string>.Empty),
        frontMatterProcessor
    }
};
```

The pipeline reads markdown, chunks it by headers, enriches each chunk with AI-generated summaries and keywords, then embeds and stores the results in Qdrant. Each step is a pluggable component: you can swap `MarkdownReader` for a PDF reader, `HeaderChunker` for a fixed-size chunker, or Qdrant for Azure AI Search. The pipeline composition stays the same.

Notice that `SummaryEnricher` and `KeywordEnricher` both take `EnricherOptions(_chatClient)`. They use the same `IChatClient` from the previous section: AI enriching its own context. The summary enricher generates a concise description of each chunk, and the keyword enricher extracts searchable terms.
Both improve retrieval quality later.

On the query side, Microsoft.Extensions.VectorData gives you `VectorStoreCollection` for semantic search over any backend:

```csharp
var results = collection.SearchAsync(query, topK);
await foreach (var result in results)
{
    var content = result.Record["content"] as string;
    // Use the content...
}
```

Similar to how you can swap database providers in EF Core, you can swap vector store providers here. Qdrant today, Azure AI Search tomorrow. Same API.

ConferencePulse also ingests data in real time as the session progresses. Poll responses, audience questions, Q&A pairs, and AI-generated insights all go into the knowledge base:

```csharp
public async Task<int> IngestResponseAsync(
    string pollId,
    string topicId,
    string question,
    Dictionary<string, int> results,
    List<string>? otherResponses = null)
{
    var sb = new StringBuilder();
    sb.AppendLine($"Poll: {question}");
    sb.AppendLine("Results:");
    var total = results.Values.Sum();
    foreach (var (option, count) in results)
    {
        var percentage = total > 0 ? (count * 100.0 / total).ToString("F1") : "0";
        sb.AppendLine($"  - {option}: {count} votes ({percentage}%)");
    }

    await _searchService.UpsertAsync(sb.ToString(),
        source: "response",
        documentId: $"response-{pollId}");
    return 1;
}
```

By the end of a session, the knowledge base contains the original imported content, every poll result, every audience question, and every AI-generated insight.

## IChatClient with tools: choosing the right level of complexity

One of the design principles we followed: use the simplest approach that gets the job done. `IChatClient` with tools handles a lot of scenarios before you need a dedicated agent framework. At the same time, when orchestration gets complex, a framework earns its place. The key is choosing the right tool.

ConferencePulse has three AI-powered features at different levels of complexity. All three use the same `IChatClient`.

### Insight generation: a single call

When a poll closes, ConferencePulse generates an insight.
The implementation is a single `GetResponseAsync` call:

```csharp
var response = await chatClient.GetResponseAsync(
[
    new(ChatRole.System, "You are a conference analytics assistant generating real-time insights from audience data."),
    new(ChatRole.User, prompt) // prompt contains the poll results
]);

var content = response.Text?.Trim();
if (!string.IsNullOrWhiteSpace(content))
{
    ctx.AddInsight(new Insight
    {
        TopicId = poll.TopicId,
        PollId = pollId,
        Content = content,
        Type = InsightType.PollAnalysis
    });
}
```

No tools, no framework. A prompt with poll results as context, and the middleware pipeline handles telemetry and logging.

### Poll generation: IChatClient with tools

Generating a poll needs more context. The AI checks the current topic, looks at what's been covered, and creates something relevant. That means tools:

```csharp
public class PollGenerationWorkflow(IChatClient chatClient, AgentTools tools)
{
    public async Task<string> ExecuteAsync(string topicId)
    {
        var options = new ChatOptions
        {
            Tools =
            [
                tools.GetCurrentTopic,
                tools.SearchKnowledge,
                tools.GetAudienceQuestions,
                tools.GetAllPollResults,
                tools.GetAllInsights,
                tools.CreatePoll
            ]
        };

        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, AgentDefinitions.SurveyArchitectInstructions),
            new(ChatRole.User, $"Generate an engaging poll for topic: {topicId}...")
        };

        var response = await chatClient.GetResponseAsync(messages, options);
        return response.Text ?? "Unable to generate poll.";
    }
}
```

Each tool is a strongly typed `AITool` property created from a C# method:

```csharp
public class AgentTools
{
    public AITool SearchKnowledge { get; }
    public AITool GetCurrentTopic { get; }
    public AITool CreatePoll { get; }
    // ...

    public AgentTools(IPollService pollService, ISemanticSearchService searchService, ...)
    {
        SearchKnowledge = AIFunctionFactory.Create(SearchKnowledgeCore,
            new AIFunctionFactoryOptions
            {
                Name = nameof(SearchKnowledge),
                Description = "Search the session knowledge base for content related to the query"
            });
        // ...
    }
}
```

The model decides it needs context, calls `GetCurrentTopic` and `SearchKnowledge`, then generates a poll and calls `CreatePoll` to save it. The `UseFunctionInvocation()` middleware handles the tool loop automatically.

### Q&A answering: RAG across multiple sources

The Q&A service brings multiple building blocks together. When an audience member asks a question, the app searches the local knowledge base, queries Microsoft Learn docs via MCP, and asks DeepWiki about relevant GitHub repos via MCP. Then it synthesizes an answer:

```csharp
// 1. Search local knowledge base
var searchResults = await searchService.SearchAsync(questionText, topK: 5);
var localContext = string.Join("\n\n---\n\n",
    searchResults.Select(r => r.Content).Where(c => !string.IsNullOrWhiteSpace(c)));

// 2. Search Microsoft Learn for documentation context (via MCP)
var docsContext = await mcpClient.SearchDocsAsync(questionText);

// 3. Ask DeepWiki about relevant .NET repos (via MCP)
var deepWikiContext = await mcpClient.AskDeepWikiAsync("dotnet/extensions", questionText);
```

VectorData for local search, MCP for external context, `IChatClient` for generation.

Now let's look at how MCP works.

## MCP: consuming and providing context

Model Context Protocol is a standard for AI applications to discover and use external tools and context.
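Under the hood, MCP is JSON-RPC 2.0 over a transport such as stdio or streamable HTTP. A tool invocation is, on the wire, roughly this request (shape per the MCP specification; the tool name and arguments here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "microsoft_docs_search",
    "arguments": { "query": "IChatClient middleware" }
  }
}
```

The .NET SDK hides this entirely; you work with typed clients and attributes instead.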
Similar to how HTTP lets any client talk to any server, MCP lets any AI app connect to any context provider using the same protocol.

ConferencePulse uses MCP in both directions.

### As a consumer

The `McpContentClient` connects to two MCP servers at startup: Microsoft Learn and DeepWiki.

```csharp
public async Task InitializeAsync(CancellationToken ct = default)
{
    var learnTransport = new HttpClientTransport(new HttpClientTransportOptions
    {
        Endpoint = new Uri("https://learn.microsoft.com/api/mcp"),
        TransportMode = HttpTransportMode.StreamableHttp
    }, loggerFactory);
    _learnClient = await McpClient.CreateAsync(learnTransport, null, loggerFactory, ct);

    var deepWikiTransport = new HttpClientTransport(new HttpClientTransportOptions
    {
        Endpoint = new Uri("https://mcp.deepwiki.com/mcp"),
        TransportMode = HttpTransportMode.StreamableHttp
    }, loggerFactory);
    _deepWikiClient = await McpClient.CreateAsync(deepWikiTransport, null, loggerFactory, ct);
}
```

Once connected, calling a tool on any MCP server uses the same pattern:

```csharp
var result = await _learnClient.CallToolAsync(
    "microsoft_docs_search",
    new Dictionary<string, object?> { ["query"] = query },
    cancellationToken: ct);
```

Any server that speaks MCP works with this client code.

### As a provider

ConferencePulse is also an MCP server. Any MCP-compatible client (GitHub Copilot, Claude, a custom tool) can connect and query session data.

```csharp
[McpServerToolType]
public class ConferenceTools
{
    [McpServerTool(Name = "get_session_status", ReadOnly = true),
     Description("Returns the current conference session status.")]
    public static string GetSessionStatus(ISessionService sessionService)
    {
        var session = sessionService.CurrentSession;
        if (session is null) return "No active conference session.";
        // ... build status string
    }

    [McpServerTool(Name = "search_session_knowledge", ReadOnly = true),
     Description("Searches the session knowledge base for relevant content.")]
    public static async Task<string> SearchSessionKnowledge(
        ISemanticSearchService searchService,
        [Description("The search query.")] string query,
        [Description("Max results. Defaults to 5.")] int maxResults = 5)
    {
        var results = await searchService.SearchAsync(query, maxResults);
        // ... format results
    }
}
```

Registration takes a few lines in Program.cs:

```csharp
builder.Services
    .AddMcpServer(options =>
    {
        options.ServerInfo = new() { Name = "ConferencePulse", Version = "1.0.0" };
    })
    .WithToolsFromAssembly(typeof(ConferenceTools).Assembly)
    .WithHttpTransport();

app.MapMcp("/mcp");
```

The app consumes external knowledge to answer questions and provides its own data for external tools. Same protocol in both directions.

## Microsoft Agent Framework: multi-agent orchestration

For most of ConferencePulse's features, `IChatClient` with tools was the right choice. But the session summary needed something more: three specialized agents running concurrently, each with scoped tools, feeding their results into a synthesis step. That's where Microsoft Agent Framework comes in.

```csharp
public class SessionSummaryWorkflow(IChatClient chatClient, AgentTools tools)
{
    public async Task ExecuteAsync()
    {
        ChatClientAgent pollAnalyst = new(chatClient,
            name: "PollAnalyst",
            description: "Analyzes poll results and trends",
            instructions: "You are a poll analyst. Use GetAllPollResults to retrieve every poll...",
            tools: [tools.GetAllPollResults]);

        ChatClientAgent questionAnalyst = new(chatClient,
            name: "QuestionAnalyst",
            description: "Analyzes audience questions and themes",
            instructions: "You are an audience question analyst...",
            tools: [tools.GetAudienceQuestions]);

        ChatClientAgent insightAnalyst = new(chatClient,
            name: "InsightAnalyst",
            description: "Analyzes generated insights and knowledge patterns",
            instructions: "You are an insight analyst...",
            tools: [tools.GetAllInsights, tools.SearchKnowledge]);
```

Each `ChatClientAgent` wraps the same `IChatClient`. The agents get scoped tools (PollAnalyst only sees poll data, QuestionAnalyst only sees questions) and specialized instructions.

The orchestration uses `AgentWorkflowBuilder.BuildConcurrent` for the fan-out, then `WorkflowBuilder` to compose the full pipeline:

```csharp
        // Fan-out: three analysts run concurrently
        var analysisWorkflow = AgentWorkflowBuilder.BuildConcurrent(
            [pollAnalyst, questionAnalyst, insightAnalyst],
            MergeAgentOutputs);

        // Fan-in: synthesizer merges all findings
        ChatClientAgent synthesizer = new(chatClient,
            name: "Synthesizer",
            instructions: "Synthesize the analyses into one cohesive session summary...");

        // Compose: concurrent analysis → sequential synthesis
        var analysisExec = new SubworkflowBinding(analysisWorkflow, "Analysis");
        ExecutorBinding synthExec = synthesizer;

        var composedWorkflow = new WorkflowBuilder(analysisExec)
            .WithName("SessionSummaryPipeline")
            .BindExecutor(synthExec)
            .AddEdge(analysisExec, synthExec)
            .WithOutputFrom([synthExec])
            .Build();

        var run = await InProcessExecution.Default.RunAsync(
            composedWorkflow,
            "Analyze the conference session data and provide your specialized findings.");
```

Compare this with the poll generation workflow from earlier, which is about 10 lines using `IChatClient` and tools.
The session summary is about 40 lines because it genuinely needs concurrent agents with scoped tools and a synthesis step.

In ConferencePulse, the Agent Framework was the right choice for exactly one workflow. Everything else worked well with `IChatClient` directly. Both approaches use the same underlying abstraction.

## How the building blocks fit together

During the MVP Summit session, attendees interacted with features powered by different layers of the stack:

| Feature | Powered by |
| --- | --- |
| Polls | `IChatClient` + tools (MEAI) |
| Knowledge grounding | `IngestionPipeline` + `VectorStoreWriter` |
| Q&A answers | VectorData + `IChatClient` + MCP |
| Auto-generated insights | `IChatClient` (single call) |
| Session summary | Microsoft Agent Framework (fan-out/fan-in) |
| Observability | `UseOpenTelemetry()` + Aspire Dashboard |
| Infrastructure | Aspire: Qdrant + PostgreSQL + Azure OpenAI |

Each building block handles one concern and composes with the others. `IChatClient` shows up inside the ingestion enrichers, inside the agent tools, inside the MCP-augmented Q&A, and inside the Agent Framework's `ChatClientAgent`. You learn it once and use it everywhere.

Providers will change and models will evolve. The building blocks give you a stable layer to build on, and you swap implementations underneath without rewriting application code.

## Get started

We're excited to see what you build with these building blocks.

- **Try ConferencePulse:** the source is on GitHub. Clone it, run `aspire run`, and see the full stack in action.
- **Learn more** about the individual libraries:
  - Microsoft.Extensions.AI
  - Microsoft.Extensions.VectorData
  - Microsoft.Extensions.DataIngestion
  - Model Context Protocol in .NET
  - Microsoft Agent Framework
- **Give us feedback:** file an issue in any of the repos or catch us on the .NET Community Standup.

Now that you've seen how these building blocks compose, give them a try and let us know what you think.

The post Building an AI-Powered Conference App with .NET's Composable AI Stack appeared first on .NET Blog.