> *This is a guest blog from Syncfusion. Learn more about the free, open-source Syncfusion Toolkit for .NET MAUI.*

As a proud partner with the .NET MAUI team, Syncfusion is excited to share how custom-built AI agents are dramatically improving the development workflow and contributor experience for the entire .NET MAUI community.

## The Traditional Contributor Challenge

Contributing to .NET MAUI has historically required significant time investment for even straightforward bug fixes. Our team identified key bottlenecks in the contribution workflow:

- **Issue Reproduction** – Setting up the Sandbox app and reproducing platform-specific issues (30–60 minutes)
- **Root Cause Analysis** – Debugging across multiple platforms and handlers (1–3 hours)
- **Fix Implementation** – Writing and testing the fix (30–120 minutes)
- **Test Creation** – Developing comprehensive test coverage (1–2 hours)

For community contributors new to the repository, this could easily extend to days of effort, creating a significant barrier to entry.

## Our Solution: Custom-Built AI Agents and Skills for .NET MAUI

The .NET MAUI team has developed a suite of specialized agents and skills that work together to streamline the entire contribution lifecycle.
Syncfusion's team has been leveraging these to dramatically accelerate our .NET MAUI contributions.

## pr-review skill: Intelligent Issue Resolution with Built-In Quality Assurance

The `pr-review` skill implements a systematic 4-phase workflow that handles the complete pull request lifecycle.

### Phase 1: Pre-Flight Analysis

The skill begins by conducting a comprehensive issue analysis:

- Reads the GitHub issue and extracts reproduction steps
- Analyzes the codebase to understand affected components
- Identifies platform-specific considerations (Android, iOS, Windows, Mac Catalyst)

### Phase 2: Gate – Test Verification

Before any fix is attempted, the skill verifies that tests exist and correctly catch the issue:

- Checks if tests exist for the issue/PR
- If tests are missing, notifies the user to create them first using `write-tests-agent`
- Validates that existing tests actually fail without a fix (proving they catch the bug)

**Note:** The recommended workflow is to use `write-tests-agent` first to create tests, then use the `pr-review` skill to verify and work on the fix.

### Phase 3: Try-Fix – Multi-Attempt Problem Solving

This is where the skill's intelligence shines.
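Before Try-Fix runs, the gate in Phase 2 has reduced things to a small three-way decision: no tests means stop and ask for them, tests that already pass cannot be catching the bug, and only failing tests let the fix attempts proceed. The Python sketch below is illustrative only; `gate_check` and the `run_tests` callable are hypothetical names, not the skill's actual implementation:

```python
# Illustrative sketch of the Phase 2 gate. The run_tests callable is a
# stand-in for the repository's real build-and-test infrastructure.

def gate_check(tests_exist: bool, run_tests) -> str:
    """Return the gate decision before any fix attempt is allowed."""
    if not tests_exist:
        # No tests yet: ask the user to create them with write-tests-agent.
        return "create-tests-first"
    if run_tests() == "pass":
        # Tests pass on the unfixed codebase, so they cannot catch the bug.
        return "tests-do-not-catch-bug"
    # Tests fail on the buggy codebase: they reproduce the issue.
    return "proceed-to-try-fix"

# Tests exist and fail without a fix → the workflow may proceed.
print(gate_check(True, lambda: "fail"))   # proceed-to-try-fix
print(gate_check(False, lambda: "fail"))  # create-tests-first
```

In the real workflow the skill drives the repository's actual test runs rather than a callable, but the decision logic is this same three-way branch.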
Using the `try-fix` skill with 4 AI models, it:

- **Proposes independent fix approaches** – Up to 4 different strategies, each taking a unique angle
- **Applies and tests empirically** – Runs the test suite after each fix attempt
- **Records detailed results** for comparison

Example try-fix workflow:

- Attempt 1: Handler-level fix in `CollectionViewHandler` → Tests pass on iOS, fail on Android
- Attempt 2: Platform-specific fix in Items2 → Tests pass on all platforms, but causes a regression
- Attempt 3: Core control fix with platform guards → All tests pass, no regressions
- ✓ Attempt 3 selected as the optimal solution

### Phase 4: Report Generation

The skill produces a comprehensive summary including:

- Fix description and approach rationale
- Test results (before/after comparison)
- Alternative approaches attempted and why they weren't selected
- Recommendation (approve PR or request changes)

## write-tests-agent: Intelligent Test Strategy Selection

The `write-tests-agent` acts as a test strategist that determines the optimal testing approach for each scenario.

### Multi-Strategy Test Creation

The agent analyzes the issue and selects appropriate test types.

**For UI interaction bugs:**

- Invokes the `write-ui-tests` skill
- Creates Appium-based tests in TestCases.HostApp and TestCases.Shared.Tests
- Adds proper AutomationId attributes for element location
- Implements platform-appropriate assertions

**For XAML parsing and compilation bugs:**

- Invokes the `write-xaml-tests` skill
- Creates tests in the Controls.Xaml.UnitTests project
- Tests XAML parsing, XamlC compilation, and source generation
- Validates markup extensions and binding syntax
- Tests across all three XAML inflators (Runtime, XamlC, SourceGen)

**Future test types:**

- Unit tests (API behavior, logic, calculations)
- Device tests (platform-specific API testing)
- Integration tests (end-to-end scenarios)

### Test Verification: Fail → Pass Validation

A critical feature of `write-tests-agent` is its use of the `verify-tests-fail-without-fix` skill.

**Mode 1: Verify Failure Only (test creation, no fix yet)**

Use when writing tests before a fix
exists:

1. Run tests against the current codebase (which still has the bug)
2. Verify tests FAIL (proving they correctly detect the bug)
3. ✓ Tests confirmed to reproduce the issue

No files are reverted or modified. This is a single test run that validates your tests actually catch the bug.

**Mode 2: Full Verification (fix + test validation)**

Use when a PR contains both a fix and tests:

1. Revert fix files to their pre-fix state (test files remain unchanged throughout)
2. Run tests → should FAIL (the bug is present without the fix)
3. Restore the fix files
4. Run tests → should PASS (the fix resolves the bug)
5. ✓ Both tests and fix verified

This verification step ensures test quality: we avoid the common problem of tests that pass regardless of whether the bug is fixed.

### Comprehensive Coverage Through Multiple Test Types

When appropriate, `write-tests-agent` creates layered test coverage.

**Example: CollectionView scrolling bug**

- **UI test** – Appium test that scrolls and verifies visual positioning
- **XAML test** – Validates that the ItemTemplate XAML compiles correctly across all inflators (Runtime, XamlC, SourceGen)

This dual-layer approach provides both behavioral validation (does the scrolling work?)
and structural validation (does the XAML compile correctly?).

**Note:** As more test type skills are added (device tests, unit tests), the agent will be able to provide even more comprehensive coverage across different levels of the stack.

## sandbox-agent: Manual Testing and Validation

The `sandbox-agent` complements automated testing with manual validation capabilities:

- Creates test scenarios in the Controls.Sample.Sandbox app
- Builds and deploys to iOS simulators, Android emulators, or Mac Catalyst
- Generates Appium test scripts for automated interaction

When to use `sandbox-agent`:

- Functional validation of PR fixes before merge
- Reproducing complex user-reported issues
- Visual verification of layout and rendering bugs
- Testing scenarios that are difficult to automate

## learn-from-pr agent: Continuous Improvement

The `learn-from-pr` agent analyzes completed PRs to extract lessons learned and applies improvements to instruction files, skills, and documentation, creating a feedback loop that makes the entire system smarter over time.

## How to Use These Tools: Prompt Examples

### Using the pr-review skill to Fix an Issue

When you want to create a fix for a GitHub issue, use the `pr-review` skill to guide you through the entire workflow.

**Tip:** These prompts are typed directly in the GitHub Copilot CLI terminal while inside the cloned .NET MAUI repository. The skill reads your local repository files, runs builds and tests on your machine, and interacts with GitHub APIs.

Basic fix invocation:

    Fix issue #67890

With additional context for complex scenarios:

    Fix issue #67890. The issue appears related to async lifecycle events
    during CollectionView item recycling. Previous attempts may have failed
    because they didn't account for view recycling on Android.

For alternative fix exploration:

    Fix issue #67890. Try a handler-level approach first.
    If that doesn't work, consider modifying the core control with platform guards.

The skill will:

1. Analyze the issue and codebase (Pre-Flight)
2. Check if tests exist; if not, notify you to create them with `write-tests-agent` (Gate)
3. Verify tests fail without the fix and pass with the fix (Validation)
4. Try up to 4 different fix approaches across 4 AI models (Try-Fix)

### Using the pr-review skill to Review a Pull Request

When reviewing an existing PR (yours or someone else's), use the `pr-review` skill to validate the fix and ensure quality.

Basic PR review:

    Review PR #12345

With focus areas:

    Review PR #12345. Focus on thread safety in the async handlers
    and ensure Android platform-specific code follows conventions.

For test coverage validation:

    Review PR #12345. Verify that the tests actually reproduce the bug
    and cover all affected platforms (iOS and Mac Catalyst).

The skill will:

1. Analyze the PR changes and linked issue
2. Check if tests exist; if not, notify you to create them with `write-tests-agent` (Gate)
3. Verify tests fail without the fix and pass with it (Validation)
4. Provide a detailed review report

**Important:** `Fix issue #XXXXX` creates a new fix from scratch. `Review PR #XXXXX` validates and improves an existing PR. The skill adapts its workflow based on whether you're creating or reviewing.

### Writing Tests with write-tests-agent

Simple invocation:

    Write tests for issue #12345

Specifying a test type:

    Write UI tests for issue #12345 that reproduce the button click behavior.

The agent analyzes the issue, selects appropriate test types, and creates comprehensive coverage. If you provide hints about reproduction steps or failure conditions, it incorporates them into the test strategy.

### Testing with sandbox-agent

Basic testing:

    Test PR #12345 in Sandbox

Platform-specific testing:

    Test PR #12345 on iOS 18.5. Focus on the layout changes in SafeArea handling.

Reproducing user-reported issues:

    Reproduce issue #12345 in Sandbox on Android.
    The user reported it happens
    when rotating the device while a dialog is open.

## Multi-Model Architecture for Quality Assurance

The `pr-review` skill leverages 4 AI models sequentially in Phase 3 (Try-Fix) to provide comprehensive solution exploration:

| Order | Model | Purpose |
|---|---|---|
| 1 | Claude Opus 4.6 | First fix attempt – deep analysis and reasoning |
| 2 | Claude Sonnet 4.6 | Second attempt – balanced speed and quality |
| 3 | GPT-5.3-Codex | Third attempt – code-specialized model |
| 4 | Gemini 3 Pro Preview | Fourth attempt – different model family perspective |

Why sequential, not parallel?

- Only one Appium session can control a device or emulator at a time; parallel runs would interfere with each other's test execution
- All try-fix runs modify the same source files; simultaneous changes would overwrite each other's code and corrupt the working tree
- Each model runs in a completely separate context with zero visibility into what other models are doing, ensuring every fix attempt is genuinely independent and uninfluenced
- Before each new model starts, a mandatory cleanup restores the working tree to a clean state, reverting any files the previous attempt modified so that every model begins from an identical baseline

**Cross-pollination rounds:** The 4 models don't just run once; they participate in multiple rounds of cross-pollination:

- **Round 1:** Each model independently proposes and tests one fix approach (4 attempts total)
- **Round 2:** Each model reviews all Round 1 results and decides: "NO NEW IDEAS" (exploration is exhausted for this model) or "NEW IDEA: [description]" (a new approach that hasn't been tried)
- **Round 3 (if needed):** Repeat until all 4 models confirm "NO NEW IDEAS" (max 3 rounds)

This ensures comprehensive exploration: models see what failed, why it failed, and what succeeded, allowing them to propose fundamentally different approaches.

This multi-model approach ensures:

- **Diverse solution exploration** – Each model brings different problem-solving patterns
- **Comprehensive fix coverage** – 4 independent attempts with
different AI architectures
- **Learning from failures** – Later models see why earlier attempts failed
- **Reduced hallucination** – Multiple models must independently solve the problem
- **Best fix selection** – Data-driven comparison across 4 different approaches

The `try-fix` skill benefits most from this architecture: each model proposes an independent fix, tests it empirically, and records detailed results for comparison.

## Measurable Impact on Team Productivity

Since implementing these agents, we've observed significant improvements across our team:

| Task | Before (Manual) | After (Agents) | Time Saved |
|---|---|---|---|
| Issue reproduction | 30–60 min | 5–10 min | ~50 min |
| Root cause analysis | 1–3 hours | 20–40 min | ~1.5 hours |
| Implementing fix | 30–120 min | Automated | ~1 hour |
| Writing tests | 1–2 hours | 10–20 min | ~1.5 hours |
| Exploring alternatives | Not feasible | Built-in | Priceless |
| **Total per issue** | **4–8 hours** | **45 min – 2.5 hours** | **~4–5 hours** |

That's a 50–70% time reduction per issue. Our team can now address 2–3x more issues per week while maintaining higher quality standards.

### Quality Improvements

Beyond time savings, we've seen measurable quality improvements:

- **Test coverage:** 95%+ of PRs now include comprehensive test coverage (up from ~60%)
- **First-time fix rate:** 80% of fixes work correctly on the first attempt (up from ~50%)
- **Code review cycles:** Reduced back-and-forth during review

## The Skills Ecosystem: Composable Capabilities

These agents are built on a foundation of reusable skills: modular capabilities that can be composed together for different workflows.

### Core Skills

**try-fix**

- Proposes ONE independent fix approach per invocation
- Applies the fix, runs tests, captures results
- Records failure analysis for learning
- Iterated up to 3 times per model if errors occur

**write-ui-tests**

- Creates test pages in TestCases.HostApp/Issues/
- Generates Appium tests in TestCases.Shared.Tests/Tests/Issues/
- Adds AutomationIds for element location
- Implements platform-appropriate assertions

**write-xaml-tests**

- Creates XAML test files in Controls.Xaml.UnitTests/Issues/
- Tests across Runtime, XamlC, and SourceGen
inflators
- Validates XAML parsing, compilation, and code generation
- Handles special file extensions (.rt.xaml, .rtsg.xaml) for invalid code generation cases

**verify-tests-fail-without-fix**

- Mode 1 (Failure Only): Runs tests once to verify they FAIL, proving they catch the bug
- Mode 2 (Full Verification): Reverts fix files → tests FAIL → restores fix → tests PASS
- Test files are never reverted; only fix files are manipulated
- Ensures test quality by proving tests detect the bug

### Supporting Skills

**azdo-build-investigator**

- Queries Azure DevOps for PR build information
- Retrieves failed job details
- Downloads Helix test logs for investigation
- Identifies build failures and test failures

**run-device-tests**

- Executes device tests locally on iOS/Android/Mac Catalyst
- Supports test filtering by category
- Manages device/simulator lifecycle
- Captures test results and logs

**pr-finalize**

- Verifies the PR title and description match the implementation
- Performs a final code review for best practices
- Used before merging to ensure quality and documentation

### Why Skills Matter

Skills provide:

- **Reusability** – The same skill is used across multiple agents and workflows
- **Testability** – Each skill can be tested and improved independently
- **Composability** – Agents combine skills to create complex workflows

## Impact on the Open Source Community

These agents aren't just improving our internal team productivity; they're transforming the contributor experience for the entire .NET MAUI community.

### Lowering the Barrier to Entry

**Before:** New contributors faced a steep learning curve:

- Understanding the multi-platform handler architecture
- Knowing which test type is appropriate
- Following undocumented platform-specific conventions
- Navigating complex build and test infrastructure

**Now:** Agents automatically:

- Generate platform-appropriate code patterns
- Select and create the correct test types
- Follow repository conventions
- Handle build and test infrastructure complexity

### Improving Contribution Quality

Every PR now benefits from:

- **Comprehensive test coverage** – Multiple test types covering
different scenarios
- **Alternative fix exploration** – Data-driven comparison of approaches
- **Automated code review** – Catches common issues before human review

### Accelerating the Contribution Cycle

**Maintainer perspective:**

- Fewer back-and-forth review cycles
- Less time requesting test coverage
- Reduced need to explain platform-specific conventions
- Higher confidence in community PRs

**Contributor perspective:**

- Faster feedback through automated validation
- Clear guidance when fixes don't work
- Learning repository best practices through agent interactions
- Greater confidence in submitting PRs

## Getting Started as a Contributor

We encourage the community to leverage these agents when contributing to .NET MAUI.

### Step 1: Set Up GitHub Copilot CLI

Install GitHub Copilot CLI and authenticate it. See the GitHub Copilot CLI documentation for setup instructions.

### Step 2: Find an Issue to Work On

Browse our issue tracker for contribution opportunities:

- **Good First Issues** – Great for new contributors
- **Help Wanted** – Community contributions welcome
- **Priority Issues** – High-impact fixes

### Step 3: Use the Agents and Skills

For issue fixes, first write tests:

    Write tests for issue #12345

Then implement the fix:

    Fix issue #12345

For PR review and improvement:

    Review PR #12345

The workflow:

1. `write-tests-agent` creates tests (UI, XAML) and verifies they catch the bug
2. The `pr-review` skill verifies tests exist, explores fix alternatives, and compares approaches
3. A human reviews and refines the output

**Note:** If you run "Fix issue #12345" without tests, the `pr-review` skill will notify you to create them first using `write-tests-agent`.

### Step 4: Review and Refine

The agents produce high-quality output, but human review is essential:

- Verify the fix addresses the root cause
- Check that tests cover edge cases
- Ensure code follows .NET MAUI conventions
- Add additional context where needed

### Step 5: Submit Your PR

With agent assistance, your PR will typically include:

- A working fix with a clear rationale
- Comprehensive test coverage
- Proper commit messages and PR description
- Validation that tests
prove the fix works

This significantly increases merge rates and reduces review cycles.

## Hypothetical Example: From Issue to Merged PR

Let's walk through a typical contribution workflow with agents.

**Issue #12345:** CollectionView items disappear on iOS when scrolling rapidly

**Traditional workflow (4–6 hours):**

1. Set up the Sandbox app with a CollectionView (30 min)
2. Try to reproduce on an iOS simulator (45 min)
3. Debug handler code to find the root cause (2 hours)
4. Implement the fix in Items2/iOS/ (1 hour)

**Agent-assisted workflow:**

Step 1: Create tests first

    Write tests for issue #12345

Step 2: Fix the issue

    Fix issue #12345

The `pr-review` skill executes:

1. **Pre-Flight (5–10 min)** – Reads the issue, identifies an iOS-specific CollectionView scrolling bug, analyzes Items2/iOS/ handler code, and identifies potential causes: view recycling, async loading
2. **Gate** – Verifies the tests from Step 1 catch the bug
3. **Try-Fix (20–40 min)** – Tries up to 4 fix approaches across 4 models, testing each empirically
4. **Report** – Compares all approaches and selects the optimal solution

**Result:** A high-quality PR ready in under an hour instead of half a day.

## Conclusion

The introduction of custom-built AI agents has fundamentally changed how our team approaches .NET MAUI development. By automating the mechanical aspects of issue resolution (reproduction, testing, fix exploration), we can focus on what matters most: understanding the problem, reviewing solutions, and ensuring quality.

### Key Takeaways

- 50–70% reduction in time per issue
- 2–3x increase in issues addressed per week
- 95%+ test coverage on new PRs (up from ~60%)
- Lower barrier to community contribution
- Higher quality through multi-model fix exploration

We invite the .NET community to experience this new workflow.
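At the heart of that workflow is the sequential, multi-round exploration used by the Try-Fix phase: each model takes a turn, sees the recorded results of earlier attempts, and rounds repeat until every model reports "NO NEW IDEAS". The sketch below is illustrative only; `MODELS`, `explore`, and the `propose` interface are hypothetical names, and the real skill also performs worktree cleanup and empirical test runs between attempts:

```python
# Illustrative sketch of sequential multi-model cross-pollination rounds.
# propose(model, history) returns a new idea string, or None to signal
# "NO NEW IDEAS" for that model in that round.

MODELS = ["model-a", "model-b", "model-c", "model-d"]
MAX_ROUNDS = 3

def explore(propose):
    history = []  # shared record of every attempt, visible to later models
    for round_no in range(1, MAX_ROUNDS + 1):
        new_ideas = 0
        for model in MODELS:  # sequential: one device/worktree at a time
            idea = propose(model, history)
            if idea is not None:
                history.append((round_no, model, idea))
                new_ideas += 1
        if new_ideas == 0:    # every model said "NO NEW IDEAS"
            break
    return history

# Example: each model proposes one idea in round 1, then runs dry,
# so the loop stops after round 2 finds nothing new.
runs = explore(lambda m, h: f"fix-by-{m}" if not any(e[1] == m for e in h) else None)
```

Because every attempt lands in a shared history that later models review, Round 2 proposals can differ fundamentally from Round 1 rather than repeating failed approaches.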
Your contributions make .NET MAUI better for millions of developers worldwide, and our agents are here to make that contribution process as smooth as possible.

## Resources

**Documentation**

- .NET MAUI Repository
- pr-review Skill Documentation
- write-tests-agent Documentation
- sandbox-agent Documentation
- learn-from-pr Agent Documentation
- Skills Directory
- Contributing Guide

**Getting Started**

- GitHub Copilot CLI Documentation
- Good First Issues
- .NET MAUI Discussions

We'd love to hear about your experience using these agents. Share your success stories, challenges, and suggestions in the dotnet/maui discussions or on social media with #dotnetMAUI.

Happy coding!

*The post Accelerating .NET MAUI Development with AI Agents appeared first on .NET Blog.*