**TL;DR:** Azure AI Foundry now lets you ingest data directly from Azure Blob Storage, ADLS Gen2, or Microsoft OneLake and create an Azure AI Search index in just one click.

When you create an agent in Azure AI Foundry, one of the most powerful steps is "Add knowledge": grounding your agent with your enterprise data so it can answer questions and act with context. Previously, this required you to bring an existing Azure AI Search index and configure it before you could connect your data. That meant extra setup steps and more friction, especially if you were just experimenting. Today, we're making this much simpler.

## Why This Matters

Grounding (a.k.a. retrieval augmentation) is one of the highest-leverage steps in agent development. But the traditional workflow (provision a search service, design an index, run an ingestion pipeline, create skillsets, then wire it all to your agent) adds friction when you simply want to test a hypothesis or enable a new scenario.

Now you can collapse that entire path into a single, integrated flow inside Azure AI Foundry. You focus on: (1) choosing a data source, (2) selecting an embedding model, and (3) clicking create. Foundry orchestrates ingestion, chunking, embedding, and vector index creation for you.

## What's New

You can now natively create an Azure AI Search vector index inside Foundry during the "Add knowledge" step of agent creation or editing.

Supported data sources (initial wave):

- Azure Blob Storage
- Azure Data Lake Storage (ADLS) Gen2
- Microsoft OneLake (Fabric)

Key capabilities:

| Capability | Description |
| --- | --- |
| Inline index creation | No pre-existing Search index required. |
| Automatic ingestion | Content is pulled, chunked, and prepared for embeddings. |
| Embedding model selection | Choose from supported embedding models at creation time. |
| Hybrid-ready | Index configured for combined vector + keyword retrieval. |
| Secure by design | Respects Azure RBAC and network isolation of underlying resources. |

## How It Works

1. Open (or create) an agent in Azure AI Foundry.
2. Select **Add knowledge**.
3. Choose a supported data source (Blob / ADLS Gen2 / OneLake).
4. Authorize the connection (if this is the first time) and pick containers/paths.
5. Select an Azure OpenAI embedding model (e.g., text-embedding-*).
6. Click **Create index & ingest**.

Foundry then pulls content → chunks documents → generates embeddings → provisions (or reuses) an Azure AI Search index optimized for hybrid queries. Your agent can answer grounded questions immediately.

No separate indexing pipeline. No manual schema definition. No script to run. Just connect data and go.

## Try It Today

Get started with our tutorial on How to create an Azure AI Search index in Foundry.

## Related Resources

- Azure AI Search Concepts: https://learn.microsoft.com/azure/search/search-what-is-azure-search
- Hybrid Retrieval Overview: https://learn.microsoft.com/azure/search/hybrid-search-overview
- Embeddings Models in Foundry: https://learn.microsoft.com/azure/ai-foundry/openai/concepts/models
- Agentic Retrieval (preview) in Azure AI Search: https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/agentic-retrieval-updates-in-azure-ai-search/4450621

Happy grounding! We can't wait to see what you build. Share your launches with #AzureAIFoundry!
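**Bonus: what "chunking" means.** If you're curious what happens inside the "chunks documents" step, here is a minimal sketch of the general technique. Foundry's actual chunking strategy and parameters are internal to the service; the fixed-size, overlapping splitter below (with hypothetical `chunk_size`/`overlap` values) only illustrates the idea that adjacent chunks share some text so context spanning a boundary isn't lost.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share
    `overlap` characters so boundary context appears in both."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last window already reached the end of the text
    return chunks
```

Production pipelines typically split on token counts and sentence or paragraph boundaries rather than raw characters, but the windowing idea is the same.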
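**Bonus: what the embeddings are for.** The embedding model you pick in step 5 turns each chunk into a vector, and retrieval ranks chunks by how close their vectors are to the query's vector. The toy sketch below uses hypothetical 3-dimensional vectors and document IDs to show cosine-similarity ranking; real embedding models produce vectors with hundreds or thousands of dimensions, and Azure AI Search handles the ranking for you.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def vector_search(query_vec: list[float], index: list[dict], top_k: int = 2) -> list[str]:
    """Return the IDs of the top_k chunks most similar to the query."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, doc["vector"]), reverse=True)
    return [doc["id"] for doc in ranked[:top_k]]

# Hypothetical 3-dimensional "embeddings" standing in for real model output.
tiny_index = [
    {"id": "billing-faq", "vector": [0.9, 0.1, 0.0]},
    {"id": "hr-handbook", "vector": [0.0, 0.8, 0.2]},
    {"id": "network-guide", "vector": [0.1, 0.1, 0.9]},
]
```

A query whose embedding points toward the same region of the space as a chunk's embedding retrieves that chunk first, which is why semantically related text matches even without shared keywords.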
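**Bonus: what "hybrid-ready" means.** A hybrid index serves both keyword (BM25) and vector queries and merges the two ranked result lists; Azure AI Search documents that it merges them with Reciprocal Rank Fusion (RRF). The sketch below shows RRF on two hypothetical rankings; it is an illustration of the fusion formula, not the service's implementation.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: each list contributes 1 / (k + rank)
    per document; documents ranked well in several lists rise to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, fusing a keyword ranking `["a", "b", "c"]` with a vector ranking `["b", "d", "a"]` puts `b` first, because it scores well in both lists even though neither list ranks it first alone.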