# How To Build an AI Health Care Agent on Amazon Bedrock


When I first heard about Amazon Bedrock Flows, I was intrigued. Imagine dragging and dropping your way to a fully functional AI assistant: no complex coding, no infrastructure headaches, just pure creative problem-solving. It's a dream setup, especially for someone like me who often prototypes intelligent assistants.

In this tutorial, I'll walk you through how I built a simple health care assistant using Amazon Bedrock Flows. The assistant can answer questions about diseases from a knowledge base, and it can fetch patient details using an agent connected to an AWS Lambda function.

This project is perfect if you're curious about real-world applications of generative AI in health care, or if you just want to get hands-on with Amazon Bedrock Flows in a structured and practical way.

NOTE: The data used to build this application is dummy data that was generated programmatically.

## What You'll Need: Tools and AWS Services

To begin, let's get your tech stack ready. Here's what I used to build this project:

- Amazon Bedrock: The core service for accessing foundation models securely.
- Bedrock Flows: Used to visually design the assistant's interaction logic.
- Knowledge Base: Stores disease-related medical content.
- Agent: Handles patient queries and integrates with Lambda.
- AWS Lambda: Used to simulate patient data responses.
- Amazon S3: Stores the files for the Knowledge Base.
- Amazon DynamoDB: Used to keep patient records.
- Amazon Aurora PostgreSQL Serverless: Used to keep embeddings for the Knowledge Base.

## Our Use Case Overview

Our goal is to create a conversational assistant that can:

- Retrieve patient data based on an ID (such as "What is the condition of patient 123?")
- Provide information about diseases ("What are the symptoms of malaria?")

Real-world example:

> Doctor: "Tell me about patient 456."
> Assistant: "Patient 456 is stable and diagnosed with malaria."
> Doctor: "What are the symptoms of malaria?"
> Assistant: "Common symptoms include fever, chills and muscle aches."

We'll achieve this by combining an AI agent (with a Lambda function for patient data) and a Knowledge Base (with disease content).

## Step 1: Prepare the Knowledge Base With Disease Data

### 1.1 Upload to S3

Upload the files to an S3 bucket in your account. Note the folder URL; you'll need it while creating the Knowledge Base.

*Dummy disease data in S3.*

### 1.2 Create the Knowledge Base

1. Go to Amazon Bedrock > Knowledge Bases.
2. Click Create knowledge base.
3. Choose Knowledge Base with Vector Store.
4. Name it knowledge-base-dummy-disease-data.
5. For IAM permissions, choose Create and use a new service role.
6. Choose Amazon S3 as the data source and click Next.
7. Name your data source, choose the S3 folder where you put the data files and click Next.
8. Choose an embedding model. I am using Amazon Titan Text Embeddings V2.
9. Select a vector store. I am using Amazon Aurora PostgreSQL Serverless to save costs. Click Next.
10. Review and click Create Knowledge Base. Creation can take a few minutes depending on the file size.

Once the Knowledge Base is created, open it, select the data source and click the Sync button on the Data Source tab. This step is important; without it, your data won't be searchable.

*Knowledge Base Sync button.*

Click Test Knowledge Base at the top right, choose a model (I am using Amazon Nova Lite) and test queries like:

"What are the symptoms of COVID-19?"
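You can also sanity-check retrieval from code. Here's a minimal sketch using the boto3 `bedrock-agent-runtime` client's `retrieve` API; the Knowledge Base ID and region are placeholders you'd replace with your own values:

```python
import boto3

# The bedrock-agent-runtime client exposes the retrieve API for Knowledge Bases
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Replace YOUR_KB_ID with the Knowledge Base ID from the Bedrock console
response = client.retrieve(
    knowledgeBaseId="YOUR_KB_ID",
    retrievalQuery={"text": "What are the symptoms of COVID-19?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)

# Each result carries the matched chunk text and a relevance score
for result in response["retrievalResults"]:
    print(result["score"], result["content"]["text"][:200])
```

If the sync worked, you should see chunks from your disease files come back with scores.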
## Step 2: Create the Agent To Handle Patient Data and Knowledge Base Queries

### 2.1 Create a DynamoDB Table and Load Data

Go to the DynamoDB service in the AWS console and create a table named DummyPatientTable with a partition key `patient_id` of type Number (the Lambda function's query depends on this key), then execute the script below to load dummy records. Keep the AWS credentials in a .env file or as environment variables.

```python
import boto3
import random
from datetime import datetime, timedelta
from dotenv import load_dotenv
import os

# Load AWS credentials from .env file
load_dotenv()
aws_access_key_id = os.getenv("AWS_ACCESS_KEY_ID")
aws_secret_access_key = os.getenv("AWS_SECRET_ACCESS_KEY")
region_name = os.getenv("AWS_REGION", "us-east-1")

# Create DynamoDB resource
dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
    region_name=region_name
)

# Reference the table
table = dynamodb.Table('DummyPatientTable')

# Sample values for policy types, conditions and statuses
policy_types = ['Basic', 'Premium', 'Platinum']
conditions = ['Diabetes', 'Hypertension', 'Asthma', 'Healthy',
              "Alzheimer's Disease", 'Fibromyalgia', 'Arthritis', 'Stroke',
              'Kidney Disease', 'High Blood Pressure', 'Heart Disease']
statuses = ['active', 'inactive', 'pending']

# Generate and insert mock data
for patient_id in range(1, 201):  # Generates patient records 1 to 200
    item = {
        'patient_id': patient_id,
        'policy_type': random.choice(policy_types),
        'status': random.choice(statuses),
        'condition': random.choice(conditions),
        'last_activity_date': (datetime.today() - timedelta(days=random.randint(0, 365))).strftime('%Y-%m-%d')
    }
    response = table.put_item(Item=item)
    print(f"Inserted patient_id {patient_id}: {response['ResponseMetadata']['HTTPStatusCode']}")
```

Check DynamoDB for the created data. The data will look like this:

*DynamoDB patient data.*
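If you'd rather create the table from code instead of the console, here's a minimal sketch. It assumes on-demand (pay-per-request) capacity; the key schema matches what the loader script and the Lambda function expect:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand table keyed by patient_id (Number), matching the loader
# script's items and the Lambda function's Key('patient_id').eq() query
dynamodb.create_table(
    TableName="DummyPatientTable",
    AttributeDefinitions=[{"AttributeName": "patient_id", "AttributeType": "N"}],
    KeySchema=[{"AttributeName": "patient_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Block until the table is ready before running the loader script
dynamodb.get_waiter("table_exists").wait(TableName="DummyPatientTable")
```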
### 2.2 Create the Agent

1. Go to Amazon Bedrock > Agents.
2. Click Create Agent.
3. Name it get-patient-data-dynamodb.
4. Choose Create and use a new service role.
5. For the model, select Amazon Nova Micro (or any other Amazon Nova model).
6. For instructions, write this:

```
When the user submits a query, determine whether they are asking for patient-specific information (based on a patient ID or number) or general disease-related information.
If the query includes a patient ID (e.g., a number), call the `get_patient_record` function from the PatientRecords action group. The `patient_id` is the same as the `patient_record` identifier.
If a matching record is found, return the patient's details in a clear, formatted manner, with each attribute (ID, condition, status) on its own line.
If no record is found, respond with: "No record exists for that patient."
Always return available information without asking for confirmation.
If the query is about a disease or symptoms (such as COVID-19 or asthma), fetch the answer from the linked Knowledge Base.
Use best judgment to interpret the query. Do not prompt the user for clarification. Always respond with the most relevant information based on what is available.
```

Next, add an action group named PatientRecords:

1. For the action group type, choose Define with Function Details and create a new Lambda function.
2. Choose get_patient_record as the action group function.
3. Set patient_id as an input parameter.
4. Save it.

### 2.3 Code for the Lambda Function

Open the newly created Lambda function and add the code below:

```python
import json
import boto3
from boto3.dynamodb.conditions import Key

# Initialize DynamoDB resource
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('DummyPatientTable')

def patient_detail(payload):
    try:
        patient_id = int(payload['parameters'][0]['value'])
        # Query the DynamoDB table for the patient ID
        resp = table.query(
            KeyConditionExpression=Key('patient_id').eq(patient_id)
        )
        if resp['Items']:
            return {"status": "success", "data": resp['Items'][0]}
        else:
            return {"status": "error", "message": "No record found"}
    except Exception as e:
        return {"status": "error", "message": str(e)}

def lambda_handler(event, context):
    response_body = patient_detail(event)
    function_response = {
        'responseBody': {
            'TEXT': {
                # default=str handles the Decimal values DynamoDB returns
                # for numeric attributes, which json.dumps cannot serialize
                'body': json.dumps(response_body, default=str)
            }
        }
    }
    action_response = {
        'messageVersion': '1.0',
        'response': {
            'actionGroup': event['actionGroup'],
            'function': event['function'],
            'functionResponse': function_response
        },
        'sessionAttributes': event.get('sessionAttributes', {}),
        'promptSessionAttributes': event.get('promptSessionAttributes', {})
    }
    return action_response
```
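You can exercise this function on its own from the Lambda console before wiring up the agent. The test event below is trimmed to the fields the handler actually reads (the real event Bedrock sends also carries fields such as the agent metadata and session ID); the values are illustrative:

```json
{
  "messageVersion": "1.0",
  "actionGroup": "PatientRecords",
  "function": "get_patient_record",
  "parameters": [
    {
      "name": "patient_id",
      "type": "integer",
      "value": "123"
    }
  ],
  "sessionAttributes": {},
  "promptSessionAttributes": {}
}
```

If the table is loaded, the response body should contain the record for patient 123.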
### 2.4 Attach the Knowledge Base to the Agent

Here we have a choice: use the Knowledge Base as a separate node in Amazon Bedrock Flows, or attach it to the agent and let the agent decide when to use it. It depends on the use case; if the AI workflow is not complex, either approach works. I am attaching the Knowledge Base to the agent.

In the agent, go to the Knowledge Base tab and add your Knowledge Base.

*Knowledge Base attached to the agent.*

### 2.5 Test the Agent

Click the Prepare button, then test the agent from the left panel. Try:

- "What is the condition of patient 123?" It will fetch data from DynamoDB through the Lambda function.
- "What are the symptoms of COVID-19?" It will fetch the answer from the Knowledge Base.

## Step 3: Build the Amazon Bedrock Flow

Now we bring it all together in Amazon Bedrock Flows.

### 3.1 Create a Flow

1. Go to Amazon Bedrock and click Flows.
2. Click Create Flow and choose a meaningful name. A visual builder opens with input, output and prompt nodes.
3. Add an agent node.
4. Click the prompt node, choose Define in node, pick a model such as Amazon Nova Lite and add a prompt like:

```
Analyze the user input: {{input}}. If it contains any number, treat that number as a patient ID and pass the full input to the agent to retrieve patient information. If the input appears to be asking about a disease, symptoms or treatment, pass it to the agent to fetch disease-related information. Do not ask the user for clarification. Use your best judgment to decide the intent based on the input and route it accordingly.
```

5. Click the agent node and select your agent and alias.
6. Save the flow.

*Amazon Bedrock flow in the visual builder.*

### 3.2 Test the Output

Test the flow by asking questions. Try:

- "What is the condition of patient 123?"
- "What are the symptoms of COVID-19?"

*Testing the flow.*

### 3.3 Publishing, Versioning and Aliases

Save and exit the flow, then click Publish to create a new version. Navigate to the Alias section and create an alias that links to this version. Aliases let you switch between versions in production without code changes, providing seamless version management for your deployed flow.

## Step 4: Create a Streamlit App

Build a Streamlit application that integrates with your Amazon Bedrock flow to provide an intuitive user interface for interacting with your AI health agent.

Save the code below as streamlit-bedrock-flow.py:

```python
import boto3
import streamlit as st
from dotenv import load_dotenv
import os

# Load environment variables from .env
load_dotenv()
aws_access_key_id = os.getenv("AWS_ACCESS_KEY_ID")
aws_secret_access_key = os.getenv("AWS_SECRET_ACCESS_KEY")
region_name = os.getenv("AWS_REGION", "us-east-1")
FLOW_ID = os.getenv("FLOW_ID")
FLOW_ALIAS_ID = os.getenv("FLOW_ALIAS_ID")

# Initialize Streamlit app
st.title("Amazon Bedrock Flow integration")

# Initialize session state
if 'execution_id' not in st.session_state:
    st.session_state.execution_id = None
if 'input_required' not in st.session_state:
    st.session_state.input_required = None

# Create input format for Bedrock flow
def create_input_data(text, node_name="FlowInputNode", is_initial_input=True):
    data = {
        "content": {"document": text},
        "nodeName": node_name
    }
    if is_initial_input:
        data["nodeOutputName"] = "document"
    else:
        data["nodeInputName"] = "agentInputText"
    return data

# Invoke Bedrock Flow
def invoke_flow(client, flow_id, flow_alias_id, input_data, execution_id=None):
    request = {
        "flowIdentifier": flow_id,
        "flowAliasIdentifier": flow_alias_id,
        "inputs": [input_data],
        "enableTrace": True
    }
    if execution_id:
        request["executionId"] = execution_id

    response = client.invoke_flow(**request)

    flow_status = ""
    input_required = None
    execution_id = response.get('executionId', execution_id)

    for event in response['responseStream']:
        if 'flowCompletionEvent' in event:
            flow_status = event['flowCompletionEvent']['completionReason']
        elif 'flowMultiTurnInputRequestEvent' in event:
            input_required = event
        elif 'flowOutputEvent' in event:
            st.subheader("Response:")
            st.write(event['flowOutputEvent']['content']['document'])

    return {
        "flow_status": flow_status,
        "input_required": input_required,
        "execution_id": execution_id
    }

# Create Boto3 client
session = boto3.Session(
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
    region_name=region_name
)
client = session.client('bedrock-agent-runtime')

# Input section
if st.session_state.input_required:
    prompt = st.session_state.input_required['flowMultiTurnInputRequestEvent']['content']['document']
    user_input = st.text_input("Additional Info:", value=prompt)
else:
    user_input = st.text_input("Ask your question:")

# Submit button
if st.button("Submit"):
    if user_input:
        with st.spinner("Processing..."):
            if st.session_state.execution_id is None:
                input_data = create_input_data(user_input, is_initial_input=True)
            else:
                node_name = st.session_state.input_required['flowMultiTurnInputRequestEvent']['nodeName']
                input_data = create_input_data(user_input, node_name=node_name, is_initial_input=False)

            result = invoke_flow(client, FLOW_ID, FLOW_ALIAS_ID, input_data, st.session_state.execution_id)

            if result:
                st.session_state.execution_id = result['execution_id']
                if result['flow_status'] == "INPUT_REQUIRED":
                    st.session_state.input_required = result['input_required']
                else:
                    st.success("Flow completed.")
                    st.session_state.execution_id = None
                    st.session_state.input_required = None
    else:
        st.warning("Please enter something.")
```

The app reads the AWS credentials, FLOW_ID and FLOW_ALIAS_ID from environment variables.
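For reference, the .env file the app loads might look like this; all values are placeholders. You can copy the flow ID from the flow's details page and the alias ID from the Alias section:

```
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
AWS_REGION=us-east-1
FLOW_ID=YOUR_FLOW_ID
FLOW_ALIAS_ID=YOUR_FLOW_ALIAS_ID
```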
Run the app with `streamlit run streamlit-bedrock-flow.py`. It will look like this:

*Streamlit app.*

## Health Care AI, Simplified

We've just built a working AI health care assistant in hours instead of months. Amazon Bedrock Flows turns what used to require complex coding into a simple drag-and-drop process.

There's a bigger picture here, beyond health care. These same patterns work for legal research, financial advisory, education and customer service. We're entering an age where building AI applications is nearly as easy as creating a website, which means the next breakthrough might come from a small clinic or an individual practitioner rather than the usual tech giant. Anyone with domain expertise and the right tech experts on hand can now build the AI tools they need.