How to Deploy AI Agents with Amazon Bedrock AgentCore
Amazon Bedrock AgentCore — Quick Start Guide
Amazon Bedrock AgentCore is a fully managed AWS service to build, deploy, and operate AI agents securely at scale. It integrates seamlessly with popular frameworks — Strands Agents, LangGraph, CrewAI, and LlamaIndex — and handles complex operational tasks such as runtime setup, IAM role configuration, and observability.
This guide walks you through:
- Environment setup
- Creating and testing a local agent
- Deploying to AgentCore
- Invoking via AWS SDK
---
📑 Table of Contents
- Prerequisites
- Step 1 — AWS CLI Setup
- Step 2 — Install & Create Agent
- Virtual Environment
- Dependencies
- Agent Script
- Code Walkthrough
- Step 3 — Local Testing
- Step 4 — Deploy to AgentCore
- Step 5 — Invoke via AWS SDK
- Step 6 — Clean Up
- Troubleshooting
- Conclusion
- References
---
Prerequisites
Ensure you have:
- AWS account with CLI credentials
- AWS CLI installed
- Python 3.10+
- `boto3` installed
- Bedrock model access enabled (e.g., Anthropic Claude Sonnet 4)
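You can confirm the Python and `boto3` prerequisites from a terminal before proceeding (commands assume a POSIX shell):

```bash
python3 --version                                  # should report 3.10 or newer
python3 -m pip show boto3 || python3 -m pip install boto3
```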
---
Step 1 — AWS CLI Setup
Install the AWS CLI by following the official installation docs.
Configure credentials:
```bash
aws configure
```
Or set up AWS SSO:
```bash
aws configure sso --profile my-profile
```
You will be prompted for:
- SSO start URL
- SSO region
- AWS Account ID
- Role name
- Default region & output format
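Once the profile is configured, sign in and, optionally, make it the default for the current shell (the profile name `my-profile` matches the example above):

```bash
aws sso login --profile my-profile   # opens the browser to complete SSO sign-in
export AWS_PROFILE=my-profile        # later AWS commands pick up this profile
```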
---
✅ Verify Authentication
```bash
aws sts get-caller-identity --profile my-profile
```
This returns:
- Account ID
- UserId (IAM User/Role)
- ARN
If this works, you’re ready to use the AWS SDK or Bedrock AgentCore.
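The same check works from Python, which also confirms that `boto3` sees your credentials. A minimal sketch, assuming the `my-profile` profile from above:

```python
import boto3

# Uses the named profile; drop profile_name to use the default credential chain.
session = boto3.Session(profile_name="my-profile")
identity = session.client("sts").get_caller_identity()

print(identity["Account"], identity["Arn"])
```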
---
Step 2 — Install & Create Agent
Virtual Environment
Create and activate a Python virtual environment:
macOS/Linux:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
Windows:
```cmd
python -m venv .venv
.venv\Scripts\activate
```
Deactivate with:
```bash
deactivate
```
---
Dependencies
Create `requirements.txt`. Alongside the runtime wrapper and the agent framework, include the starter toolkit, which provides the `agentcore` CLI used in Step 4:

```text
bedrock-agentcore
bedrock-agentcore-starter-toolkit
strands-agents
```
Install:
```bash
pip install -r requirements.txt
```
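After installing, confirm the starter toolkit put the `agentcore` command on your PATH:

```bash
agentcore --help
```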
---
Agent Script
Create `my_agent.py`:
```python
from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()
agent = Agent()

@app.entrypoint
def invoke(payload):
    """AI agent function"""
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

if __name__ == "__main__":
    app.run()
```
---
Code Walkthrough
- `BedrockAgentCoreApp()` → wraps the agent in the HTTP service contract the AgentCore Runtime expects
- `Agent()` → creates a default Strands agent
- `@app.entrypoint` → marks the function that handles each invocation request
- Reads `"prompt"` from the payload, falling back to a default greeting
- Returns the agent’s reply under the `"result"` key
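Because `invoke` is an ordinary Python function, you can also sanity-check it without starting the server. Note that this still calls Bedrock, so it assumes valid credentials and model access:

```python
# Quick check from a Python shell in the same directory as my_agent.py.
from my_agent import invoke

print(invoke({"prompt": "Say hi in five words."}))
```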
---
Step 3 — Local Testing
Run:
```bash
python3 -u my_agent.py
```
In separate terminal:
```bash
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello!"}'
```
Expected:
```json
{"result": "Hello! I'm here to help..."}
```
Stop with `Ctrl+C`.
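If you prefer Python to curl, the same request can be sent with the standard library alone:

```python
import json
import urllib.request

# Mirrors the curl call above against the locally running agent.
req = urllib.request.Request(
    "http://localhost:8080/invocations",
    data=json.dumps({"prompt": "Hello!"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read().decode()))
```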
---
Step 4 — Deploy to AgentCore
Configure:
```bash
agentcore configure -e my_agent.py
```
This creates `bedrock_agentcore.yaml`, which stores the deployment configuration.
Launch:
```bash
agentcore launch
```
Output includes:
- Agent ARN
- CloudWatch Logs location
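The starter toolkit also includes a `status` command for re-checking the deployment later:

```bash
agentcore status   # shows the runtime's current state and endpoint details
```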
---
Test the Deployed Agent
```bash
agentcore invoke '{"prompt": "tell me a joke"}'
```
If you get a joke back, deployment succeeded.
---
Step 5 — Invoke via AWS SDK
Create `invoke_agent.py`:
```python
import json

import boto3

# Replace with the ARN printed by `agentcore launch`.
agent_arn = "YOUR_AGENT_ARN"
prompt = "Tell me a joke"

# Data-plane client for invoking deployed agent runtimes.
agent_core_client = boto3.client("bedrock-agentcore")

payload = json.dumps({"prompt": prompt}).encode()

response = agent_core_client.invoke_agent_runtime(
    agentRuntimeArn=agent_arn,
    payload=payload,
)

# The response body is a stream; collect and decode its chunks.
content = []
for chunk in response.get("response", []):
    content.append(chunk.decode("utf-8"))

print(json.loads("".join(content)))
```
Run:
```bash
python invoke_agent.py
```
You should see the agent’s response.
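For multi-turn use, `invoke_agent_runtime` also accepts a `runtimeSessionId` so related requests share one session. A sketch under that assumption; AWS expects the session ID to be a long unique string, hence the doubled UUID:

```python
import json
import uuid

import boto3

client = boto3.client("bedrock-agentcore")

# A long, unique ID groups related invocations into one session.
session_id = uuid.uuid4().hex + uuid.uuid4().hex

response = client.invoke_agent_runtime(
    agentRuntimeArn="YOUR_AGENT_ARN",  # same ARN as above
    runtimeSessionId=session_id,
    payload=json.dumps({"prompt": "Tell me another joke"}).encode(),
)

print(json.loads(b"".join(response["response"]).decode()))
```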
---
Step 6 — Clean Up
Remove deployed runtime:
```bash
aws bedrock-agentcore-control delete-agent-runtime \
  --agent-runtime-id <YOUR_AGENT_RUNTIME_ID>
```

The runtime ID is the final segment of the agent ARN. Note that deletion lives in the control-plane namespace (`bedrock-agentcore-control`), not the data-plane `bedrock-agentcore` namespace used for invocation.
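If you no longer have the runtime ID, list the deployed runtimes first. Launching may also have created an ECR repository for the agent image; the repository name below is a placeholder, so verify it in your account before deleting:

```bash
# Find the runtime ID if needed.
aws bedrock-agentcore-control list-agent-runtimes

# Optional: remove the container image repository created during launch.
aws ecr delete-repository --repository-name <your-agentcore-repo> --force
```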
---
Troubleshooting
- Permission denied → Check AWS credentials/IAM policies
- Docker warning → Ignore unless using `--local` build
- Model access denied → Enable model in Bedrock console
- Build errors → Check CloudWatch logs & IAM permissions
---
Conclusion
Amazon Bedrock AgentCore makes AI agent deployment fast and simple — no manual container setup needed. Prototype locally, deploy with one command, and monitor performance via CloudWatch.
---