Learn how to design specialized agents with custom tool sets
This guide demonstrates how to create custom agents tailored for specific use cases. Using the planning agent as a concrete example, you’ll learn how to design specialized agents with custom tool sets, system prompts, and configurations that optimize performance for particular workflows.
The example showcases a two-phase workflow where a custom planning agent (with read-only tools) analyzes tasks and creates structured plans, followed by an execution agent that implements those plans with full editing capabilities.
```python
#!/usr/bin/env python3
"""Planning Agent Workflow Example

This example demonstrates a two-stage workflow:
1. Planning Agent: Analyzes the task and creates a detailed implementation plan
2. Execution Agent: Implements the plan with full editing capabilities

The task: Create a Python web scraper that extracts article titles and URLs
from a news website, handles rate limiting, and saves results to JSON.
"""

import os
import tempfile
from pathlib import Path

from pydantic import SecretStr

from openhands.sdk import LLM, Conversation
from openhands.sdk.llm import content_to_str
from openhands.tools.preset.default import get_default_agent
from openhands.tools.preset.planning import get_planning_agent


def get_event_content(event):
    """Extract content from an event."""
    if hasattr(event, "llm_message"):
        return "".join(content_to_str(event.llm_message.content))
    return str(event)


# Run the planning agent workflow example.

# Create a temporary workspace
workspace_dir = Path(tempfile.mkdtemp())
print(f"Working in: {workspace_dir}")

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
    usage_id="agent",
)

# Task description
task = """Create a Python web scraper with the following requirements:
- Scrape article titles and URLs from a news website
- Handle HTTP errors gracefully with retry logic
- Save results to a JSON file with timestamp
- Use requests and BeautifulSoup for scraping

Do NOT ask for any clarifying questions. Directly create your implementation plan.
"""

print("=" * 80)
print("PHASE 1: PLANNING")
print("=" * 80)

# Create Planning Agent with read-only tools
planning_agent = get_planning_agent(llm=llm)

# Create conversation for planning
planning_conversation = Conversation(
    agent=planning_agent,
    workspace=str(workspace_dir),
)

# Run planning phase
print("Planning Agent is analyzing the task and creating implementation plan...")
planning_conversation.send_message(
    f"Please analyze this web scraping task and create a detailed "
    f"implementation plan:\n\n{task}"
)
planning_conversation.run()

print("\n" + "=" * 80)
print("PLANNING COMPLETE")
print("=" * 80)
print(f"Implementation plan saved to: {workspace_dir}/PLAN.md")

print("\n" + "=" * 80)
print("PHASE 2: EXECUTION")
print("=" * 80)

# Create Execution Agent with full editing capabilities
execution_agent = get_default_agent(llm=llm, cli_mode=True)

# Create conversation for execution
execution_conversation = Conversation(
    agent=execution_agent,
    workspace=str(workspace_dir),
)

# Prepare execution prompt with reference to the plan file
execution_prompt = f"""Please implement the web scraping project according to the implementation plan.

The detailed implementation plan has been created and saved at: {workspace_dir}/PLAN.md

Please read the plan from PLAN.md and implement all components according to it.
Create all necessary files, implement the functionality, and ensure everything
works together properly.
"""

print("Execution Agent is implementing the plan...")
execution_conversation.send_message(execution_prompt)
execution_conversation.run()

# Get the last message from the conversation
execution_result = execution_conversation.state.events[-1]

print("\n" + "=" * 80)
print("EXECUTION RESULT:")
print("=" * 80)
print(get_event_content(execution_result))

print("\n" + "=" * 80)
print("WORKFLOW COMPLETE")
print("=" * 80)
print(f"Project files created in: {workspace_dir}")

# List created files
print("\nCreated files:")
for file_path in workspace_dir.rglob("*"):
    if file_path.is_file():
        print(f"  - {file_path.relative_to(workspace_dir)}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
```
You can run the example code as-is.
The model name should follow the LiteLLM convention of `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).
`LLM_API_KEY` should be set to the API key for your chosen provider.
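For example, pointing the same script at a different provider only requires different values for these environment variables. The snippet below is a minimal sketch that mirrors the `LLM(...)` construction from the example above; `openai/gpt-4o` is used purely as an illustrative model name.

```python
import os

from pydantic import SecretStr

from openhands.sdk import LLM

# Same construction as in the example above, but targeting a different provider.
# "openai/gpt-4o" follows the LiteLLM provider/model_name convention.
llm = LLM(
    model=os.getenv("LLM_MODEL", "openai/gpt-4o"),
    base_url=os.getenv("LLM_BASE_URL"),            # optional, e.g. for a proxy
    api_key=SecretStr(os.environ["LLM_API_KEY"]),  # key for the chosen provider
    usage_id="agent",
)
```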
ChatGPT Plus/Pro subscribers: You can use LLM.subscription_login() to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.
Custom agents can use specialized system prompts to guide behavior. The planning agent uses `system_prompt_planning.j2`, which injects a plan structure that enforces the following sections (a small validation sketch follows the list):
- **Objective**: Clear goal statement
- **Context Summary**: Relevant system components and constraints
- **Approach Overview**: High-level strategy and rationale
- **Implementation Steps**: Detailed step-by-step execution plan
- **Testing and Validation**: Verification methods and success criteria
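If you want to sanity-check that a generated plan actually follows this structure before handing it to the execution agent, a small stdlib-only check is enough. The sketch below is illustrative rather than part of the SDK, and it assumes the plan was written to `PLAN.md` in the example's `workspace_dir` using the section names listed above.

```python
from pathlib import Path

# Illustrative check (not part of the SDK): confirm the generated PLAN.md
# contains every section the planning prompt is expected to enforce.
REQUIRED_SECTIONS = [
    "Objective",
    "Context Summary",
    "Approach Overview",
    "Implementation Steps",
    "Testing and Validation",
]

plan_path = workspace_dir / "PLAN.md"  # workspace_dir comes from the example above
plan_text = plan_path.read_text()
missing = [name for name in REQUIRED_SECTIONS if name not in plan_text]
if missing:
    raise RuntimeError(f"PLAN.md is missing sections: {missing}")
```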