LangGraph Bird Classification Agent - Complete Guide
A beginner-friendly guide to setting up and using the bird classification agent system.
Table of Contents
👋 **First time here?** Start with Installation → Configuration → CLI Commands. 🐦 **Want interactive chat?** Jump to eBird Server Setup.
- What is This?
- Prerequisites
- Installation
- Configuration
- Usage Guide
- eBird Server Setup β Important for interactive mode
- CLI Commands
- Programmatic Usage
- Advanced Configuration
- Troubleshooting
- Testing
- Examples
What is This?
This is an intelligent AI agent that can:
- Identify birds from images (via URLs or uploads)
- Answer questions about bird species
- Find bird sightings near locations (when connected to eBird server)
- Recommend birding hotspots
- Hold conversations with memory of previous messages
Technology Stack:
- LangGraph: Agent framework
- LangChain: LLM integration
- MCP (Model Context Protocol): Tool integration
- Modal: GPU-powered bird classifier
- OpenAI GPT: Agent reasoning
Prerequisites
Required
Python 3.11+
```bash
python --version  # Should be 3.11 or higher
```
**Virtual Environment (recommended)**
```bash
python -m venv .venv-hackathon
source .venv-hackathon/bin/activate  # Mac/Linux
# or: .venv-hackathon\Scripts\activate  # Windows
```
**API Keys:**
- OpenAI API Key (for GPT models) - Get from: https://platform.openai.com/api-keys
- Modal Bird Classifier URL (from Phase 1 deployment)
- Bird Classifier API Key (from Modal secrets)
Optional (for multi-server setup)
- eBird API Key (for bird data) - Get from: https://ebird.org/api/keygen
- eBird MCP Server running locally or deployed
Installation
Step 1: Navigate to Directory
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
Step 2: Activate Virtual Environment
source ../.venv-hackathon/bin/activate
Step 3: Install Dependencies
# Install all required packages
pip install -r langgraph_agent/requirements.txt
What gets installed:
- `langchain`: LLM framework
- `langchain-openai`: OpenAI integration
- `langgraph`: Agent framework
- `langchain-mcp-adapters`: MCP protocol support
- `fastmcp`: MCP client/server
- `python-dotenv`: Environment variable management
Step 4: Verify Installation
python -c "from langgraph_agent import AgentFactory; print('✅ Installation successful!')"
If you see `✅ Installation successful!`, you're ready to proceed!
Configuration
Step 1: Understand the Configuration File
The agent uses langgraph_agent/config.py to manage all settings. It reads from environment variables in your .env file.
Step 2: Create/Update .env File
Navigate to the project root:
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
Edit .env file (create if it doesn't exist):
nano .env # or use your favorite editor
Step 3: Add Required Variables
Minimal Configuration (Classifier Only):
# OpenAI API Key (Required)
OPENAI_API_KEY=sk-proj-your-openai-api-key-here
# Modal Bird Classifier (Required)
MODAL_MCP_URL=https://yourname--bird-classifier-mcp-web.modal.run/mcp
BIRD_CLASSIFIER_API_KEY=your-modal-api-key-here
# Agent Settings (Optional - uses defaults if not set)
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0
Full Configuration (With eBird Server):
# OpenAI API Key
OPENAI_API_KEY=sk-proj-your-openai-api-key-here
# Modal Bird Classifier
MODAL_MCP_URL=https://yourname--bird-classifier-mcp-web.modal.run/mcp
BIRD_CLASSIFIER_API_KEY=your-modal-api-key-here
# eBird MCP Server
EBIRD_MCP_URL=http://localhost:8000/mcp
EBIRD_USE_STDIO=false
MCP_API_KEY=your-ebird-mcp-key # Optional, for production
# eBird API (if running eBird server)
EBIRD_API_KEY=your-ebird-api-key
# Agent Settings
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0
AGENT_MAX_ITERATIONS=10
AGENT_TIMEOUT=120
Step 4: Save and Verify Configuration
Save the file and verify it loads correctly:
python -c "from langgraph_agent import AgentConfig; AgentConfig.validate(); AgentConfig.print_config()"
Expected Output:
```
Agent Configuration
Modal URL: https://yourname--bird-classifier-mcp-web.modal.run/mcp
Modal API Key: your-modal-api-key...
eBird URL: http://localhost:8000/mcp
eBird Stdio: False
LLM Model: gpt-4o-mini
Temperature: 0.0
```
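Under the hood, validation like this usually amounts to checking that the required environment variables are present. A minimal, hypothetical sketch of that idea (the real `AgentConfig.validate()` in `config.py` may differ in names and behavior):

```python
import os

# Hypothetical required settings, taken from the minimal .env shown above.
REQUIRED_VARS = ["OPENAI_API_KEY", "MODAL_MCP_URL", "BIRD_CLASSIFIER_API_KEY"]

def validate_env(env) -> list:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = validate_env(os.environ)
if missing:
    print(f"Missing required settings: {', '.join(missing)}")
else:
    print("Configuration looks complete")
```

Running this before creating an agent gives a clearer error than a failed connection later.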
---
## Usage Guide
### Understanding Agent Types
The system provides **two types of agents**:
> **IMPORTANT FOR BEGINNERS:** The classifier agent (type 1) works immediately after installation. The multi-server agent (type 2) requires the eBird server to be running first. See [eBird Server Setup](#ebird-server-setup) below.
#### 1. Classifier Agent (Simple)
- **Purpose:** Identify birds from images
- **Tools:** 2 (classify_from_url, classify_from_base64)
- **Best for:** Quick bird identification
- **MCP Servers:** Modal classifier only
#### 2. Multi-Server Agent (Advanced)
- **Purpose:** Full bird identification + data exploration
- **Tools:** 9 (2 from Modal + 7 from eBird)
- **Best for:** Conversational bird exploration
- **MCP Servers:** Modal classifier + eBird data
---
### Visual Guide: Which Agent Type Do I Use?
**CLASSIFIER AGENT (Simple)**

What you need:
- ✅ Modal MCP URL (from Phase 1)
- ✅ Modal API Key
- ✅ OpenAI API Key
- ❌ eBird server (NOT needed)

What it can do:
- ✅ Identify birds from image URLs
- ✅ Identify birds from base64 images
- ❌ Find bird sightings near locations
- ❌ Get eBird hotspot information

How to use:
```bash
python -m langgraph_agent demo
python langgraph_agent/test_agent.py
```
**MULTI-SERVER AGENT (Advanced)**

What you need:
- ✅ Modal MCP URL
- ✅ Modal API Key
- ✅ OpenAI API Key
- ✅ eBird API Key
- ✅ eBird MCP server running (see setup guide below)

What it can do:
- ✅ Everything from the Classifier Agent, plus:
- ✅ Find recent bird sightings near any location
- ✅ Discover birding hotspots
- ✅ Get notable sightings
- ✅ Search for specific species
- ✅ Conversational memory

How to use:
```bash
# 1. Start eBird server
python ebird_tools.py --http --port 8000
# 2. Run interactive mode
python -m langgraph_agent interactive
# 3. Or run the multi-server tests
python langgraph_agent/test_agent.py multi
```
---
## eBird Server Setup
### When Do You Need the eBird Server?
**You DON'T need the eBird server for:**
- ✅ `python -m langgraph_agent demo` (classifier only)
- ✅ `python langgraph_agent/test_agent.py` (basic classifier tests)
- ✅ Creating classifier agents in your own code
**You DO need the eBird server for:**
- ✅ `python -m langgraph_agent interactive` (multi-server chat)
- ✅ `python langgraph_agent/test_agent.py multi` (multi-server tests)
- ✅ Creating multi-server agents in your own code
- ✅ Gradio app (`app.py`)
---
### Two Ways to Run eBird Server
#### **Option 1: stdio (Recommended for Local Development)** ⭐

**Advantages:**
- ✅ No manual server management
- ✅ Agent auto-starts/stops the server
- ✅ One terminal window only
- ✅ Simpler for beginners
**Setup:**
1. **Add to .env:**
```bash
EBIRD_USE_STDIO=true
EBIRD_API_KEY=your-ebird-api-key-here
```
2. **That's it!** Just run your agent:
```bash
python -m langgraph_agent interactive  # eBird server starts automatically!
```
#### **Option 2: HTTP (For Production/Manual Control)**

**Advantages:**
- ✅ Better for production deployment
- ✅ One server serves multiple agents
- ✅ More control over the server lifecycle

**Setup:**

1. **Configure .env:**
```bash
EBIRD_USE_STDIO=false
EBIRD_MCP_URL=http://localhost:8000/mcp
EBIRD_API_KEY=your-ebird-api-key-here
```

2. **Terminal 1 - Start eBird Server:**
```bash
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
source ../.venv-hackathon/bin/activate
python ebird_tools.py --http --port 8000
```

3. **Verify server (new terminal):**
```bash
curl http://localhost:8000/health
# Should return: {"status": "ok"}
```

4. **Terminal 2 - Run Agent:**
```bash
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
source ../.venv-hackathon/bin/activate
python -m langgraph_agent interactive
```
### ✅ MCP_API_KEY Authentication Verification

Yes, `app.py` is properly configured to use the eBird MCP API key. Here's the authentication flow:

**Authentication Chain:**

1. `.env:11` defines `MCP_API_KEY=test-api-key`
2. `config.py:21` loads it as `AgentConfig.MCP_API_KEY`
3. `mcp_clients.py:63-71` uses it for eBird HTTP authentication:
```python
# HTTP transport (server running separately)
ebird_headers = {}
if AgentConfig.MCP_API_KEY:
    ebird_headers["Authorization"] = f"Bearer {AgentConfig.MCP_API_KEY}"
servers_config["ebird"] = {
    "transport": "streamable_http",
    "url": AgentConfig.EBIRD_MCP_URL,
    "headers": ebird_headers if ebird_headers else None
}
```
4. `app.py:18` calls `AgentFactory.create_streaming_agent()`, which calls `MCPClientManager.create_multi_server_client()`, which includes the authenticated eBird connection.

**Current setup (from .env):**
- `EBIRD_USE_STDIO=false`: using HTTP mode
- `MCP_API_KEY=test-api-key`: auth key for the eBird server
- Authentication header: `Authorization: Bearer test-api-key`
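The header-assembly step in that chain is simple enough to isolate. This sketch mirrors the `mcp_clients.py` excerpt above; the function name is illustrative, not the module's actual API:

```python
# Build the eBird auth headers the same way mcp_clients.py does:
# a Bearer token when MCP_API_KEY is set, otherwise no headers at all.
def build_ebird_headers(mcp_api_key):
    if mcp_api_key:
        return {"Authorization": f"Bearer {mcp_api_key}"}
    return None

print(build_ebird_headers("test-api-key"))
print(build_ebird_headers(None))  # no key configured -> None
```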
Troubleshooting eBird Server
Problem: "Connection refused" or "All connection attempts failed"
Cause: eBird server is not running
Solution:
1. Check that the server terminal is still running
2. Restart the server: `python ebird_tools.py --http --port 8000`
3. Verify with: `curl http://localhost:8000/health`
Problem: "Address already in use" when starting server
Cause: Port 8000 is already taken
Solution 1: Stop the existing process
# Find process using port 8000
lsof -ti:8000
# Kill it
kill -9 $(lsof -ti:8000)
# Start server again
python ebird_tools.py --http --port 8000
Solution 2: Use a different port
# Start on different port
python ebird_tools.py --http --port 8001
# Update .env
EBIRD_MCP_URL=http://localhost:8001/mcp
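If you would rather let the machine pick a port than guess at free ones, Python's standard library can do it. This helper is not part of the project; it is a small, generic sketch you could adapt:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """True if we can bind the given TCP port right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def find_free_port(host="127.0.0.1"):
    """Ask the OS for any free TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]

if not port_is_free(8000):
    print(f"Port 8000 busy; try --port {find_free_port()}")
```

Remember to update `EBIRD_MCP_URL` in `.env` to whatever port you end up using.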
Problem: "EBIRD_API_KEY not set"
Cause: Missing eBird API key in .env
Solution:
1. Get an API key from: https://ebird.org/api/keygen
2. Add to `.env`: `EBIRD_API_KEY=your-key-here`
3. Restart the eBird server
Quick Reference: Server vs No Server
| Command | Needs eBird Server? | What It Does |
|---|---|---|
| `python -m langgraph_agent demo` | ❌ No | Test classifier with single image |
| `python -m langgraph_agent demo [url]` | ❌ No | Test classifier with custom image |
| `python -m langgraph_agent interactive` | ✅ Yes | Chat with full agent (classifier + eBird) |
| `python langgraph_agent/test_agent.py` | ❌ No | Run classifier tests (2 images) |
| `python langgraph_agent/test_agent.py multi` | ✅ Yes | Run multi-server tests (requires eBird) |
CLI Commands
Running the CLI
The agent package can be run as a Python module from the project root:
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
python -m langgraph_agent [command] [options]
Command 1: Demo Mode (Default)
Purpose: Quick test with a single image classification
Usage:
# Use default test image
python -m langgraph_agent demo
# Or specify your own image URL
python -m langgraph_agent demo "https://example.com/bird.jpg"
What happens:
- Loads configuration from `.env`
- Prints config summary
- Creates classifier agent
- Classifies the bird image
- Prints results
Example Output:
```
Agent Configuration
Modal URL: https://jakeworkoutharder-dev--bird-classifier...
Modal API Key: b1363a05f1ef1a8c...
LLM Model: gpt-4o-mini
Temperature: 0.0

[STATUS]: Connecting to Modal MCP server...
[STATUS]: Loading MCP tools...
[LOADED]: 2 tools - ['classify_from_base64', 'classify_from_url']
[STATUS]: Creating LangGraph agent...
[SUCCESS]: Agent ready!

======================================================================
Testing bird classification...
[IMAGE URL]: https://images.unsplash.com/photo-1445820200644...

[AGENT RESPONSE]: The bird in the image is a Grandala with a confidence score of 99.9%! What a beautiful bird!

[DEMO COMPLETE!]
```
---
### Command 2: Interactive Mode
> **⚠️ IMPORTANT:** This command requires the eBird server to be running first! See [eBird Server Setup](#ebird-server-setup) for step-by-step instructions.
**Purpose:** Chat with the agent in real-time
**Usage:**
```bash
python -m langgraph_agent interactive
What happens:
- Creates multi-server agent with memory
- Opens interactive chat
- Maintains conversation history
- Type queries, get responses
- Quit with 'exit' or 'quit'
Example Session:
```
Bird Classification Agent - Interactive Mode

Commands:
- Type 'quit' or 'exit' to end session
- Paste image URLs to classify birds
- Ask about bird locations, sightings, hotspots

[STATUS]: Connecting to Modal and eBird MCP servers...
[STATUS]: Loading MCP tools...
[LOADED]: 9 tools - ['classify_from_base64', 'classify_from_url', 'search_species', 'get_recent_sightings_nearby', ...]
[STATUS]: Creating multi-server LangGraph agent...
[SUCCESS]: Agent ready with all tools!

You: What bird is this? https://example.com/cardinal.jpg

Agent: This is a Northern Cardinal with 98.7% confidence! These beautiful red birds are common across North America and known for their bright plumage and distinctive crest.

You: Where can I see them near Boston?

Agent: Northern Cardinals have been spotted recently near Boston! Here are some locations:
- Mount Auburn Cemetery (15 sightings this week)
- Arnold Arboretum (12 sightings)
- Fresh Pond Reservation (8 sightings)

You: quit
Goodbye! Happy birding!
```
**Key Features:**
- 🧠 **Remembers conversation**: references previous messages
- 🔧 **Auto-selects tools**: the agent decides which tools to use
- 💬 **Natural language**: talk like you would to a person
---
## Programmatic Usage
### Import the Package
From any Python file in your project:
```python
import asyncio
from langgraph_agent import AgentFactory, AgentConfig
Example 1: Simple Bird Classification
import asyncio
from langgraph_agent import AgentFactory
async def classify_bird():
    # Create classifier agent
    agent = await AgentFactory.create_classifier_agent()

    # Classify a bird
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "What bird is this? https://example.com/bird.jpg"
        }]
    })

    # Print response
    print(result["messages"][-1].content)

# Run
asyncio.run(classify_bird())
Example 2: Conversational Agent with Memory
import asyncio
from langgraph_agent import AgentFactory
async def conversation():
    # Create agent with memory
    agent = await AgentFactory.create_classifier_agent(with_memory=True)

    # Thread ID for conversation tracking
    config = {"configurable": {"thread_id": "user_123"}}

    # Turn 1: Classify bird
    result1 = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "Identify this: https://example.com/cardinal.jpg"
        }]
    }, config)
    print("Turn 1:", result1["messages"][-1].content)

    # Turn 2: Ask follow-up (agent remembers the bird!)
    result2 = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "What color is this bird?"
        }]
    }, config)
    print("Turn 2:", result2["messages"][-1].content)

asyncio.run(conversation())
Output:
Turn 1: This is a Northern Cardinal with 98.7% confidence!
Turn 2: The Northern Cardinal is bright red, which we identified
in the previous image. Males are vibrant red while females are
brown with red highlights.
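The `thread_id` works because the agent's memory keys saved message history by thread, so separate sessions never see each other's context. A dependency-free toy model of that idea (not LangGraph's actual checkpointer implementation):

```python
from collections import defaultdict

# Toy thread-scoped memory: each thread_id gets its own message list.
class ThreadMemory:
    def __init__(self):
        self._threads = defaultdict(list)

    def append(self, thread_id, role, content):
        self._threads[thread_id].append({"role": role, "content": content})

    def history(self, thread_id):
        return list(self._threads[thread_id])

memory = ThreadMemory()
memory.append("user_123", "user", "Identify this: cardinal.jpg")
memory.append("user_123", "assistant", "Northern Cardinal, 98.7% confidence")
memory.append("user_456", "user", "Hello")

print(len(memory.history("user_123")))  # 2 - only this thread's turns
print(len(memory.history("user_456")))  # 1
```

This is why reusing the same `thread_id` lets the agent answer "What color is this bird?" without re-sending the image.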
Example 3: Custom Configuration
import asyncio
from langgraph_agent import AgentFactory
async def custom_agent():
    # Create agent with custom settings
    agent = await AgentFactory.create_classifier_agent(
        model_name="gpt-4o",  # Use more powerful model
        temperature=0.3,      # Add creativity
        with_memory=True      # Enable conversation memory
    )

    # Use the agent...
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "Classify and tell me a fun fact about this bird: URL"
        }]
    })
    print(result["messages"][-1].content)

asyncio.run(custom_agent())
Example 4: Multi-Server Agent (Classifier + eBird)
import asyncio
from langgraph_agent import AgentFactory
async def multi_server_agent():
    # Create agent with both MCP servers
    agent = await AgentFactory.create_multi_server_agent(
        with_memory=True
    )

    config = {"configurable": {"thread_id": "session_1"}}

    # Agent can now use 9 tools across 2 servers!
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "What bird is this? Where can I see it near NYC? https://example.com/bird.jpg"
        }]
    }, config)
    print(result["messages"][-1].content)

asyncio.run(multi_server_agent())
What the agent does:
1. Uses `classify_from_url` to identify the bird
2. Uses `search_species` to get the species code
3. Uses `get_recent_sightings_nearby` to find sightings near NYC
4. Formats the response with all the information
Example 5: Check Configuration
from langgraph_agent import AgentConfig
# Print current configuration
AgentConfig.print_config()
# Access specific values
print(f"Using model: {AgentConfig.DEFAULT_MODEL}")
print(f"Modal URL: {AgentConfig.MODAL_MCP_URL}")
# Validate configuration
try:
    AgentConfig.validate()
    print("✅ Configuration is valid!")
except ValueError as e:
    print(f"❌ Configuration error: {e}")
Advanced Configuration
Changing the LLM Model
Edit .env file:
# Use GPT-4o (more powerful, slower, more expensive)
LLM_MODEL=gpt-4o
# Use GPT-4o-mini (faster, cheaper, default)
LLM_MODEL=gpt-4o-mini
# Use GPT-3.5 (cheapest, fastest, less capable)
LLM_MODEL=gpt-3.5-turbo
Or override in code:
agent = await AgentFactory.create_classifier_agent(
    model_name="gpt-4o"  # Override default
)
Adjusting Temperature
Temperature controls creativity:
- `0.0` = deterministic, factual (recommended for classification)
- `0.5` = balanced
- `1.0` = creative, varied responses
In .env:
LLM_TEMPERATURE=0.3
Or in code:
agent = await AgentFactory.create_classifier_agent(
    temperature=0.3
)
Configuring eBird Server Transport
Option 1: HTTP Transport (default)
EBIRD_USE_STDIO=false
EBIRD_MCP_URL=http://localhost:8000/mcp
Start eBird server separately:
python ebird_tools.py --http --port 8000
Option 2: Stdio Transport
EBIRD_USE_STDIO=true
Agent will automatically start eBird server as subprocess.
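The two transports boil down to one boolean switch. A hedged sketch of how a client config could branch on `EBIRD_USE_STDIO` (keys mimic the MCP client config shown earlier in this guide; the real `mcp_clients.py` may structure this differently):

```python
# Illustrative config builder; the exact schema is up to mcp_clients.py.
def ebird_server_config(env):
    use_stdio = env.get("EBIRD_USE_STDIO", "false").lower() == "true"
    if use_stdio:
        # Agent launches the eBird server itself as a subprocess
        return {"transport": "stdio", "command": "python", "args": ["ebird_tools.py"]}
    # Server runs separately; agent connects over HTTP
    return {
        "transport": "streamable_http",
        "url": env.get("EBIRD_MCP_URL", "http://localhost:8000/mcp"),
    }

print(ebird_server_config({"EBIRD_USE_STDIO": "true"})["transport"])   # stdio
print(ebird_server_config({"EBIRD_USE_STDIO": "false"})["transport"])  # streamable_http
```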
Troubleshooting
Issue 1: "ModuleNotFoundError: No module named 'langgraph_agent'"
Cause: Package not in Python path
Solution:
# Make sure you're in the correct directory
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
# Verify package exists
ls langgraph_agent/
# Try import again
python -c "from langgraph_agent import AgentFactory"
Issue 2: "ValueError: MODAL_MCP_URL not set in .env"
Cause: Missing or incorrect environment variables
Solution:
# Check .env file exists
ls -la .env
# Verify contents
cat .env | grep MODAL_MCP_URL
# Make sure it's set correctly (no quotes, no trailing slash)
MODAL_MCP_URL=https://your-url-here/mcp
Issue 3: "401 Unauthorized" when calling Modal
Cause: Incorrect API key
Solution:
# Verify API key in .env matches Modal secret
cat .env | grep BIRD_CLASSIFIER_API_KEY
# Get correct key from Modal
modal secret list
# Update .env with correct key
Issue 4: Agent is slow (30+ seconds per request)
Cause: Using expensive model or large conversation history
Solution 1: Use faster model
# In .env
LLM_MODEL=gpt-4o-mini # Instead of gpt-4o
Solution 2: Limit conversation history
from langchain_core.messages import trim_messages

# Trim to the most recent messages before invoking.
# token_counter can be your chat model instance or any
# callable that counts tokens for a list of messages.
messages = trim_messages(
    state["messages"],
    max_tokens=2000,
    strategy="last",
    token_counter=llm,
)
Issue 5: "Too many tools loaded" or incorrect tool count
Cause: Wrong server configuration
Expected tool counts:
- Classifier only: 2 tools
- Multi-server: 9 tools (2 + 7)
Solution:
# Check which agent type you're creating
# For classifier only:
agent = await AgentFactory.create_classifier_agent()
# For multi-server:
agent = await AgentFactory.create_multi_server_agent()
Testing
Component Testing (Individual Parts)
Before testing agents, verify each component works individually.
Test Modal Classifier (Step 1)
Verify Modal deployment:
modal app list
Test with curl:
curl -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json, text/event-stream" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {
"name": "test",
"version": "1.0"
}
}
}' \
https://yourname--bird-classifier-mcp-web.modal.run/mcp
✅ **Success:** Modal responds with a JSON result
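The same `initialize` handshake can be assembled in Python. This just builds and prints the JSON-RPC payload from the curl example above; no network call is made:

```python
import json

# JSON-RPC 2.0 "initialize" request, identical to the curl -d body above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "test", "version": "1.0"},
    },
}

body = json.dumps(payload)
print(body)
```

You can POST `body` to your Modal `/mcp` URL with any HTTP client, keeping the same `X-API-Key` and `Accept` headers as in the curl example.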
Agent Testing (Combined System)
Test 1: Basic Classifier Agent (No eBird Server Needed)
This test works immediately after installation - no eBird server required!
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
# Activate virtual environment
source ../.venv-hackathon/bin/activate
# Test basic classifier agent (2 bird images)
python langgraph_agent/test_agent.py
Expected Output:
```
Test Suite: Basic Classifier Agent

[STATUS]: Connecting to Modal MCP server...
[STATUS]: Loading MCP tools...
[LOADED]: 2 tools - ['classify_from_base64', 'classify_from_url']
[STATUS]: Creating LangGraph agent...
[SUCCESS]: Agent ready!

[TEST 1/2]
[RESULT]: The bird in the image is a Jandaya Parakeet! I have a high confidence score of 99.84% in this identification.

[TEST 2/2]
[RESULT]: The bird in the image is identified as a Grandala with a confidence score of 99.9%! What a beautiful bird!

[ALL TESTS COMPLETE!]
```
---
#### Test 2: Multi-Server Agent (Requires eBird Server)
This test requires the eBird server to be running first!
**Step-by-Step:**
1. **Terminal 1 - Start eBird Server:**
```bash
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
source ../.venv-hackathon/bin/activate
# Start eBird server (keep this running)
python ebird_tools.py --http --port 8000
```
2. **Terminal 2 - Run Multi-Server Test:**
```bash
cd /Users/jacobbinder/Desktop/hackathon/hackathon_draft
source ../.venv-hackathon/bin/activate
# Run multi-server test
python langgraph_agent/test_agent.py multi
```
Expected Output:
```
Test Suite: Multi-Server Agent

[STATUS]: Connecting to Modal and eBird servers...
[STATUS]: Loading MCP tools...
[LOADED]: 9 tools available
  - classify_from_base64
  - classify_from_url
  - search_species
  - get_recent_sightings_nearby
  - get_notable_sightings_nearby
  - get_recent_checklists_nearby
  - get_regional_statistics
  - list_hotspots_nearby
  - get_checklist_details

[TEST 1]: Classify bird from URL
[RESULT]: The bird in the image is a Jandaya Parakeet with 99.84% confidence!

[TEST 2]: Follow-up question (tests memory)
[RESULT]: You can see Jandaya Parakeets near Boston (42.36, -71.06):
  - Arnold Arboretum (3 sightings this month)
  - Mount Auburn Cemetery (2 sightings)
  Note: These are exotic birds, not native to the area.

[ALL TESTS COMPLETE!]
```
**What This Tests:**
- β
Multi-server connection (Modal + eBird)
- β
Tool integration (9 tools from both servers)
- β
Conversation memory (follow-up question)
- β
Cross-server reasoning (classify β location lookup)
---
## Examples
### Complete Example: Bird Identification App
```python
"""
Simple bird identification CLI app
"""
import asyncio
from langgraph_agent import AgentFactory
async def main():
    print("🐦 Bird Identification App")
    print("=" * 50)

    # Create agent
    print("\n[1/3] Initializing agent...")
    agent = await AgentFactory.create_classifier_agent()

    # Get image URL from user
    print("\n[2/3] Waiting for input...")
    image_url = input("Enter bird image URL: ").strip()

    if not image_url:
        print("❌ No URL provided")
        return

    # Classify
    print("\n[3/3] Classifying...")
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": f"Identify this bird: {image_url}"
        }]
    })

    # Display result
    print("\n" + "=" * 50)
    print("RESULT:")
    print("=" * 50)
    print(result["messages"][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
```
Run it:
python my_bird_app.py
Quick Reference
Import Patterns
# Import everything
from langgraph_agent import (
    AgentFactory,
    AgentConfig,
    create_bird_agent,
    create_multi_agent,
    MCPClientManager,
    get_prompt_for_agent_type
)
# Or import as needed
from langgraph_agent import AgentFactory
Create Agent (Simple)
# Minimal
agent = await AgentFactory.create_classifier_agent()
# With memory
agent = await AgentFactory.create_classifier_agent(with_memory=True)
# Custom model
agent = await AgentFactory.create_classifier_agent(
    model_name="gpt-4o",
    temperature=0.3
)
Create Agent (Multi-Server)
# Default (with memory)
agent = await AgentFactory.create_multi_server_agent()
# Custom settings
agent = await AgentFactory.create_multi_server_agent(
    model_name="gpt-4o",
    temperature=0.5,
    with_memory=True
)
Invoke Agent
# Single turn
result = await agent.ainvoke({
"messages": [{"role": "user", "content": "Your message"}]
})
# With memory (multi-turn)
config = {"configurable": {"thread_id": "session_1"}}
result = await agent.ainvoke({
"messages": [{"role": "user", "content": "Your message"}]
}, config)
Summary
✅ **Installation**: Install dependencies, set up `.env`
✅ **Configuration**: Edit `.env` with API keys and URLs
✅ **Usage**: CLI commands or Python imports
✅ **CLI**: `demo` for testing (no eBird), `interactive` for chat (needs eBird)
✅ **Programmatic**: Import `AgentFactory`, create agents, invoke
✅ **Testing**: Run `test_agent.py` to verify setup
Ready to build? Start with these commands (no eBird server needed):
# Test the classifier
python -m langgraph_agent demo
# Run basic tests
python langgraph_agent/test_agent.py
Want multi-server features? See eBird Server Setup above.
Questions? Check the HuggingFace Deployment Guide or Phase 3 Documentation.
Happy birding! 🐦