Building Your First LangChain Agent in 10 Minutes

If you’ve been curious about AI agents but felt overwhelmed by the complexity, you’re in the right place. In this tutorial, we’ll build a functional LangChain agent from scratch in just 10 minutes. No prior experience with LangChain required—just basic Python knowledge.

What is a LangChain Agent?

Before we dive in, let’s understand what we’re building. A LangChain agent is an AI system that can reason about problems, decide which tools to use, and take actions to solve tasks. Think of it as giving your AI the ability to actually do things rather than just chat.

For example, an agent can:

  • Search the web for current information
  • Perform calculations
  • Query databases
  • Chain multiple actions together to solve complex problems

Prerequisites

You’ll need:

  • Python 3.8 or higher installed
  • An OpenAI API key (sign up at platform.openai.com if you don’t have one)

Step 1: Install Required Libraries

First, open your terminal and install LangChain and its dependencies (ideally inside a virtual environment so you don't touch your system Python):

pip install langchain langchain-openai langchain-community

Step 2: Set Up Your Environment

Create a new Python file called my_first_agent.py and add the imports and your OpenAI API key. For this tutorial we set the key as an environment variable directly in the script; see the note below for safer options:

import os
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain import hub

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

Important: Never hardcode API keys in production. Use environment variables or a secrets manager instead.
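
As a minimal sketch of that safer pattern, you can read the key from your shell environment instead of writing it into the script. ChatOpenAI picks up OPENAI_API_KEY automatically, so nothing else changes; if you keep the key in a .env file, the optional python-dotenv package can load it for you.

import os

# Fail fast if the key is missing rather than hardcoding it in the script.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this script.")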

Step 3: Create Your First Tool

Tools are functions that your agent can use. Let’s create a simple calculator tool:

def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # Note: eval is convenient for a demo but unsafe on untrusted input (see Common Pitfalls below)
        result = eval(expression)
        return f"The result is: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

# Wrap it as a LangChain tool
calculator_tool = Tool(
    name="Calculator",
    func=calculator,
    description="Useful for performing mathematical calculations. Input should be a valid Python mathematical expression like '25 * 4' or '100 / 3'."
)

The description is crucial—it tells the agent when and how to use this tool.
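
Before handing the tool to an agent, it's worth a quick sanity check. In recent LangChain versions every tool is a Runnable and exposes invoke (older releases use run), so a one-liner like this should confirm it works:

print(calculator_tool.invoke("25 * 4"))  # The result is: 100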

Step 4: Add a Search Tool (Optional but Powerful)

Let’s add another tool that searches for information. We’ll create a simple mock search for this tutorial:

def search(query: str) -> str:
    """Search for information (mock implementation)."""
    # In a real application, you'd integrate with Google, DuckDuckGo, etc.
    responses = {
        "python": "Python is a high-level programming language known for its simplicity and readability.",
        "langchain": "LangChain is a framework for developing applications powered by language models.",
    }
    return responses.get(query.lower(), f"No information found for '{query}'")

search_tool = Tool(
    name="Search",
    func=search,
    description="Useful for finding information about topics. Input should be a search query."
)

Step 5: Initialize the Language Model

Now let’s set up the LLM that will power our agent:

llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0  # Lower temperature for more focused responses
)

We’re using GPT-3.5-turbo for speed and cost-effectiveness, but you can use GPT-4 for more complex reasoning.

Step 6: Create the Agent

LangChain provides pre-built agent templates. We’ll use the ReAct (Reasoning + Acting) pattern:

# Get the ReAct prompt template from LangChain hub
prompt = hub.pull("hwchase17/react")

# Combine our tools
tools = [calculator_tool, search_tool]

# Create the agent
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=prompt
)

# Create an executor to run the agent
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,  # This shows the agent's thinking process
    handle_parsing_errors=True
)

Step 7: Test Your Agent

Now for the exciting part—let’s see it in action:

# Example 1: Simple calculation
response = agent_executor.invoke({
    "input": "What is 157 multiplied by 89?"
})
print(response["output"])

# Example 2: Using multiple tools
response = agent_executor.invoke({
    "input": "What is LangChain? Also, calculate 2500 divided by 25."
})
print(response["output"])

# Example 3: Complex reasoning
response = agent_executor.invoke({
    "input": "If I have 15 apples and give away 40% of them, how many do I have left?"
})
print(response["output"])

Complete Working Code

Here’s the full code you can copy and run:

import os
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain import hub

# Set your API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Define tools
def calculator(expression: str) -> str:
    try:
        result = eval(expression)
        return f"The result is: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

def search(query: str) -> str:
    responses = {
        "python": "Python is a high-level programming language.",
        "langchain": "LangChain is a framework for LLM applications.",
    }
    return responses.get(query.lower(), f"No information found for '{query}'")

calculator_tool = Tool(
    name="Calculator",
    func=calculator,
    description="Useful for mathematical calculations. Input: valid Python expression."
)

search_tool = Tool(
    name="Search",
    func=search,
    description="Useful for finding information. Input: search query."
)

# Initialize LLM
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Create agent
prompt = hub.pull("hwchase17/react")
tools = [calculator_tool, search_tool]
agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True
)

# Test it
response = agent_executor.invoke({
    "input": "What is 157 multiplied by 89?"
})
print(response["output"])

Understanding the Output

When you run this with verbose=True, you’ll see the agent’s thought process:

> Entering new AgentExecutor chain...
I need to multiply two numbers
Action: Calculator
Action Input: 157 * 89
Observation: The result is: 13973
Thought: I now know the final answer
Final Answer: 13973

The agent:

  1. Reasoned about what it needed to do
  2. Chose the right tool (Calculator)
  3. Used the tool correctly
  4. Returned the answer

Next Steps: Making Your Agent More Powerful

Now that you have a working agent, here are some ways to enhance it:

Add Real Web Search: Replace the mock search with actual web search using DuckDuckGo or SerpAPI (see the sketch after these suggestions).

Create Custom Tools: Build tools specific to your needs—database queries, API calls, file operations.

Memory: Add conversation memory so your agent remembers previous interactions.

Error Handling: Improve error handling for production use.

Multiple Agents: Create specialized agents that work together on complex tasks.
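
To follow up on the web search idea: the sketch below swaps the mock search for LangChain's DuckDuckGo integration. It assumes the separate duckduckgo-search package is installed (pip install duckduckgo-search) and that DuckDuckGoSearchRun is still exported from langchain_community.tools in your version, so treat it as a starting point rather than a guaranteed drop-in.

from langchain_community.tools import DuckDuckGoSearchRun

# A real web search tool; pass this to the agent in place of the mock search_tool.
web_search = DuckDuckGoSearchRun()

search_tool = Tool(
    name="Search",
    func=web_search.run,
    description="Useful for finding current information on the web. Input should be a search query."
)

Everything else in the agent setup stays the same.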

Common Pitfalls to Avoid

Vague Tool Descriptions: The agent relies on descriptions to choose tools. Be specific about what each tool does and what input format it expects.

No Guardrails: For production, add input validation to your tools to prevent unexpected behavior (a hardened calculator sketch follows these pitfalls).

Ignoring Costs: Each agent call uses tokens. Monitor your API usage, especially when testing.

Over-Complicating: Start simple. Add complexity only when needed.
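
To make the guardrails point concrete, here is one way to harden the calculator tool: validate the input against a character whitelist and strip builtins from eval. This is a minimal sketch, not a full sandbox; for production you would likely replace eval with a proper expression parser.

import re

ALLOWED_CHARS = re.compile(r"^[0-9+\-*/().\s]+$")

def safe_calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression after validating its characters."""
    if not ALLOWED_CHARS.match(expression):
        return "Error: only digits, spaces, and + - * / ( ) . are allowed."
    try:
        # No builtins and no variables: the expression can't reach functions or modules.
        result = eval(expression, {"__builtins__": {}}, {})
        return f"The result is: {result}"
    except Exception as e:
        return f"Error: {str(e)}"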

Troubleshooting

“Module not found” errors: Make sure you installed all packages with the correct package names. If hub.pull raises an import error, older LangChain versions also need pip install langchainhub.

Agent not using tools: Check your tool descriptions—they might not be clear enough for the LLM to understand when to use them.

Parsing errors: Set handle_parsing_errors=True in the AgentExecutor to gracefully handle these.

Conclusion

Congratulations! You’ve just built your first LangChain agent. In just a few lines of code, you created an AI system that can reason, use tools, and solve problems autonomously.

This is just the beginning. LangChain agents can be extended to handle incredibly complex workflows—from customer service automation to research assistants to data analysis pipelines.

The key is starting simple, understanding the fundamentals, and iterating from there. Now that you understand the basics, you’re ready to explore more advanced agent patterns and build something amazing.


Want to learn more about AI, Python, and cutting-edge tech? Head over to ToolTechSavvy.com where we cover everything from AI agents and machine learning to practical tutorials and industry insights. Whether you’re a beginner or an experienced developer, you’ll find valuable content to level up your tech skills.
