Welcome to Day 2 of the Task Helper Agent (Gemini Edition) beginner series!
Yesterday, you created your project structure. Today, you’ll bring your agent to life by connecting it to Google AI Studio and making your first real API call to Gemini.
By the end of this tutorial, you will:
- Create a free Gemini API key
- Install the required dependencies
- Wire up the `llm_client.py` file
- Successfully send your first AI request to Gemini
- Verify everything is working with a quick test script
Let’s jump in.
1. Create Your Free Gemini API Key
To connect your project to Google’s Gemini models, you’ll need an API key.
Step-by-step:
- Visit Google AI Studio
- Sign in with a Google account
- Go to the API Keys tab
- Click Create API Key
- Copy the key to a safe place
This is a developer API key, not your login password.
It only grants access to Gemini models — nothing else.
Add Your API Key to Your Environment
Your project will read the key from an environment variable called GEMINI_API_KEY.
macOS / Linux
export GEMINI_API_KEY="your-api-key-here"
Windows PowerShell
setx GEMINI_API_KEY "your-api-key-here"
Restart your terminal afterward so the variable loads correctly.
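If you want to confirm the variable is visible to Python itself, not just your shell, a tiny check script like this works. (The `check_api_key` helper name is my own, not part of the project.)

```python
import os


def check_api_key() -> bool:
    """Return True if GEMINI_API_KEY is visible to this Python process."""
    return bool(os.environ.get("GEMINI_API_KEY"))


if __name__ == "__main__":
    if check_api_key():
        print("GEMINI_API_KEY is set. You're good to go.")
    else:
        print("GEMINI_API_KEY is missing. Re-run the export/setx command.")
```

Run it in the same terminal you'll use for the rest of the tutorial, so you're testing the environment your scripts will actually see.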
2. Install Dependencies
Your requirements.txt already contains the necessary library:
google-genai>=0.1.0
Install it by running:
pip install -r requirements.txt
This installs the official Python SDK for Gemini.
3. Wire Up llm_client.py Using google-genai
Now let’s update your Gemini client wrapper to make your first real text generation call.
Open:
src/llm_client.py
Replace everything with this:
```python
from typing import Optional

from google import genai

from config import MODEL_NAME


def _get_client() -> genai.Client:
    """
    Create a Gemini client.
    It will automatically pick up GEMINI_API_KEY from the environment.
    """
    return genai.Client()  # uses GEMINI_API_KEY env var


def generate_text(prompt: str, model: Optional[str] = None) -> str:
    """
    Send a simple text prompt to Gemini and return the response text.
    """
    client = _get_client()
    model_name = model or MODEL_NAME
    response = client.models.generate_content(
        model=model_name,
        contents=prompt,
    )
    # response.text is a helper that concatenates the text parts
    return response.text
```
What this does:
- Initializes a Gemini client
- Sends your prompt to `gemini-2.5-flash` (or the model of your choice)
- Returns the plain text output
This is the heart of your AI agent — the connection to the LLM.
4. Create a Quick Test Script
Let’s verify everything works before integrating it into your agent.
Create a new file:
src/test_gemini.py
Add this code:
```python
from llm_client import generate_text


def main():
    reply = generate_text("Say hello in one short sentence.")
    print("Gemini says:", reply)


if __name__ == "__main__":
    main()
```
This keeps your test isolated and clean.
5. Run the Test
Run the script from your project root:

python src/test_gemini.py

Running it this way puts the src/ folder on Python's import path, so the plain `from llm_client import generate_text` line resolves correctly.
If everything is set up correctly, you should see something like:
Gemini says: Hello! Nice to meet you.
🎉 Congratulations — you just made your first real Gemini API call!
Your agent now has a working AI brain.
6. Troubleshooting (Common Issues)
Even advanced developers run into these on Day 2. Here’s how to fix them quickly:
❌ ModuleNotFoundError: No module named 'google'
Fix:
pip install google-genai
Make sure you’re installing it in the same Python environment running your script.
❌ Missing GEMINI_API_KEY
Your system cannot find the key.
Fix:
- Re-run the `export` or `setx` command
- Restart your terminal
- Run:

echo $GEMINI_API_KEY        # macOS/Linux
echo $env:GEMINI_API_KEY    # Windows PowerShell
❌ Empty Response / Timeout
Check your internet connection first; if the connection is fine, try regenerating your API key in Google AI Studio.
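To keep an occasional empty response or timeout from crashing your agent, you can wrap the call in a small fallback helper. This is a hypothetical sketch (the `generate_with_fallback` name and its injection-style signature are mine, not part of the project); you would pass your `generate_text` function as the `generate` argument:

```python
from typing import Callable, Optional


def generate_with_fallback(
    prompt: str,
    generate: Callable[[str], Optional[str]],
    fallback: str = "(no response)",
) -> str:
    """Call a text-generation function, substituting a fallback on errors or empty output."""
    try:
        text = generate(prompt)
    except Exception:
        # Network hiccups and timeouts surface here; return the fallback instead of crashing.
        return fallback
    return text if text else fallback
```

In your own code this would look like `generate_with_fallback("Say hello", generate_text)`. Taking the generation function as a parameter also makes the helper easy to test with a stub, no API key required.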
Preview of Day 3 — Building Your First Planning Agent
Now that Gemini is working, tomorrow we will:
- Build the first real method in your agent: `plan_task()`
- Write a structured planning prompt
- Have Gemini break down goals into actionable steps
- Produce your first real task plan
By the end of Day 3, your Task Helper Agent will generate full plans from goals, not just stubs.
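As a small taste of Day 3, a structured planning call usually starts with a prompt builder. Here is a hypothetical sketch (the `build_planning_prompt` name and wording are illustrative, not the final Day 3 code):

```python
def build_planning_prompt(goal: str, max_steps: int = 5) -> str:
    """Assemble a structured planning prompt for Gemini (illustrative sketch)."""
    return (
        f"Break the following goal into at most {max_steps} actionable steps.\n"
        "Number each step and keep it to one short sentence.\n\n"
        f"Goal: {goal}"
    )
```

You would then feed the result to the `generate_text` function you wired up today.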