

Build a text analysis agent that accepts text and returns a structured analysis with a summary, key points, and a sentiment rating. It demonstrates:
  • The @agent decorator for metadata
  • Calling an LLM through ctx.llm.generate_object() for structured output
  • Returning structured data with ok()

Prerequisites

  • Friday Studio installed and running — daemon reachable at http://localhost:18080
  • Python 3.11+ and uv (for IDE support)
  • A text editor (VS Code recommended)

Step 1: Set up IDE support

You need the SDK installed locally for autocomplete and type checking. Create a Python environment and install the SDK:
# 1. Clone the SDK somewhere (one-time)
git clone git@github.com:friday-platform/agent-sdk.git ~/agent-sdk

# 2. Create a venv in your agent project directory
mkdir -p ~/my-agents && cd ~/my-agents
uv venv
source .venv/bin/activate
uv pip install -e ~/agent-sdk/packages/python
Create .vscode/settings.json in your agent directory:
{
  "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
  "python.analysis.typeCheckingMode": "basic"
}
Reload VS Code after creating the file. Cmd+click on AgentContext in the next step should jump to the SDK definition.

Step 2: Create the agent file

Create a directory for your agent anywhere on your machine:
mkdir -p ~/my-agents/text-analyzer
This agent accepts text input and returns structured analysis. It uses:
  • A @dataclass to define the output shape
  • ctx.llm.generate_object() to request structured JSON from an LLM
  • The host’s LLM provider — your code never handles API keys
Write ~/my-agents/text-analyzer/agent.py:
from dataclasses import dataclass
from friday_agent_sdk import agent, ok, AgentContext

@dataclass
class AnalysisResult:
    summary: str
    key_points: list[str]
    sentiment: str  # "positive", "negative", or "neutral"


@agent(
    id="text-analyzer",
    version="1.0.0",
    description="Analyzes text and returns structured summary, key points, and sentiment",
)
def execute(prompt: str, ctx: AgentContext):
    """Analyze the user's text using an LLM."""

    # Define the schema for structured output
    output_schema = {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "key_points": {
                "type": "array",
                "items": {"type": "string"},
            },
            "sentiment": {
                "type": "string",
                "enum": ["positive", "negative", "neutral"],
            },
        },
        "required": ["summary", "key_points", "sentiment"],
        "additionalProperties": False,
    }

    # Call Friday's LLM provider — your agent never sees the API key
    analysis_prompt = f"""Analyze the following text.

Text:
{prompt}

Provide a concise summary, 3-5 key points, and an overall sentiment."""

    result = ctx.llm.generate_object(
        messages=[{"role": "user", "content": analysis_prompt}],
        schema=output_schema,
        model="anthropic:claude-haiku-4-5",  # Fast and cost-effective
    )

    # result.object contains the parsed JSON matching our schema
    return ok(result.object)
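generate_object() returns the parsed JSON as a plain dict in result.object. If you prefer typed attribute access, you can unpack that dict into the AnalysisResult dataclass; a minimal sketch, with a hard-coded dict standing in for result.object:

```python
from dataclasses import dataclass


@dataclass
class AnalysisResult:
    summary: str
    key_points: list[str]
    sentiment: str  # "positive", "negative", or "neutral"


# Stand-in for result.object: a dict matching the output schema
parsed = {
    "summary": "Launch went well",
    "key_points": ["Shipped on time", "Faster load times"],
    "sentiment": "positive",
}

# Keys in the dict map 1:1 to dataclass fields, so ** unpacking works
analysis = AnalysisResult(**parsed)
print(analysis.sentiment)  # positive
```

Because the schema sets additionalProperties to false, the dict can never contain keys the dataclass lacks, so the ** unpacking cannot raise a TypeError for unexpected arguments.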

Step 3: Build and test

Register your agent with the daemon to build it:
curl -X POST http://localhost:18080/api/agents/register \
  -H "Content-Type: application/json" \
  -d '{"entrypoint": "/abs/path/to/agent.py"}'
The daemon builds the agent when it is registered. Verify the build succeeded:
friday logs --since 1m | grep -i "built agent"
# Built agent text-analyzer@1.0.0 from source
Test your agent with curl against the playground API on port 15200:
curl -s -X POST http://localhost:15200/api/execute \
  -H 'Content-Type: application/json' \
  -d '{
    "agentId": "text-analyzer",
    "input": "The new feature shipped on time and customers report faster load times. Support tickets are down 40%."
  }' | jq .
The response streams as SSE events. After a moment you see the result:
{
  "summary": "Product launch successful with measurable performance improvements",
  "key_points": [
    "Feature shipped on schedule",
    "Load times significantly improved",
    "Support tickets decreased by 40%"
  ],
  "sentiment": "positive"
}
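Because the schema declares required fields and an enum for sentiment, a response like the one above can be sanity-checked client-side with a few lines of standard-library Python; a sketch mirroring the schema from Step 2:

```python
import json

# The sample response from the playground API, as a raw JSON string
raw = """{
  "summary": "Product launch successful with measurable performance improvements",
  "key_points": [
    "Feature shipped on schedule",
    "Load times significantly improved",
    "Support tickets decreased by 40%"
  ],
  "sentiment": "positive"
}"""

result = json.loads(raw)

# Mirror the schema's "required" list and the sentiment enum
assert all(key in result for key in ("summary", "key_points", "sentiment"))
assert result["sentiment"] in {"positive", "negative", "neutral"}
# The prompt asks for 3-5 key points
assert 3 <= len(result["key_points"]) <= 5
print("response matches the schema")
```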
Try a different input:
curl -s -X POST http://localhost:15200/api/execute \
  -H 'Content-Type: application/json' \
  -d '{
    "agentId": "text-analyzer",
    "input": "The server crashed twice today. The database is throwing connection errors and the logs are incomprehensible."
  }' | jq .
Your agent classifies this as "sentiment": "negative".
You can also execute the agent through the Friday CLI. Add --json for raw NDJSON output, useful for piping to jq:
friday agent exec text-analyzer -i "analyze this text" --json | jq .
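NDJSON is one JSON object per line, so scripting against the --json output is straightforward. A minimal parsing sketch; the event field names here are illustrative placeholders, not Friday's actual event schema:

```python
import json

# Two illustrative events; real output comes from `friday agent exec ... --json`
ndjson = (
    '{"type": "progress", "message": "calling LLM"}\n'
    '{"type": "result", "data": {"sentiment": "positive"}}\n'
)

# One JSON object per line; skip any blank lines
events = [json.loads(line) for line in ndjson.splitlines() if line.strip()]

# Pick out the final result event
final = next(e for e in events if e["type"] == "result")
print(final["data"]["sentiment"])  # positive
```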

Step 4: Iterate

Edit ~/my-agents/text-analyzer/agent.py, then re-register and test:
curl -X POST http://localhost:18080/api/agents/register \
  -H "Content-Type: application/json" \
  -d '{"entrypoint": "/abs/path/to/agent.py"}'
curl -s -X POST http://localhost:15200/api/execute \
  -H 'Content-Type: application/json' \
  -d '{"agentId": "text-analyzer", "input": "test your changes"}' | jq .
This cycle — edit, re-register, test — is your development loop.
There’s a skill for that: the writing-friday-python-agents skill lets coding agents like Claude Code write and modify Friday agents directly, with correct imports and proper capability calls.
Bump the version to keep old builds available for rollback:
@agent(
    id="text-analyzer",
    version="1.0.1",  # Bumped from 1.0.0
    description="Analyzes text with an LLM",
)
Both versions are stored, but Friday resolves text-analyzer to the latest semver version (1.0.1).
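The resolution rule above (pick the highest stored semver) can be sketched as a plain tuple comparison. This mimics the behavior described here, not Friday's actual resolver:

```python
def resolve_latest(versions: list[str]) -> str:
    """Pick the highest semantic version, comparing each component numerically."""
    return max(versions, key=lambda v: tuple(int(part) for part in v.split(".")))


print(resolve_latest(["1.0.0", "1.0.1"]))   # 1.0.1
print(resolve_latest(["1.9.0", "1.10.0"]))  # 1.10.0 (numeric, not lexicographic)
```

The tuple conversion matters: comparing the raw strings would rank "1.9.0" above "1.10.0".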

Step 5: Register in a workspace (optional)

To use your agent within a Friday workspace (for planner routing, signals, and multi-agent orchestration), add it to your workspace’s workspace.yml:
workspace.yml
agents:
  text-analyzer:
    type: user
    agent: "text-analyzer"
    description: "Analyzes text and returns structured summary"
The type: user field tells Friday this is a custom Python agent. The agent: key must match the id in the @agent decorator. This step is not required for direct execution.

Advanced topics

For CI/CD pipelines or automation, register agents via the daemon API on port 18080:
curl -X POST http://localhost:18080/api/agents/register \
  -H "Content-Type: application/json" \
  -d '{"entrypoint": "/abs/path/to/agent.py"}' \
  | jq .
Error responses include the phase that failed (validate or write):
{
  "ok": false,
  "phase": "validate",
  "error": "description is required"
}
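A registration script can branch on the ok and phase fields shown above; a minimal sketch against that response shape:

```python
import json

# Example error body from the register endpoint (shape as documented above)
raw = '{"ok": false, "phase": "validate", "error": "description is required"}'
response = json.loads(raw)

if response["ok"]:
    print("registered")
else:
    # phase is "validate" or "write"; surface both the phase and the message
    print(f"registration failed in {response['phase']}: {response['error']}")
```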
Troubleshooting

  • Agent not found after registering — Check that registration returned {"ok": true, ...}. Verify the agent ID matches what you pass to the execute command. Run friday agent list --user to see all registered agents.
  • Build fails with syntax errors — The SDK uses pure Python dataclasses, no Pydantic. Ensure your type hints use standard library types only.
  • Build returns 400 — Your @agent decorator metadata failed validation. Required fields: id, version, description.
  • ImportError on third-party packages — Only the Python standard library and friday_agent_sdk are available in agent processes. You cannot import requests or import openai. Use ctx.http and ctx.llm instead.
  • Credentials not working — Verify your .env file contains ANTHROPIC_API_KEY, restart the platform, and re-register your agent:
curl -X POST http://localhost:18080/api/agents/register \
  -H "Content-Type: application/json" \
  -d '{"entrypoint": "/abs/path/to/agent.py"}'

Next steps

Call LLMs

Different models, structured output, and error handling.

Make HTTP requests

Fetch data from external APIs.

Use MCP tools

Invoke GitHub, databases, and other MCP servers.

How agents work

The subprocess model and host capabilities.