## Basic progress emission

```python
from friday_agent_sdk import agent, ok

@agent(id="long-task", version="1.0.0", description="Takes a while")
def execute(prompt, ctx):
    ctx.stream.progress("Starting analysis...")

    # Do work...
    result = ctx.llm.generate(...)

    ctx.stream.progress("Processing results...")

    # More work...
    data = process(result.text)

    ctx.stream.progress("Complete!")
    return ok({"data": data})
```
## Intent emission

Emit high-level intents for significant state changes:

```python
ctx.stream.intent("Analyzing repository structure")
# Walk directory tree...

ctx.stream.intent("Identifying issues")
# Run analysis...

ctx.stream.intent("Generating report")
```
## With Tool Context

Associate progress with specific tools:

```python
ctx.stream.progress("Fetching repository data", tool_name="GitHub")
# Call GitHub MCP tools...

ctx.stream.progress("Analyzing code patterns", tool_name="Analyzer")
# LLM analysis...

ctx.stream.progress("Creating summary", tool_name="Reporter")
```
## Real Example: Multi-Phase Agent

```python
from friday_agent_sdk import agent, ok, AgentExtras

@agent(id="analyzer", version="1.0.0", description="Multi-phase analysis")
def execute(prompt, ctx):
    # Phase 1: Extract parameters
    ctx.stream.progress("Parsing request")
    params = extract_params(prompt)

    # Phase 2: LLM preprocessing
    ctx.stream.progress("Running initial analysis", tool_name="LLM")
    analysis = ctx.llm.generate(
        messages=[{"role": "user", "content": f"Analyze: {params}"}],
        model="claude-haiku-4-5",
    )

    # Phase 3: Tool calls
    ctx.stream.progress("Fetching related data", tool_name="GitHub")
    issues = ctx.tools.call("search_issues", {"query": params["query"]})

    # Phase 4: Synthesis
    ctx.stream.progress("Synthesizing results", tool_name="Synthesizer")
    result = synthesize(analysis.text, issues)

    ctx.stream.progress("Analysis complete")
    return ok({
        "summary": result["summary"],
        "recommendations": result["recommendations"],
    })
```
## When to Emit

Emit progress when:

- Starting a distinct phase of work
- Before expensive operations (LLM calls, HTTP requests)
- After completing significant milestones
- When handling fallback scenarios ("Retrying with different model...")

Do not emit:

- In tight loops (debounce or batch instead)
- For trivial operations (< 100 ms)
- With excessively verbose detail ("Step 1 of 50", "Step 2 of 50", ...)
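The tight-loop advice above can be sketched with a small throttling wrapper. `ThrottledProgress` is a hypothetical helper, not part of the SDK; it assumes the emit function (e.g. `ctx.stream.progress`) is an ordinary callable, and simply drops messages that arrive too soon after the last one sent.

```python
import time

class ThrottledProgress:
    """Wrap an emit callable; drop messages arriving within `interval` seconds
    of the last emitted one. Hypothetical sketch, not SDK code."""

    def __init__(self, emit, interval=1.0, clock=time.monotonic):
        self._emit = emit
        self._interval = interval
        self._clock = clock
        self._last = float("-inf")  # so the first call always emits

    def __call__(self, message):
        now = self._clock()
        if now - self._last >= self._interval:
            self._emit(message)
            self._last = now
```

In a loop you would then call `progress = ThrottledProgress(ctx.stream.progress, interval=2.0)` once and invoke `progress(...)` per iteration, keeping the event rate bounded regardless of iteration count.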
## Emission during LLM calls

Emit progress before expensive operations: `ctx.stream.progress()` is fire-and-forget over NATS and does not block the handler:

```python
ctx.stream.progress("Starting LLM call...")
# The progress event has already been sent to connected clients.
result = ctx.llm.generate(messages, model="claude-sonnet-4-6")
ctx.stream.progress("LLM complete, processing...")
```
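The non-blocking behavior can be illustrated with a minimal queue-plus-background-thread sketch. This is not the SDK's actual transport (which publishes over NATS); `FireAndForgetStream` and its `publish` argument are hypothetical, and the sketch only shows the pattern: the caller enqueues and returns immediately, while delivery happens off the handler's thread.

```python
import queue
import threading

class FireAndForgetStream:
    """Sketch of fire-and-forget emission: progress() enqueues and returns
    immediately; a daemon thread drains the queue and publishes."""

    def __init__(self, publish):
        self._q = queue.Queue()
        self._publish = publish
        threading.Thread(target=self._drain, daemon=True).start()

    def progress(self, message):
        # Returns immediately; the caller never waits on network I/O.
        self._q.put(message)

    def _drain(self):
        while True:
            msg = self._q.get()
            self._publish(msg)
            self._q.task_done()
```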
## Raw Event Emission

For custom event types, use `emit()`:

```python
ctx.stream.emit("custom-event", {"phase": "validation", "count": 42})
```

The `data` parameter accepts either a dict (JSON-serialized) or a string.
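As a sketch of that dict-or-string contract, a hypothetical `encode_payload` helper (not SDK code) might normalize the payload before publishing like this:

```python
import json

def encode_payload(data):
    """Normalize an emit() payload: dicts are JSON-serialized,
    strings pass through unchanged. Hypothetical helper, not SDK code."""
    if isinstance(data, dict):
        return json.dumps(data)
    if isinstance(data, str):
        return data
    raise TypeError("data must be a dict or str")
```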
- **API Reference: ctx.stream** — full stream capability API reference
- **How Friday Agents Work** — the subprocess model and host capabilities