Class: Llm
Methods
generate()
Generate text from an LLM.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| messages | list[dict[str, str]] | Yes | Conversation messages with role and content |
| model | str \| None | No | Model identifier (resolution cascade applies) |
| max_tokens | int \| None | No | Maximum tokens to generate |
| temperature | float \| None | No | Sampling temperature (0.0 - 2.0) |
| provider_options | dict \| None | No | Provider-specific options passthrough |
Returns: LlmResponse
Raises: LlmError on generation failure
Example:
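A minimal sketch, assuming the Llm instance is reached through the parent AgentContext (the ctx.llm access path is an assumption, as is the .text attribute on the response); parameter names follow the table above:

```python
# Sketch only: ctx.llm and response.text are assumptions, not documented API.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the benefits of unit tests."},
]

def summarize(ctx):
    response = ctx.llm.generate(
        messages=messages,
        model="anthropic:claude-sonnet-4-6",  # fully qualified, used directly
        max_tokens=512,
        temperature=0.2,
    )
    return response.text  # assumption: LlmResponse carries the generated text
```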
generate_object()
Generate structured output conforming to a JSON Schema.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| messages | list[dict[str, str]] | Yes | Conversation messages |
| schema | dict | Yes | JSON Schema for output structure |
| model | str \| None | No | Model identifier |
| max_tokens | int \| None | No | Maximum tokens |
| temperature | float \| None | No | Sampling temperature |
| provider_options | dict \| None | No | Provider-specific options |
Returns: LlmResponse with .object populated
Raises: LlmError on generation failure
Example:
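A minimal sketch, under the same assumption that the Llm instance is reached as ctx.llm; the schema is ordinary JSON Schema:

```python
# Sketch only: the ctx.llm access path is an assumption; schema and
# parameter names follow the table above.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title"],
}

def extract_metadata(ctx, text):
    response = ctx.llm.generate_object(
        messages=[{"role": "user", "content": "Extract metadata from: " + text}],
        schema=schema,
    )
    return response.object  # .object holds the schema-conforming result
```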
Model Resolution
Resolution cascade (first match wins):

- Fully qualified per-call — model="anthropic:claude-sonnet-4-6" used directly
- Bare per-call + decorator default — model="claude-sonnet-4-6" + @agent(llm={"provider": "anthropic"}) resolved to the full identifier
- Decorator default only — @agent(llm={"provider": "anthropic", "model": "claude-sonnet-4-6"}) used when no model is specified
- Error — no model specified and no decorator default
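The cascade can be restated as a small stand-alone function; this is not SDK code, just an illustration of the four rules:

```python
def resolve_model(call_model=None, default_provider=None, default_model=None):
    """Illustrative re-statement of the resolution cascade (not SDK code)."""
    if call_model and ":" in call_model:
        return call_model                              # fully qualified per-call
    if call_model and default_provider:
        return default_provider + ":" + call_model     # bare per-call + decorator default
    if default_provider and default_model:
        return default_provider + ":" + default_model  # decorator default only
    raise ValueError("No model specified and no decorator default")
```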
LlmResponse
Error Handling
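A minimal sketch of catching LlmError, the exception named under Raises above. A stand-in class is defined so the snippet runs on its own; in agent code the real exception would come from the SDK (the exact import path is not documented on this page), and the .text attribute on the response is an assumption:

```python
class LlmError(Exception):
    """Stand-in for the SDK's LlmError so this sketch is self-contained."""

def safe_generate(llm, messages, fallback="(generation failed)"):
    # Sketch only: response.text is an assumption.
    try:
        return llm.generate(messages=messages).text
    except LlmError:
        return fallback
```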
Provider Options
Pass provider-specific configuration:

- systemPrompt — Either {"type": "preset", "preset": "..."} or {"type": "custom", "content": "..."}
- effort — "low", "medium", or "high"
- fallbackModel — Model to use if the primary fails
- repo — Repository to clone and work in
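A sketch of a provider_options payload using the keys listed above; which providers honor each key is not stated on this page, so treat the combination as illustrative:

```python
# Illustrative payload only; key names follow the list above.
provider_options = {
    "systemPrompt": {"type": "custom", "content": "You are a careful code reviewer."},
    "effort": "medium",                              # "low", "medium", or "high"
    "fallbackModel": "anthropic:claude-sonnet-4-6",  # used if the primary fails
}
```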
Message Format
Each message is a dict with role and content keys. Supported roles: system, user, assistant.
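A sketch of the expected message shape, one dict per turn using the three supported roles:

```python
# Each entry pairs one of the supported roles with its content.
messages = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
]
```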
Limitations
- No streaming responses — Full response returned at once; streaming is not yet supported
- 5MB implicit limit — imposed by platform constraints on response size
Why Host-Managed?
Friday agents run as Python subprocesses with only the standard library and friday_agent_sdk available — packages like openai and anthropic are not installed in the agent environment. Host capabilities provide the same functionality while Friday manages API keys, rate limits, and provider routing centrally.
See Also

- How to Call LLMs — task-oriented guide
- AgentContext — parent context object

