Two kinds of prompts
System Prompts
Added to an AI model step inside an agent. Sets the model’s persona, constraints, reasoning style, and access to runtime context. Evaluated before the user message every time the step runs.
User Prompts
Templates surfaced in chat-focused agent deployments (see Skills). They control how the user's message is structured before it reaches the model. Useful for scaffolding multi-field inputs.
The segment architecture
A prompt is not a single text block. It is an ordered list of segments that are joined top-to-bottom and sent to the model as one assembled string. This lets you compose reusable building blocks into agent-specific instructions without copy-pasting.
Consolidated Prompt View
A live, read-only assembly of all segments in order — exactly as the model will receive them at runtime. Use this to sanity-check spacing, flow, and variable placement before publishing.
Building Section
Where you author and arrange the segments that make up this agent’s prompt. Drag handles let you reorder; each segment can be expanded, collapsed, or removed independently.
Segment types in depth
Understanding the difference between custom and shared segments — and when to use each — is foundational to building maintainable agents.
Custom (agent-specific)
A custom segment belongs entirely to one agent. It is:
- Authored inline in the building section — no separate editor or admin page
- Versioned with the agent — when you publish a new version of the agent, the current state of all custom segments is captured as part of that agent version
- Invisible to other agents — it does not appear in the shared library and cannot be referenced elsewhere
- Freely editable at any time — no coordination with other teams required
Custom segments are marked IsAgentSpecific: true internally. This flag excludes them from the workspace prompt library so they never appear in another agent's segment picker.
When to use:
- You are building a new agent and iterating quickly
- The instructions are specific to this agent’s domain, persona, or step configuration
- You are prototyping something you may later promote to a shared segment
- The content is genuinely unique and will never be needed elsewhere
Versioning
Versioning is how Airia ensures prompt changes are intentional, traceable, and safe to roll out incrementally. The model differs between custom and shared segments.
Custom segment versioning
Custom segments do not have their own independent version history. Their content is captured as part of the agent version each time you publish. To see how a custom segment looked in a previous release, view the corresponding agent version in the agent's version history panel. This means:
- There is no v1, v2 concept for a custom segment in isolation
- Rollback means rolling back the entire agent version
- You can freely edit a custom segment between publishes without creating intermediate versions
Shared segment versioning
Shared segments have a first-class version lifecycle that is completely independent of any agent. How a new version is created: when you save a shared segment in the Prompts administration area, Airia compares the new content against the current latest version:
| What changed | What happens |
|---|---|
| Only the change description | The description is updated on the current version. No new version is created. Version number does not increment. |
| The prompt content (any text change) | A new version is created. The version number increments by 1. The new version becomes IsLatest: true. The previous version’s IsLatest flag is set to false. |
When you add a shared segment to an agent, you choose how its version is resolved:
| Pin mode | Behaviour |
|---|---|
| Specific version (e.g., v3) | The agent always uses exactly that version. New versions published to the shared segment have no effect until you manually update the pin. |
| Latest | The agent always uses whatever version is currently marked IsLatest. Publishing a new version of the shared segment takes effect immediately for this agent on the next execution. |
Pinning to a specific version is safer for production agents. Pinning to latest is convenient during development when a shared segment is actively being iterated.
PromptVariables: parameterising shared segments
PromptVariables solve a specific problem: you want a shared segment that is reused across many agents, but each agent needs to supply slightly different values to it.
Rather than maintaining many near-identical shared segments, you declare placeholders inside the shared segment using {{ PromptVariables.name }}, and each agent that includes the segment supplies its own value for name.
Shared segment definition (in Prompts administration):
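A minimal sketch of what such a definition might look like (the variable names product_name and tone are illustrative, not platform-defined):

```scriban
You are a support assistant for {{ PromptVariables.product_name }}.
Respond in a {{ PromptVariables.tone }} tone at all times.
```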
PromptVariables values are configured per-agent in the segment’s settings panel after you add it to the building section.
PromptVariables values are always strings. If you need conditional logic based on a PromptVariable, use a Scriban {{ if }} block in the shared segment.
Unlinking a shared segment
If you want to take a shared segment as a starting point and diverge from it for a specific agent:
- Click ··· on the shared segment in the building section
- Select Unlink
- The segment becomes a custom segment — its content is copied into the agent, and it is detached from the shared version history
- It will no longer receive updates from the shared segment and will not affect other agents
Template expressions
Every prompt in Airia is processed as a Scriban template before being sent to the model. Scriban is a lightweight templating language — expressions wrapped in {{ }} are evaluated at runtime and replaced with their values.
The prompt editor includes built-in autocomplete. Type {{ anywhere in a segment to browse all available variables, their properties, and control-flow snippets.
Syntax reference
| Pattern | What it does |
|---|---|
| {{ Variable }} | Output a single value |
| {{ Object.Property }} | Access a nested property |
| {{ Object.Array[0] }} | Access an array element by index (zero-based) |
| {{ for item in Array }}...{{ end }} | Iterate over a list |
| {{ if condition }}...{{ else }}...{{ end }} | Conditional content |
| {{- expression }} | Evaluate and trim whitespace before the tag |
| {{ expression -}} | Evaluate and trim whitespace after the tag |
| {{- expression -}} | Trim whitespace on both sides |
| {{ 'literal text' }} | Output a raw string without evaluation (useful for escaping literal {{ }}) |
Available variables
Eight top-level variables are injected into every agent prompt automatically at runtime. You never declare them — they are always available.
Execution
Metadata about the current run. Use this to adapt behaviour based on how and from where the agent was triggered.
| Property | Type | Description |
|---|---|---|
| Execution.UserInput | string | The raw text input that triggered this execution |
| Execution.ExecutionId | string | Unique identifier for this run — useful for logging and audit trails |
| Execution.ConversationId | string | Conversation session ID when running inside a chat interface |
| Execution.AgentId | string | ID of the currently executing agent |
| Execution.AgentName | string | Display name of the currently executing agent |
| Execution.ExecutionSource | string | The trigger source — see values below |
| Execution.SenderEmail | string | Email address of the sender (populated on email-triggered executions only) |
| Execution.FileMetadata | array | Metadata for files attached to this execution |
| Execution.ImageMetadata | array | Metadata for images attached to this execution |
Execution.ExecutionSource values
| Value | When it is set |
|---|---|
| Controller | Direct API call or execution from the Airia platform UI |
| SlackBot | Triggered via the Slack integration |
| TeamsBot | Triggered via Microsoft Teams |
| WhatsAppBot | Triggered via WhatsApp |
| ScheduledTrigger | Triggered by a cron / scheduled job |
| EmailTrigger | Triggered by an inbound email |
| WebhookTrigger | Triggered by an incoming webhook POST |
Execution.FileMetadata items
Each item in the FileMetadata array has:
| Property | Type | Description |
|---|---|---|
| FileId | string | Blob storage ID of the file |
| Name | string | Original filename |
| Content | string | Base64-encoded file content |
| ContentType | string | MIME type of the file |
Execution.ImageMetadata items
| Property | Type | Description |
|---|---|---|
| ImageId | string | Blob storage ID of the image |
| Name | string | Original image filename |
| Base64Data | string | Base64-encoded image data |
| ContentType | string | MIME type of the image |
Example — multi-channel adaptation
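A sketch that branches on the ExecutionSource values documented above (the formatting instructions themselves are illustrative):

```scriban
{{ if Execution.ExecutionSource == 'SlackBot' }}
Format replies for Slack and keep them under three short paragraphs.
{{ else if Execution.ExecutionSource == 'EmailTrigger' }}
Write a complete email reply addressed to {{ Execution.SenderEmail }}.
{{ else }}
Respond in standard markdown.
{{ end }}
```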
User
Information about the authenticated user who triggered the execution. Use this to personalise responses, apply role-based instructions, or route behaviour based on group membership.
| Property | Type | Description |
|---|---|---|
| User.Id | string | User's unique identifier |
| User.Email | string | User's email address |
| User.Name | string | User's full name |
| User.FirstName | string | User's first name |
| User.LastName | string | User's last name |
| User.Roles | string[] | Roles assigned to this user |
| User.Groups | string[] | Groups this user belongs to |
Example — role-aware instructions
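A sketch that branches on role membership using Scriban's built-in array.contains function (the Admin role name is illustrative):

```scriban
Hello {{ User.FirstName }}.
{{ if User.Roles | array.contains 'Admin' }}
You may surface internal diagnostics and configuration details.
{{ else }}
Only share end-user-facing information.
{{ end }}
```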
Helpers
Runtime utilities computed fresh each execution. Currently contains one property.
| Property | Type | Description |
|---|---|---|
| Helpers.CurrentDateTime | string | Current date and time in ISO 8601 format (UTC) |
Example
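A minimal use of the one available helper, anchoring the model to the current time:

```scriban
The current date and time is {{ Helpers.CurrentDateTime }} (UTC).
Resolve any relative dates in the user's request against this timestamp.
```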
Inputs
Output values from the steps directly connected to this step in the agent graph (i.e., the immediate parents in the flow). Step titles become the key, with every space replaced by an underscore.
Access pattern
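For example, if the immediately preceding step is an AI Model step titled Classify Intent (an illustrative name), its response is exposed as .Value and could be read as:

```scriban
The user's intent was classified as: {{ Inputs.Classify_Intent.Value }}
```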
Steps
Output values from all steps that have already executed in the agent, regardless of their position in the graph relative to the current step. Same key convention as Inputs.
Access pattern
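For example, an earlier AI Model step titled Summarize Ticket (an illustrative name) would be referenced by its underscored title:

```scriban
Earlier summary of the ticket:
{{ Steps.Summarize_Ticket.Value }}
```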
Access paths by step type
| Step type | Access pattern | What it contains |
|---|---|---|
| AI Model | Steps.Name.Value | The model’s full text response as a string |
| JSON Formatter | Steps.Name.Value.property | A fully navigable parsed JSON object |
| HTTP / SDK | Steps.Name.Output.Body.property | HTTP response body as a navigable object |
| Data Store Search | Steps.Name.Value | JSON string of retrieved documents |
| MarkItDown | Steps.Name.MarkdownContent | Extracted markdown text from a document |
Step names are case-sensitive and space-to-underscore converted. My Step → Steps.My_Step; my step would not match Steps.My_Step. Rename steps carefully in the builder — references in prompts must be updated manually.
Variables
Custom key-value pairs passed into the agent at execution time, defined in the agent’s Input step. Unlike Execution and User which are system-provided, Variables are explicitly configured per-agent to accept dynamic runtime parameters.
Examples
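A sketch assuming the agent's Input step declares keys such as region and customer_tier (illustrative names, not platform-defined):

```scriban
Prioritise documentation for the {{ Variables.region }} region.
{{ if Variables.customer_tier == 'enterprise' }}
Offer to open a priority support ticket.
{{ end }}
```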
PromptVariables
Placeholder values declared inside a shared segment and supplied by each individual agent that includes that segment. This is the mechanism for parameterising shared segments without duplicating them.
Examples
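A sketch from the consuming side, assuming the shared segment declares tone and team_name placeholders (illustrative names) that each agent fills in:

```scriban
Adopt a {{ PromptVariables.tone }} tone.
Sign off every response as {{ PromptVariables.team_name }}.
```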
InputSchema
The schema definition for the agent’s declared inputs. Useful in meta-prompting or validation steps that need to inspect the expected structure of inputs programmatically.
Example
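The exact shape of InputSchema is not documented here; a minimal sketch that simply surfaces it to the model for meta-prompting:

```scriban
The inputs this agent expects are defined by the following schema:
{{ InputSchema }}
```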
Control flow
For loops
Iterate over any array using for ... in ... end.
Syntax
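For example, looping over parsed documents from a JSON Formatter step (the documents array and the title and snippet properties are illustrative):

```scriban
{{ for doc in Steps.JSON_Formatter.Value.documents }}
- {{ doc.title }}: {{ doc.snippet }}
{{ end }}
```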
Conditionals
Use if, else if, and else to include or exclude sections of the prompt based on runtime values.
Syntax
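A sketch using the ScheduledTrigger value of Execution.ExecutionSource (the behavioural instructions are illustrative):

```scriban
{{ if Execution.ExecutionSource == 'ScheduledTrigger' }}
This is an automated scheduled run — produce a report, not a conversational reply.
{{ else }}
Answer the user's question directly.
{{ end }}
```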
Whitespace control
By default, expression tags preserve surrounding whitespace. Use dashes to trim it when precise formatting matters.
| Syntax | Effect |
|---|---|
| {{- expression }} | Trim whitespace before the tag |
| {{ expression -}} | Trim whitespace after the tag |
| {{- expression -}} | Trim whitespace on both sides |
Example — inline name without extra spaces
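In Scriban, -}} also trims the newline that follows the tag, so punctuation on the next line ends up flush against the value:

```scriban
Hello {{ User.FirstName -}}
, welcome back.
```

For a user named Jane this produces "Hello Jane, welcome back." rather than leaving a line break before the comma.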
Complete examples
1. Personalised multi-channel assistant
A system prompt that greets the user by name, anchors the current date, and adapts its format to the channel.
2. RAG agent — synthesise knowledge base results
A retrieval-augmented prompt that loops over documents from a Data Store Search step and produces a grounded answer.
3. Intent-routing in a multi-step agent
A prompt that reads the classification output from an earlier step and applies the correct handling instructions for the model downstream.
4. Inventory analysis from a live API
A prompt that receives structured product data from an HTTP step and performs analysis across the full dataset.
5. Shared segment with PromptVariables
Shared segment (in Prompts administration):
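A sketch of such a shared segment (the variable names product_name, tone, and escalation_email are illustrative, not platform-defined):

```scriban
You are the support assistant for {{ PromptVariables.product_name }}.
Always answer in a {{ PromptVariables.tone }} tone.
{{ if PromptVariables.escalation_email != '' }}
If you cannot resolve the issue, direct the user to {{ PromptVariables.escalation_email }}.
{{ end }}
```

Each agent that includes this segment then supplies its own values in the segment's settings panel.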
Best practices
- Start with custom segments. Custom segments are faster to iterate. Promote to shared only when the same content is genuinely needed in multiple agents.
- One purpose per segment. A segment that does one thing well can be composed in many combinations. A segment that tries to do five things is hard to version and impossible to reuse selectively.
- Name your steps descriptively. Step titles become expression keys. Summarizer is much clearer in a prompt than Step_3. Names are also case-sensitive — establish a naming convention for your team.
- Version shared segments intentionally. Every content change creates a new version number. Write a meaningful VersionChangeDescription every time — it is the only audit trail for why the prompt changed.
- Pin production agents to specific versions. Track latest during development; pin to a version number before deploying. This prevents a shared segment update from unintentionally changing agent behaviour in production.
- Guard every dynamic reference. Before referencing Steps or Inputs, ask: could this step have run without producing output? If yes, wrap it in {{ if value != null && value != '' }}.
- Use the Debug tab. Every execution records the raw output of each step. When an expression produces unexpected output, open the Debug tab, inspect the step's actual output, and confirm the exact property path.
Troubleshooting
Expression outputs nothing or shows the raw tag text
Verify you are using double curly braces: {{ expression }}. A single brace { expression } or angle-bracket syntax <expression /> is a legacy format supported only in specific older contexts and will not be evaluated as Scriban.
Step name with spaces is not resolving
The step title in the agent builder is used as the expression key, with every space replaced by an underscore.
| Step title in builder | Expression key |
|---|---|
| JSON Formatter | Steps.JSON_Formatter |
| API Call | Steps.API_Call |
| My Custom Step | Steps.My_Custom_Step |
Capitalisation matters — steps.my_step will not match Steps.My_Step. If you rename a step in the builder, update all references in your prompts manually.
AI Model steps use .Value — HTTP steps use .Output.Body
The access path differs by step type. Mixing them up is the most common cause of empty or error output. Open the Debug tab on any past execution to inspect the raw output of each step and confirm the exact property path before referencing it in a prompt.
Runtime error: variable not found
The template engine runs in strict mode — any reference to a variable or property that does not exist at runtime throws an error and halts execution. Common causes:
- A typo in the step name or property path
- Using Steps for a step that has not run yet in the current execution
- A property that exists in some API responses but is absent in others (always guard with {{ if value != null }})
- Renaming a step in the builder without updating prompt references
- Referencing a PromptVariable that has not been configured for this agent
Cannot iterate over a model step output
for requires an array — it cannot iterate over a plain string. If a model step returns a JSON array as text, add a JSON Formatter step after it to parse the string into a structured object. Then reference Steps.JSON_Formatter.Value.items in your loop.
Shared segment edit did not update my agent
If the agent is pinned to a specific version of the shared segment, newly published versions have no effect until you update the pin. Open the segment's settings in the building section and either move the pin to the new version or switch it to Latest. Also note that an unlinked segment is a custom copy and will never receive shared updates.
PromptVariables resolve as empty strings
PromptVariables must be configured per-agent in the shared segment's settings panel within the building section. If a PromptVariable appears empty:
- Confirm the shared segment has been added to the agent (not just the library)
- Open the segment's settings panel in the building section and verify all variable values are filled in
- Check that the variable name in the segment ({{ PromptVariables.tone }}) exactly matches the key configured in the agent (case-sensitive)
Prompt contains literal {{ }} that should not be evaluated
If your prompt text includes {{ }} as part of the content sent to the model — not as a platform expression — the engine will try to resolve it as a variable and fail. Wrap the literal text in a Scriban string literal so it is output verbatim: {{ '...' }} tells the engine to treat the enclosed content as a plain string and output it as-is.
Tip: If you do not actually need the curly braces in the final prompt text, the simplest fix is to remove them entirely.