Prompts in Airia are more than instruction text. They are versioned, composable assets that sit at the intersection of how an agent thinks, what it knows about the person it’s talking to, and what happened in the steps before it. Getting prompts right is the single biggest lever on agent quality. This page covers everything: the segment architecture, how versioning actually works under the hood, the complete expression language reference, and patterns for building prompts that scale across dozens of agents.

Two kinds of prompts

System Prompts

Added to an AI model step inside an agent. Sets the model’s persona, constraints, reasoning style, and access to runtime context. Evaluated before the user message every time the step runs.

User Prompts

Templates surfaced in chat-focused agent deployments (see Skills). Controls how the user’s message is structured before it reaches the model. Useful for scaffolding multi-field inputs.
Unless you are building a chat skill with structured input, you are working with system prompts.

The segment architecture

A prompt is not a single text block. It is an ordered list of segments that are joined top-to-bottom and sent to the model as one assembled string. This lets you compose reusable building blocks into agent-specific instructions without copy-pasting.
┌─────────────────────────────────────────────────────┐
│                 Consolidated Prompt                  │
│    (read-only preview — what the model receives)     │
└──────────┬──────────────┬──────────────┬────────────┘
           │              │              │
    ┌──────▼─────┐ ┌──────▼──────┐ ┌────▼────────┐
    │  Custom    │ │   Shared    │ │   Shared    │
    │  Segment   │ │  Segment A  │ │  Segment B  │
    │ (this      │ │  v3 pinned  │ │  (latest)   │
    │  agent     │ │             │ │             │
    │  only)     │ │             │ │             │
    └────────────┘ └─────────────┘ └─────────────┘
The editor has three areas:

1. Consolidated Prompt View

A live, read-only assembly of all segments in order — exactly as the model will receive them at runtime. Use this to sanity-check spacing, flow, and variable placement before publishing.

2. Building Section

Where you author and arrange the segments that make up this agent’s prompt. Drag handles let you reorder; each segment can be expanded, collapsed, or removed independently.

3. Shared Prompts Library

A searchable panel of every shared segment available in your workspace. Drag any segment into the building section to include it, then pick which version to pin.

Segment types in depth

Understanding the difference between custom and shared segments — and when to use each — is foundational to building maintainable agents.
A custom segment belongs entirely to one agent. It is:
  • Authored inline in the building section — no separate editor or admin page
  • Versioned with the agent — when you publish a new version of the agent, the current state of all custom segments is captured as part of that agent version
  • Invisible to other agents — it does not appear in the shared library and cannot be referenced elsewhere
  • Freely editable at any time — no coordination with other teams required
Custom segments are stored with IsAgentSpecific: true internally. This flag excludes them from the workspace prompt library so they never appear in another agent’s segment picker.
When to use:
  • You are building a new agent and iterating quickly
  • The instructions are specific to this agent’s domain, persona, or step configuration
  • You are prototyping something you may later promote to a shared segment
  • The content is genuinely unique and will never be needed elsewhere
Once you find yourself copying the same custom segment into a third agent, that is the signal to promote it. Shared segments give you one place to update and one version history to track.

Versioning

Versioning is how Airia ensures prompt changes are intentional, traceable, and safe to roll out incrementally. The model differs between custom and shared segments.

Custom segment versioning

Custom segments do not have their own independent version history. Their content is captured as part of the agent version each time you publish. To see how a custom segment looked in a previous release, view the corresponding agent version in the agent’s version history panel. This means:
  • There is no v1, v2 concept for a custom segment in isolation
  • Rollback means rolling back the entire agent version
  • You can freely edit a custom segment between publishes without creating intermediate versions

Shared segment versioning

Shared segments have a first-class version lifecycle that is completely independent of any agent.
How a new version is created: When you save a shared segment in the Prompts administration area, Airia compares the new content against the current latest version:
  • If only the change description changed, the description is updated on the current version. No new version is created, and the version number does not increment.
  • If the prompt content changed (any text change), a new version is created. The version number increments by 1, the new version becomes IsLatest: true, and the previous version’s IsLatest flag is set to false.
This means version numbers are content-only milestones — cosmetic description fixes never clutter the history.
Pinning in agents: When you add a shared segment to an agent, you choose which version to use:
  • Specific version (e.g., v3): the agent always uses exactly that version. New versions published to the shared segment have no effect until you manually update the pin.
  • Latest: the agent always uses whatever version is currently marked IsLatest. Publishing a new version of the shared segment takes effect immediately for this agent on the next execution.
Pinning to a specific version is safer for production agents. Pinning to latest is convenient during development when a shared segment is actively being iterated.
Viewing version history: In the Prompts administration area, select any shared segment to view its full version list — version number, change description, author, and timestamp. You can preview the content of any past version.

PromptVariables: parameterising shared segments

PromptVariables solve a specific problem: you want a shared segment that is reused across many agents, but each agent needs to supply slightly different values to it. Rather than maintaining many near-identical shared segments, you declare placeholders inside the shared segment using {{ PromptVariables.name }}, and each agent that includes the segment supplies its own value for name.
Shared segment definition (in Prompts administration):
You are a customer support assistant for {{ PromptVariables.brand_name }}.
Always respond in {{ PromptVariables.response_language }}.
Your tone should be {{ PromptVariables.tone }}.
Agent A supplies:
brand_name = "Acme Corp"
response_language = "English"
tone = "friendly and concise"
Agent B supplies:
brand_name = "GlobalBank"
response_language = "formal French"
tone = "professional and precise"
Both agents share the same segment and the same version history. The PromptVariables values are configured per-agent in the segment’s settings panel after you add it to the building section.
PromptVariables values are always strings. If you need conditional logic based on a PromptVariable, use a Scriban {{ if }} block in the shared segment.
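For example, a shared segment can branch on a PromptVariable directly. A minimal sketch, assuming a hypothetical escalation_allowed value that each agent supplies as the string 'true' or 'false':

```
{{ if PromptVariables.escalation_allowed == 'true' }}
If you cannot resolve the issue, offer to escalate to a human specialist.
{{ else }}
Do not offer escalation; instead, direct the user to the self-service help centre.
{{ end }}
```

Note the comparison against the string 'true': because PromptVariables values are always strings, there is no boolean to test.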

Unlinking a shared segment

If you want to take a shared segment as a starting point and diverge from it for a specific agent:
  1. Click ··· on the shared segment in the building section
  2. Select Unlink
  3. The segment becomes a custom segment — its content is copied into the agent, and it is detached from the shared version history
  4. It will no longer receive updates from the shared segment and will not affect other agents
Unlinking is permanent. If you want to re-attach, you must remove the custom segment and re-add the shared segment from the library.

Template expressions

Every prompt in Airia is processed as a Scriban template before being sent to the model. Scriban is a lightweight templating language — expressions wrapped in {{ }} are evaluated at runtime and replaced with their values.
The prompt editor includes built-in autocomplete. Type {{ anywhere in a segment to browse all available variables, their properties, and control-flow snippets.

Syntax reference

  • {{ Variable }}: output a single value
  • {{ Object.Property }}: access a nested property
  • {{ Object.Array[0] }}: access an array element by index (zero-based)
  • {{ for item in Array }}...{{ end }}: iterate over a list
  • {{ if condition }}...{{ else }}...{{ end }}: conditional content
  • {{- expression }}: evaluate and trim whitespace before the tag
  • {{ expression -}}: evaluate and trim whitespace after the tag
  • {{- expression -}}: trim whitespace on both sides
  • {{ 'literal text' }}: output a raw string without evaluation (useful for escaping literal {{ }})
The template engine runs in strict mode. Referencing a variable or property that does not exist at runtime throws an error and halts the execution. Always guard optional values with {{ if value != null }} checks.
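As a sketch, the guard pattern looks like this (the step name Entity_Extractor is illustrative, standing in for any step whose output may be missing or empty):

```
{{ if Steps.Entity_Extractor.Value != null && Steps.Entity_Extractor.Value != '' }}
Extracted entities: {{ Steps.Entity_Extractor.Value }}
{{ else }}
No entities were extracted for this request.
{{ end }}
```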

Available variables

Eight top-level variables are injected into every agent prompt automatically at runtime. You never declare them — they are always available.

Execution

Metadata about the current run. Use this to adapt behaviour based on how and from where the agent was triggered.
  • Execution.UserInput (string): the raw text input that triggered this execution
  • Execution.ExecutionId (string): unique identifier for this run — useful for logging and audit trails
  • Execution.ConversationId (string): conversation session ID when running inside a chat interface
  • Execution.AgentId (string): ID of the currently executing agent
  • Execution.AgentName (string): display name of the currently executing agent
  • Execution.ExecutionSource (string): the trigger source — see values below
  • Execution.SenderEmail (string): email address of the sender (populated on email-triggered executions only)
  • Execution.FileMetadata (array): metadata for files attached to this execution
  • Execution.ImageMetadata (array): metadata for images attached to this execution
Execution.ExecutionSource values
  • Controller: direct API call or execution from the Airia platform UI
  • SlackBot: triggered via the Slack integration
  • TeamsBot: triggered via Microsoft Teams
  • WhatsAppBot: triggered via WhatsApp
  • ScheduledTrigger: triggered by a cron / scheduled job
  • EmailTrigger: triggered by an inbound email
  • WebhookTrigger: triggered by an incoming webhook POST
Execution.FileMetadata items
Each item in the FileMetadata array has:
  • FileId (string): blob storage ID of the file
  • Name (string): original filename
  • Content (string): Base64-encoded file content
  • ContentType (string): MIME type of the file
Execution.ImageMetadata items
Each item in the ImageMetadata array has:
  • ImageId (string): blob storage ID of the image
  • Name (string): original image filename
  • Base64Data (string): Base64-encoded image data
  • ContentType (string): MIME type of the image
Example — multi-channel adaptation
You are responding as {{ Execution.AgentName }}.
User request: {{ Execution.UserInput }}

{{ if Execution.ExecutionSource == 'SlackBot' }}
Format your response for Slack: use bullet points, keep it under 200 words, and add relevant emoji.
{{ else if Execution.ExecutionSource == 'EmailTrigger' }}
Format your response as a professional email reply. The sender's address is {{ Execution.SenderEmail }}.
{{ else if Execution.ExecutionSource == 'TeamsBot' }}
Format your response for Microsoft Teams: professional tone, clear sections, no emoji.
{{ else }}
Provide a thorough, well-structured answer.
{{ end }}

{{ if Execution.FileMetadata.size > 0 }}
The user has attached {{ Execution.FileMetadata.size }} file(s):
{{ for file in Execution.FileMetadata }}
- {{ file.Name }} ({{ file.ContentType }})
{{ end }}
{{ end }}

User

Information about the authenticated user who triggered the execution. Use this to personalise responses, apply role-based instructions, or route behaviour based on group membership.
  • User.Id (string): the user’s unique identifier
  • User.Email (string): the user’s email address
  • User.Name (string): the user’s full name
  • User.FirstName (string): the user’s first name
  • User.LastName (string): the user’s last name
  • User.Roles (string[]): roles assigned to this user
  • User.Groups (string[]): groups this user belongs to
Example — role-aware instructions
You are assisting {{ User.FirstName }} {{ User.LastName }} ({{ User.Email }}).

{{ if User.Roles contains 'Admin' }}
This user is a workspace administrator. You may discuss internal configuration, usage statistics, and billing details.
{{ else if User.Groups contains 'Engineering' }}
This user is in the Engineering group. Include technical depth, code samples, and architectural context where relevant.
{{ else }}
Provide a clear, non-technical explanation. Avoid internal jargon.
{{ end }}

Helpers

Runtime utilities computed fresh each execution. Currently contains one property.
  • Helpers.CurrentDateTime (string): current date and time in ISO 8601 format (UTC)
This is the most important grounding tool for time-sensitive agents. Language models have training cutoffs and will hallucinate dates if not explicitly anchored.
Example
Today's date and time is {{ Helpers.CurrentDateTime }}.
When answering anything time-sensitive — deadlines, holidays, news events — use this date as your reference.
Do not estimate or infer the current date from your training data.

Inputs

Output values from the steps directly connected to this step in the agent graph (i.e., the immediate parents in the flow). Step titles become the key, with every space replaced by an underscore.
Access pattern
{{ Inputs.Step_Name.Value }}
Use Inputs when you specifically want to scope references to the steps wired directly into the current node. Use Steps (below) to reach any step that has already run in the agent, regardless of graph position.
{{ Inputs.Summarizer.Value }}

Steps

Output values from all steps that have already executed in the agent, regardless of their position in the graph relative to the current step. Same key convention as Inputs.
Access pattern
{{ Steps.Step_Name.Value }}
{{ Steps.Classification_Step.Value }}

Access paths by step type

  • AI Model: Steps.Name.Value contains the model’s full text response as a string
  • JSON Formatter: Steps.Name.Value.property navigates a fully parsed JSON object
  • HTTP / SDK: Steps.Name.Output.Body.property navigates the HTTP response body as an object
  • Data Store Search: Steps.Name.Value is a JSON string of retrieved documents
  • MarkItDown: Steps.Name.MarkdownContent is the extracted markdown text from a document
Step names are case-sensitive and space-to-underscore converted: a step titled My Step becomes Steps.My_Step, and a step titled my step would not match Steps.My_Step. Rename steps carefully in the builder — references in prompts must be updated manually.
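Putting these paths side by side, a sketch that references three hypothetical steps titled AI Summary, Parse Response, and Fetch Orders:

```
Summary: {{ Steps.AI_Summary.Value }}
Customer tier: {{ Steps.Parse_Response.Value.customer.tier }}
First order ID: {{ Steps.Fetch_Orders.Output.Body.orders[0].id }}
```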

Variables

Custom key-value pairs passed into the agent at execution time, defined in the agent’s Input step. Unlike Execution and User which are system-provided, Variables are explicitly configured per-agent to accept dynamic runtime parameters.
Examples
{{ Variables.documentType }}
{{ Variables.language }}
{{ Variables.threshold }}
{{ Variables.customerTier }}
Variables are always strings. Define them in the Input step’s variable configuration and supply their values when triggering the agent — via the API, a scheduled trigger, or a connected step in a parent agent.
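A short sketch combining the Variables above (all hypothetical keys defined in an agent's Input step):

```
Produce a {{ Variables.documentType }} written in {{ Variables.language }}.
{{ if Variables.customerTier == 'enterprise' }}
Include the dedicated account manager's contact details in the footer.
{{ end }}
```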

PromptVariables

Placeholder values declared inside a shared segment and supplied by each individual agent that includes that segment. This is the mechanism for parameterising shared segments without duplicating them.
Examples
{{ PromptVariables.tone }}
{{ PromptVariables.response_language }}
{{ PromptVariables.brand_name }}
{{ PromptVariables.output_format }}
These values are configured per-agent in the shared segment’s settings panel within the building section. They are resolved before the assembled prompt is sent to the model.
PromptVariables are only meaningful inside shared segments. Using them in a custom segment will result in empty strings at runtime, since there is no source to supply the values.

InputSchema

The schema definition for the agent’s declared inputs. Useful in meta-prompting or validation steps that need to inspect the expected structure of inputs programmatically.
Example
{{ InputSchema.fieldName }}
This is an advanced variable. Most agents do not need it — it is primarily relevant when building agents that introspect or document their own input requirements.

Control flow

For loops

Iterate over any array using for ... in ... end.
Syntax
{{ for item in someArray }}
  {{ item }}
{{ end }}
Here are the retrieved documents:

{{ for doc in Steps.DataStore_Search.Value.documents }}
---
Source: {{ doc.source }}
{{ doc.content }}
{{ end }}
for requires an actual array. It cannot iterate over a plain string. If a model step returns a JSON array as text, add a JSON Formatter step to parse it into a structured object first.
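For example, suppose a model step emits {"tags": ["billing", "refund"]} as text, and a JSON Formatter step (hypothetically titled Parse Tags) parses it into a structured object. The loop can then iterate the parsed array:

```
Relevant tags:
{{ for tag in Steps.Parse_Tags.Value.tags }}
- {{ tag }}
{{ end }}
```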

Conditionals

Use if, else if, and else to include or exclude sections of the prompt based on runtime values.
Syntax
{{ if condition }}
  ...
{{ else if other_condition }}
  ...
{{ else }}
  ...
{{ end }}
{{ if Execution.ExecutionSource == 'SlackBot' }}
Respond in Slack. Be concise, use bullet points, and add emoji where appropriate.
{{ else if Execution.ExecutionSource == 'EmailTrigger' }}
Respond via email. Use formal structure with a brief summary at the top.
{{ else }}
Provide a detailed, well-structured response.
{{ end }}

Whitespace control

By default, expression tags preserve surrounding whitespace. Use dashes to trim it when precise formatting matters.
  • {{- expression }}: trim whitespace before the tag
  • {{ expression -}}: trim whitespace after the tag
  • {{- expression -}}: trim whitespace on both sides
Example — inline name without extra spaces
Hello,{{- ' ' -}}{{ User.FirstName }}. How can I help you today?
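Trimming is also useful for keeping control-flow tags from leaving blank lines in the assembled prompt. A sketch: without the dashes, the lines holding the if and end tags would render as empty lines.

```
{{- if User.FirstName != '' -}}
Hello, {{ User.FirstName }}.
{{- end -}}
```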

Complete examples

1. Personalised multi-channel assistant

A system prompt that greets the user by name, anchors the current date, and adapts its format to the channel.
You are a helpful assistant for {{ User.FirstName }} {{ User.LastName }}.
Today is {{ Helpers.CurrentDateTime }}. Use this as your reference for all time-sensitive reasoning.

{{ if Execution.ExecutionSource == 'SlackBot' }}
You are responding in Slack. Keep your answer under 150 words, use bullet points, and add a relevant emoji at the start.
{{ else if Execution.ExecutionSource == 'EmailTrigger' }}
You are responding via email to {{ Execution.SenderEmail }}. Write a professional reply with a summary at the top, followed by details.
{{ else if Execution.ExecutionSource == 'TeamsBot' }}
You are responding in Microsoft Teams. Professional tone, clear sections, concise paragraphs. No emoji.
{{ else }}
Provide a thorough, well-structured response with clear headings where helpful.
{{ end }}

User request: {{ Execution.UserInput }}

2. RAG agent — synthesise knowledge base results

A retrieval-augmented prompt that loops over documents from a Data Store Search step and produces a grounded answer.
Based on the user's query "{{ Execution.UserInput }}", the knowledge base returned the following documents.

Synthesise them into a clear, accurate answer. Cite the source name for each claim.
If the documents do not contain enough information to answer confidently, say so explicitly — do not speculate.

{{ if Steps.DataStore_Search.Value != null && Steps.DataStore_Search.Value != '' }}
{{ for doc in Steps.DataStore_Search.Value.documents }}
---
Source: {{ doc.source }}
{{ doc.content }}
{{ end }}
{{ else }}
No documents were retrieved. Answer from your general knowledge and flag any uncertainty.
{{ end }}

3. Intent-routing in a multi-step agent

A prompt that reads the classification output from an earlier step and applies the correct handling instructions for the model downstream.
## Request context
Detected intent:  {{ Steps.Intent_Classifier.Value }}
Customer name:    {{ Steps.Entity_Extractor.Value.customer.name }}
Account ID:       {{ Steps.Entity_Extractor.Value.customer.accountId }}
Submission date:  {{ Helpers.CurrentDateTime }}

## Handling instructions
{{ if Steps.Intent_Classifier.Value == 'billing' }}
This is a billing enquiry. Retrieve the customer's invoice history and explain any outstanding charges clearly. Avoid technical jargon. Offer to escalate to the finance team if the issue is unresolved.
{{ else if Steps.Intent_Classifier.Value == 'technical' }}
This is a technical support request. Diagnose the issue step by step and offer at least two resolution paths. Link to relevant documentation where available.
{{ else if Steps.Intent_Classifier.Value == 'cancellation' }}
This is a cancellation request. Acknowledge the intent, ask one clarifying question about the reason, and present one retention offer before proceeding with cancellation steps.
{{ else }}
Handle this as a general enquiry. Be helpful and clear, and suggest routing to a human specialist if the topic falls outside your scope.
{{ end }}

## User message
{{ Execution.UserInput }}

4. Inventory analysis from a live API

A prompt that receives structured product data from an HTTP step and performs analysis across the full dataset.
The inventory system returned {{ Steps.Product_API.Output.Body.total }} products as of {{ Helpers.CurrentDateTime }}.

Review the list below and:
1. Identify all items with stock below 10 units (at-risk of stockout)
2. Flag items with a price-to-stock ratio above 100 (high-value low-stock)
3. Recommend a reorder priority (High / Medium / Low) for each flagged item

{{ for product in Steps.Product_API.Output.Body.data.products }}
- {{ product.name }} (SKU: {{ product.sku }})
  Price: ${{ product.price }} | Stock: {{ product.stock }} | Category: {{ product.category }}
{{ end }}

5. Shared segment with PromptVariables

Shared segment (in Prompts administration):
You are a customer service assistant representing {{ PromptVariables.brand_name }}.

Always respond in {{ PromptVariables.response_language }}.
Your communication style should be {{ PromptVariables.tone }}.

{{ if PromptVariables.escalation_email != '' }}
If a query is outside your scope, direct the user to {{ PromptVariables.escalation_email }} for further support.
{{ end }}
Agent A (consumer electronics brand):
brand_name          = Nexar Tech
response_language   = English
tone                = friendly, casual, and solution-focused
escalation_email    = support@nexartech.com
Agent B (enterprise SaaS):
brand_name          = CloudCore Platform
response_language   = formal English
tone                = precise and professional
escalation_email    = enterprise-support@cloudcore.io
Both agents use the same segment version. Updating the shared segment’s instructions automatically propagates to both agents (if they are pinned to latest).

Best practices

  • Start with custom segments. Custom segments are faster to iterate. Promote to shared only when the same content is genuinely needed in multiple agents.
  • One purpose per segment. A segment that does one thing well can be composed in many combinations. A segment that tries to do five things is hard to version and impossible to reuse selectively.
  • Name your steps descriptively. Step titles become expression keys. Summarizer is much clearer in a prompt than Step_3. Names are also case-sensitive — establish a naming convention for your team.
  • Version shared segments intentionally. Every content change creates a new version number. Write a meaningful VersionChangeDescription every time — it is the only audit trail for why the prompt changed.
  • Pin production agents to specific versions. Track latest during development; pin to a version number before deploying. This prevents a shared segment update from unintentionally changing agent behaviour in production.
  • Guard every dynamic reference. Before referencing Steps or Inputs, ask: could this step have run without producing output? If yes, wrap it in {{ if value != null && value != '' }}.
  • Use the Debug tab. Every execution records the raw output of each step. When an expression produces unexpected output, open the Debug tab, inspect the step’s actual output, and confirm the exact property path.

Troubleshooting

Verify you are using double curly braces: {{ expression }}. A single brace { expression } or angle-bracket syntax <expression /> is a legacy format supported only in specific older contexts and will not be evaluated as Scriban.
The step title in the agent builder is used as the expression key, with every space replaced by an underscore.
  • JSON Formatter becomes Steps.JSON_Formatter
  • API Call becomes Steps.API_Call
  • My Custom Step becomes Steps.My_Custom_Step
Capitalisation matters — steps.my_step will not match Steps.My_Step. If you rename a step in the builder, update all references in your prompts manually.
The access path differs by step type. Mixing them up is the most common cause of empty or error output. An AI Model step is read with {{ Steps.My_Model.Value }}, a JSON Formatter step with {{ Steps.My_Formatter.Value.property }}, and an HTTP step with {{ Steps.My_API.Output.Body.property }} (step names here are illustrative).
Open the Debug tab on any past execution to inspect the raw output of each step and confirm the exact property path before referencing it in a prompt.
The template engine runs in strict mode — any reference to a variable or property that does not exist at runtime throws an error and halts execution. Common causes:
  • A typo in the step name or property path
  • Using Steps for a step that has not run yet in the current execution
  • A property that exists in some API responses but is absent in others (always guard with {{ if value != null }})
  • Renaming a step in the builder without updating prompt references
  • Referencing a PromptVariable that has not been configured for this agent
for requires an array — it cannot iterate over a plain string. If a model step returns a JSON array as text, add a JSON Formatter step after it to parse the string into a structured object. Then reference Steps.JSON_Formatter.Value.items in your loop.
Agents pinned to a specific version number will not receive updates automatically. After publishing a new version of a shared segment:
  1. Open the agent in Agent Studio
  2. Locate the shared segment in the building section
  3. Change the version pin from the old version number to the new one (or to latest)
  4. Save and publish the agent
Only agents pinned to latest receive new shared segment versions automatically.
PromptVariables must be configured per-agent in the shared segment’s settings panel within the building section. If a PromptVariable appears empty:
  • Confirm the shared segment has been added to the agent (not just the library)
  • Open the segment’s settings panel in the building section and verify all variable values are filled in
  • Check that the variable name in the segment ({{ PromptVariables.tone }}) exactly matches the key configured in the agent (case-sensitive)
If your prompt text includes {{ }} as part of the content sent to the model — not as a platform expression — the engine will try to resolve it as a variable and fail. Wrap the literal text in a Scriban string literal so it is output verbatim:
Generate a report for {{ '{{fiscal_year}}' }} following the {{ '{{report_template}}' }} standard.
{{ '...' }} tells the engine to treat the enclosed content as a plain string and output it as-is.
Tip: If you do not actually need the curly braces in the final prompt text, the simplest fix is to remove them entirely.