Every AI Model step in Agent Studio has a Context panel that controls which runtime information is sent alongside your prompt. These settings determine what the model can see beyond your instructions — the current time, who is asking, what files are attached, and how much of the conversation it remembers. Getting context right has a direct impact on answer quality, token cost, and whether the model behaves consistently across different trigger sources.

Quick reference

| Setting | Default | What it does |
| --- | --- | --- |
| Date and time context | On | Adds a system message with the current date and time |
| Timezone | UTC | Controls which timezone the date/time is presented in |
| User details context | On | Adds a system message with the user’s profile and preferences |
| Always include user input | On | Ensures the original user message reaches the model, even when the step is not directly connected to the Input step |
| Include Attachments | On | Passes uploaded files and images into the model’s context |
| Include the chat history | All history | Sends previous conversation turns so the model can follow multi-turn dialogue |

Date and time context

When enabled, Airia injects a system message before your prompt:
The current date and time is 2026-03-26T14:30:00+00:00;
please use this information for any references to the present moment.
The timestamp is generated fresh on every execution and formatted in ISO 8601. The model uses it to ground any time-sensitive reasoning — deadlines, scheduling, relative dates (“last week”), or seasonal context.
Use it for:
  • Agents that answer questions about dates, deadlines, or schedules
  • Workflow agents triggered on a cron that need to know “today”
  • Any prompt that includes phrases like “as of today” or “this quarter”
Even with this setting enabled, you can also reference the same value inside your prompt text using the {{ Helpers.CurrentDateTime }} expression. The context setting provides it as a separate system message; the expression lets you embed it inline. You can use both — they complement each other.
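As an illustration, here is a minimal sketch of what the model's input conceptually looks like with the toggle enabled. It uses a generic chat-message list, not Airia's actual internal API; the assistant prompt and user question are made-up examples:

```python
from datetime import datetime, timezone

# Sketch (not Airia's code): the toggle injects a separate system message
# carrying a fresh ISO 8601 timestamp, alongside your own prompt.
now = datetime.now(timezone.utc).isoformat(timespec="seconds")
messages = [
    {"role": "system",
     "content": f"The current date and time is {now}; "
                "please use this information for any references to the present moment."},
    {"role": "system", "content": "You are a scheduling assistant."},  # your prompt
    {"role": "user", "content": "What is the deadline if I have 10 days from today?"},
]
```

The timestamp message and your prompt arrive as separate system messages, which is why the setting requires no prompt editing.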

Timezone

Controls the timezone applied to the date/time context message. The dropdown offers three modes:
| Mode | Behaviour | Best for |
| --- | --- | --- |
| UTC (default) | Timestamp is always in UTC (+00:00) | Backend agents, cron jobs, internal tooling where a consistent reference is preferred |
| Custom | A fixed IANA timezone you choose (e.g., America/New_York, Europe/London) | Agents that serve a known geography, such as a customer support bot for US East Coast hours |
| Client | Uses the end user’s browser timezone, sent at execution time | Global-facing chat agents where each user expects times in their own local zone |
When a custom or client timezone is set, the injected message includes the timezone name:
The current date and time is 2026-03-26T10:30:00-04:00 (Eastern Daylight Time);
please use this information for any references to the present moment.
If the client timezone is unavailable at runtime (e.g., the agent was triggered via API without timezone data), Airia falls back to UTC silently.
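The resolution order can be sketched as follows. `resolve_timestamp` is a hypothetical helper written to illustrate the documented fallback, not an Airia API:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def resolve_timestamp(mode, custom_tz=None, client_tz=None):
    """Illustrative sketch of the documented timezone selection:
    client timezone if available, else the configured custom zone,
    else a silent fallback to UTC."""
    if mode == "client" and client_tz:
        tz = ZoneInfo(client_tz)
    elif mode == "custom" and custom_tz:
        tz = ZoneInfo(custom_tz)
    else:
        tz = timezone.utc  # silent fallback when no timezone data is available
    return datetime.now(tz).isoformat(timespec="seconds")
```

For example, an API-triggered execution with `mode="client"` but no browser timezone data would produce a `+00:00` timestamp.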

User details context

When enabled, Airia injects a system message containing the authenticated user’s profile:
Consider the following preferences and information about the user
when crafting your response:
{"General":"Senior account manager at Acme Corp","WorkDescription":"Manages enterprise renewals","PersonalPreferences":"Prefers concise bullet points over long paragraphs"}
This includes three fields from the user’s Airia profile:
| Field | What it contains |
| --- | --- |
| General | Free-text bio or role description |
| WorkDescription | The user’s job function or responsibilities |
| PersonalPreferences | Communication style, formatting, or language preferences |
Use it for:
  • Personalised assistants that adapt tone, detail level, or vocabulary to the user
  • Agents that need to know the user’s role to scope their answers (e.g., executive summary vs. technical deep-dive)
  • Internal tools where users have filled in their Airia profile
User profile data is separate from the {{ User.FirstName }}, {{ User.Email }} template expressions. The context setting sends the full profile JSON as a system message. The expressions let you embed individual fields inline in your prompt. See the Prompts page for the full list of User properties.
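A minimal sketch of how such a system message could be assembled from profile fields; `user_details_message` is a hypothetical helper for illustration, not part of Airia:

```python
import json

def user_details_message(profile):
    """Build a system message from the three documented profile fields,
    skipping any the user left blank. Illustrative only."""
    fields = {k: profile[k]
              for k in ("General", "WorkDescription", "PersonalPreferences")
              if profile.get(k)}
    return ("Consider the following preferences and information about the user\n"
            "when crafting your response:\n" + json.dumps(fields))
```

Empty profile fields are simply omitted from the JSON, so a sparsely filled profile still produces a valid message.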

Always include user input

In a multi-step agent, not every AI Model step is directly connected to the Input step. For example, a three-step pipeline might look like:
Input → Step 1 (Classifier) → Step 2 (Responder)
Step 2 receives its input from Step 1, not from Input. Without this setting, Step 2 would only see the classifier’s output — it would have no idea what the user originally asked. When Always include user input is enabled (the default), Airia ensures the original user message is added to the model’s context regardless of graph wiring.
Keep it on for:
  • Most agents: you almost always want the model to see the user’s original question
  • Multi-step agents where downstream steps need the original request for grounding
  • Agents where the upstream step transforms or summarises the input, but the final model still needs the raw question
Disabling this on a step that is the only AI Model in your agent usually means the model receives no user message at all. Only disable it when the step genuinely should not see the original input.
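The behaviour can be sketched like this. `build_messages` is a hypothetical illustration of how a downstream step's input might be assembled, not Airia's implementation:

```python
def build_messages(step_input, user_message, always_include_user_input=True):
    """Sketch: a downstream step normally sees only the upstream step's
    output; the toggle re-adds the original user message."""
    messages = []
    if always_include_user_input and user_message:
        messages.append({"role": "user", "content": user_message})
    # Output of the upstream step (e.g., a classifier) arrives as step input.
    messages.append({"role": "user", "content": step_input})
    return messages
```

With the toggle off, Step 2 in the pipeline above would receive only the classifier's label and nothing the user actually wrote.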

Include Attachments

When enabled, any files or images the user uploads with their message are added to the model’s context. This includes:
| Included data | Description |
| --- | --- |
| File name | The original filename (e.g., Q1-report.pdf) |
| Content type | MIME type (e.g., application/pdf, image/png) |
| File content | The file’s content, processed for the model to read |
Use it for:
  • Document Q&A agents (“summarise this PDF”, “extract the table from this image”)
  • Agents that analyse uploaded images (charts, screenshots, photos)
  • Any agent where users are expected to attach files as part of their workflow
Not all models support all file types. Vision-capable models (GPT-4o, Claude 3.5 Sonnet, Gemini Pro Vision, etc.) can process images natively. For non-vision models, only text-extractable content (like parsed PDFs) will be useful. Check your model’s capabilities before relying on attachment processing.
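A sketch of that capability check; `attachments_for_model` is a hypothetical filter, not an Airia function, and the attachment shape is assumed from the table above:

```python
def attachments_for_model(attachments, vision_capable):
    """Keep text-extractable attachments always; keep images only for
    vision-capable models. Each attachment is a dict with 'name',
    'content_type', and 'content' keys (assumed shape)."""
    included = []
    for att in attachments:
        if att["content_type"].startswith("image/") and not vision_capable:
            continue  # a non-vision model cannot interpret raw image data
        included.append(att)
    return included
```

In practice this is why a chart screenshot sent to a text-only model yields no useful answer, while the same agent backed by a vision model handles it fine.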

Include the chat history

Controls how much of the conversation the model can see. Click the dropdown to choose one of three modes:

All history

Every previous message in the conversation is included. The model has full context of everything said so far.

Last N messages

Only the most recent N messages are included. You specify the number. Older messages are silently dropped.

No history

No previous messages are included. The model treats every request as if it is the first message in the conversation.

Choosing the right mode

| Mode | Best for | Trade-off |
| --- | --- | --- |
| All history | Conversational assistants, multi-turn reasoning, coaching bots | Token usage grows with conversation length. Long conversations can hit context window limits. |
| Last N messages | Assistants with long sessions where only recent context matters (e.g., customer support, troubleshooting) | Good balance of continuity and cost control. Experiment with N; 10–20 messages is a common starting point. |
| No history | Single-shot tasks (classification, data extraction), webhook-triggered agents, scheduled pipelines | No conversational memory. Each execution is stateless. Lowest token cost. |
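The three modes can be sketched in a few lines; `select_history` is an illustrative helper, not an Airia API, and `history` is assumed to be an oldest-first list of messages:

```python
def select_history(history, mode, n=10):
    """Sketch of the three documented history modes."""
    if mode == "all":
        return history          # full conversation, cost grows per turn
    if mode == "last_n":
        return history[-n:]     # older messages are silently dropped
    return []                   # "none": every request starts fresh
```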
If you notice in the Debug tab that the model is repeating itself or contradicting earlier answers, check your chat history setting. “All history” with very long conversations can push important context out of the model’s effective attention window. Switching to “Last N” often fixes this.

How context settings relate to prompt expressions

Context settings and prompt expressions are two complementary ways to give the model runtime information. They are not mutually exclusive — many agents use both.
| Mechanism | How it works | When to use |
| --- | --- | --- |
| Context settings (this page) | Airia injects pre-formatted system messages automatically. You do not write them; they appear alongside your prompt. | When you want the information available to the model without embedding it in your prompt text. Simple toggle, zero prompt editing. |
| Prompt expressions ({{ }}) | You write Scriban expressions directly inside your prompt. They are evaluated at runtime and replaced with values. | When you need precise control over wording, placement, or conditional logic around the data. |
Example — Date/time both ways: The Date and time context toggle adds this system message automatically:
The current date and time is 2026-03-26T14:30:00+00:00;
please use this information for any references to the present moment.
The prompt expression {{ Helpers.CurrentDateTime }} lets you embed the same timestamp inside your own instructions:
Today is {{ Helpers.CurrentDateTime }}. Any deadline the user mentions
should be compared against this date.
Using both is fine — the model sees the information twice, but from different angles. The context setting provides a generic grounding message; the expression lets you frame it specifically for your use case.
Example configuration

A general-purpose chat agent that talks to end users.

| Setting | Value | Why |
| --- | --- | --- |
| Date and time context | On | Users ask time-sensitive questions |
| Timezone | Client | Each user sees times in their own zone |
| User details context | On | Personalises tone and depth |
| Always include user input | On | Model always sees the question |
| Include Attachments | On | Users may share files |
| Include the chat history | All history | Multi-turn conversation requires full context |

Troubleshooting

The model gets today’s date wrong

Enable Date and time context in the Context panel. Without it, the model can only guess the date from its training data, which will be wrong. If it is already enabled and the model still gets the date wrong, check whether your prompt contains conflicting date references (e.g., a hardcoded “Today is January 1, 2025” in a shared segment). The model may trust the explicit prompt text over the injected system message.
The model repeats itself or contradicts earlier answers

This usually happens with All history on long conversations. As the conversation grows, the model’s effective attention can drift. Try switching to Last N messages (start with 10–20) to keep the context focused on the recent exchange.
Token usage is higher than expected

Two common causes:
  • Chat history set to All: Every previous message is sent on every turn. A 50-message conversation means 50+ messages in the context window on the 51st turn.
  • Attachments enabled with large files: Uploaded documents consume significant tokens. If your agent does not process files, disable Include Attachments.
Use the Debug tab on any execution to inspect the full context sent to the model and identify which component is consuming the most tokens.
A downstream step cannot see the user’s question

Check that Always include user input is enabled on the AI Model step that needs to see the original message. In multi-step flows, downstream steps do not receive the user’s input automatically unless this setting is on.