Memory Objects let agents persist data across executions. Use them to store user preferences, summaries, workflow state, or other information that should survive beyond a single run.

What Memory Is For

Memory is best suited for:
  • Storing structured or semi-structured context that should persist over time
  • Personalizing responses based on previous interactions
  • Tracking progress in multi-step workflows
  • Saving summaries instead of replaying full chat history every time
Memory is not the same as chat history. For most conversational use cases, use the AI Model step’s built-in Include Chat History setting first, and use Memory only when you need more control.

Create a Memory Object

1. Open your project
   Navigate to Projects and select the project you want to work in.
2. Go to the Memory tab
   Open the Memory tab from the top navigation.
3. Create a Memory Object
   Click New Memory, then enter a name for the Memory Object.
4. Configure scoping
   Optionally configure how the memory should be isolated. See Memory Scoping.

Use Memory in an Agent

There are two ways to work with Memory in an agent: Memory Steps, which run deterministically as part of the workflow, and Memory Tools, which let the model decide when memory is relevant.
Use Memory Steps when memory access should happen deterministically on every execution:
  1. Open your agent in Agent Studio.
  2. Drag a Memory Load or Memory Store step onto the canvas.
  3. Open the step configuration and select the Memory Object.
  4. Connect the step into your workflow.
  5. In the AI Model step’s Instructions field, add <memory value="Your Memory Name" /> exactly where you want the memory contents inserted.
Recommended pattern
  • Place Memory Load near the start of the workflow
  • Send both the user input and loaded memory into the AI Model step
  • Place Memory Store after the AI Model step to save updated context
Connecting a Memory Load step to the AI Model step is not enough by itself. The AI Model prompt must explicitly include a <memory> tag that references the Memory Object by name.

Required: Reference Memory in Your Prompt

The <memory> tag is required for Memory Steps to work. Adding a Memory Load step does not automatically make the AI Model use the memory. You must explicitly reference the Memory Object in the AI Model prompt using the <memory> tag. Without this tag, the loaded memory is not injected into the model’s prompt context.
Use this syntax in the AI Model step’s Instructions field:
<memory value="My Memory Name" />
The value must match the exact name of the Memory Object, not its ID. Example:
You are a helpful assistant. Use the following memory from previous interactions:

<memory value="Chat History Summary" />

Use this information when it is relevant to the user's request.
You can place the tag anywhere in the prompt where you want the memory contents to appear. That placement determines where the loaded memory is inserted into the model context.
The canvas connection shows data flow between steps, but it does not inject memory into the AI Model prompt automatically. The <memory> tag in the AI Model Instructions field is what makes the loaded memory available to the model.

Load vs Store

Memory Load

Retrieves the current contents of a Memory Object and passes them downstream.

Memory Store

Writes data into a Memory Object. By default, Memory Store overwrites the full contents of the Memory Object. If you enable Append Text, new content is added to the existing memory instead of replacing it.
When Append Text is off, Store replaces everything currently in memory. If you need to preserve existing content, first load the memory, merge the new data, and then store the combined result.
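The overwrite-versus-append behavior can be sketched as follows. This is an illustrative model of the step's semantics, not a real API; `memory_store` is a hypothetical stand-in:

```python
def memory_store(current: str, new_content: str, append_text: bool) -> str:
    """Models what a Memory Store step writes back to the Memory Object."""
    if append_text:
        # Append Text on: new content is added after the existing memory.
        return current + "\n" + new_content if current else new_content
    # Append Text off: the store replaces everything currently in memory.
    return new_content

# To preserve existing content with Append Text off:
# load first, merge, then store the combined result.
existing = "User prefers metric units."
merged = memory_store(existing, "User's language is German.", append_text=True)
```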

Memory Scoping

Scoping determines how memory is isolated.
| Scope | isUserSpecific | isConversationScoped | Behavior |
| --- | --- | --- | --- |
| Global | false | false | One shared memory for all users and conversations |
| User-specific | true | false | Each user has a separate memory |
| Conversation-scoped | false | true | Each conversation has a separate memory |
| User + Conversation | true | true | Separate memory per user per conversation |
When both isUserSpecific and isConversationScoped are enabled, memory is isolated by the combination of user and conversation.

Pass Identity for Scoped Memory

If your Memory Object uses user or conversation scoping, execution requests must include the relevant identifiers.
  • userId or externalUserId for user-specific memory
  • conversationId for conversation-scoped memory
POST /v2/PipelineExecution/{pipelineId}
{
  "userInput": "What's the weather like?",
  "userId": "user-123",
  "conversationId": "conv-456"
}
If scoped memory is configured but the required identifiers are missing, Memory Load may return empty results or target the wrong memory instance.

Common Use Cases

User preferences
Store persistent settings such as tone, language, interests, or formatting preferences.
Typical flow:
1. Load the existing preferences
2. Detect updates from the current interaction
3. Save the updated preferences
4. Use isUserSpecific: true
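The load-detect-save flow above can be sketched as a simple merge; `update_preferences` is a hypothetical helper, not part of the platform:

```python
def update_preferences(loaded: dict, detected: dict) -> dict:
    """Merge updates detected this run into previously stored preferences."""
    merged = dict(loaded)    # start from the loaded memory; don't discard it
    merged.update(detected)  # newer values win
    return merged

stored = {"tone": "formal", "language": "en"}   # from a Memory Load step
updates = {"language": "de"}                    # detected from the current interaction
new_prefs = update_preferences(stored, updates)
# new_prefs would then be written back by a Memory Store step
```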
Workflow state
Track progress in multi-step tasks such as onboarding, form completion, or guided workflows.
Typical flow:
1. Load the current state
2. Determine the next step
3. Store the updated state
4. Use isUserSpecific and isConversationScoped when the workflow should stay isolated to a single user session
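A minimal sketch of the state flow, assuming a hypothetical onboarding sequence stored as JSON-like state in memory:

```python
STEPS = ["collect_email", "verify_email", "set_preferences", "done"]  # example flow

def next_step(state: dict) -> dict:
    """Given the loaded state, determine the next step and return state to store."""
    completed = state.get("completed", [])
    remaining = [s for s in STEPS if s not in completed]
    current = remaining[0] if remaining else "done"
    return {"completed": completed, "current": current}

# Load -> determine next step -> store the updated state.
state = next_step({"completed": ["collect_email"]})
```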
Conversation summaries
Store a rolling summary of prior conversations instead of sending raw message history on every run.
This is useful when:
- Full chat history is too long
- Only key facts should persist
- You want lower token usage with retained context
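A rolling summary can be approximated by appending key facts and trimming to a size cap. The cap and helper below are illustrative; in practice you would usually ask the model to re-summarize rather than truncate:

```python
MAX_SUMMARY_CHARS = 2000  # illustrative cap to keep token usage low

def roll_summary(previous_summary: str, new_facts: str) -> str:
    """Append this run's key facts to the stored summary, trimming the oldest text."""
    combined = (previous_summary + "\n" + new_facts).strip()
    # Keep only the most recent portion of the summary.
    return combined[-MAX_SUMMARY_CHARS:]
```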
Shared context
Use global memory for data that should be shared across all executions, such as common instructions, shared reference context, or team-level settings. Only use global memory when the data is truly shared.

Chat History vs Memory

For most chatbot use cases, start with the AI Model step’s built-in Include Chat History setting. Use Chat History when:
  • You want recent messages automatically included
  • You do not need custom storage logic
  • You want the fastest setup
Use Memory when:
  • You want to persist summaries or extracted facts
  • You need custom control over what gets stored
  • You want memory to outlive a single conversation
  • You need deterministic read/write behavior in a workflow

Troubleshooting

Memory Load returns empty or missing data
Check the following:
  • A Memory Store step is connected and actually executes
  • The correct Memory Object is selected
  • Scoped executions are passing the same userId and/or conversationId
Common causes:
  • No prior write has occurred
  • A Store step overwrote the memory with empty or partial content
  • Scoped identifiers do not match the expected user or conversation
The model is not using loaded memory
This usually means the AI Model prompt does not include a <memory> tag.
Check the following:
- The AI Model Instructions field includes <memory value="Your Memory Name" />
- The value matches the exact Memory Object name
- The Memory Load step runs before the AI Model step

Without the <memory> tag, the loaded memory is not injected into the model's
prompt context, even if the steps are connected on the canvas.
The wrong memory instance is returned
Verify:
  • Scoping is configured correctly
  • The execution request includes the expected identifiers
  • The same user and conversation values are used consistently across runs

Best Practices

  • Default to user-specific memory unless the data should truly be shared
  • Keep memory concise to reduce token usage and latency
  • Prefer storing summaries, facts, and state, not raw transcript dumps
  • Load before Store when you need to preserve existing content
  • Use Memory Steps for deterministic workflows
  • Use Memory Tools when the LLM should decide when memory is relevant
  • Always add a <memory> tag in the AI Model prompt when using Memory Load steps
For most agents, this is the safest starting pattern:
  1. Load memory at the beginning of the workflow
  2. Add <memory value="Your Memory Name" /> in the AI Model step’s Instructions field where you want the memory to appear
  3. Let the AI Model generate a response and/or updated state
  4. Store the updated memory after the model step
This pattern gives you predictable behavior while keeping memory easy to reason about.
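The full load-prompt-store loop can be sketched end to end. All helpers here are hypothetical stand-ins for the canvas steps, and the model call is faked so the control flow is the focus:

```python
def memory_load(store: dict, key: str) -> str:
    """Stand-in for a Memory Load step."""
    return store.get(key, "")

def memory_store(store: dict, key: str, value: str) -> None:
    """Stand-in for a Memory Store step."""
    store[key] = value

def ai_model(instructions: str, user_input: str) -> tuple[str, str]:
    """Fake AI Model step: returns (reply, updated_memory)."""
    return f"reply to: {user_input}", f"last topic: {user_input}"

def run(user_input: str, store: dict) -> str:
    memory = memory_load(store, "Chat History Summary")   # 1. load at the start
    # 2. the <memory> tag marks where these contents land in the real prompt
    instructions = "Use this memory from previous interactions:\n" + memory
    reply, updated = ai_model(instructions, user_input)   # 3. generate response/state
    memory_store(store, "Chat History Summary", updated)  # 4. store after the model step
    return reply

store: dict = {}
run("hello", store)
```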