The Responsible AI Violations Feed helps you track when AI behavior violates internal policies or ethical guidelines. It provides visibility into the type of data involved, policy breaches, and the agents and projects associated with those violations—helping ensure AI is used responsibly across the platform.


Key Metrics in the Feed

Each row in the feed includes:

  • Policy – The specific policy that was violated.
  • Finding Type – The category of information or behavior that caused the violation.
  • Finding – The actual content or trigger that led to the violation.
  • Source – The origin of the request or activity that caused the issue.
  • Agent – The AI agent involved in the violation.
  • Project – The associated team, initiative, or workspace.
  • Confidence – How confident the system is that the finding is a valid violation.
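The row fields above can be pictured as a simple record type. The class below is an illustrative sketch only: the field names, example values, and the 0–1 confidence scale are assumptions, not an official schema.

```python
from dataclasses import dataclass

# Hypothetical model of one feed row; fields mirror the columns
# described above (Policy, Finding Type, Finding, Source, Agent,
# Project, Confidence). Not an official schema.
@dataclass
class ViolationRecord:
    policy: str          # the policy that was violated
    finding_type: str    # category of information or behavior
    finding: str         # the actual content or trigger
    source: str          # origin of the request or activity
    agent: str           # AI agent involved in the violation
    project: str         # associated team, initiative, or workspace
    confidence: float    # detection confidence (assumed 0.0-1.0)

# Example row, with made-up values for illustration
row = ViolationRecord(
    policy="PII Disclosure",
    finding_type="Email Address",
    finding="user@example.com",
    source="Chat API",
    agent="support-bot",
    project="Customer Success",
    confidence=0.92,
)
```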

Violation Details

Click a row to access detailed insights:

  • Violation Details – Includes policy, type, finding, and confidence score.
  • Returned Chunks – Shows what content was retrieved during the interaction, with violations clearly highlighted in red.

This view serves as an audit tool for verifying that Responsible AI standards are upheld across interactions.


Filter for Targeted Analysis

Use filters to refine the feed:

  • Date – Focus on violations within a specific timeframe.
  • Project – Narrow results by department, initiative, or workspace.
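The date and project filters above amount to narrowing the feed to rows that match a time window and a workspace. The function below is a minimal client-side sketch of that behavior, using dicts with assumed `date` and `project` keys; the platform presumably applies these filters itself.

```python
from datetime import date

# Hypothetical filter over feed rows represented as dicts.
# Key names ("date", "project") are assumptions for illustration.
def filter_feed(rows, start=None, end=None, project=None):
    """Keep rows inside [start, end] and matching the given project."""
    result = []
    for row in rows:
        if start is not None and row["date"] < start:
            continue
        if end is not None and row["date"] > end:
            continue
        if project is not None and row["project"] != project:
            continue
        result.append(row)
    return result

# Made-up sample feed
feed = [
    {"date": date(2024, 5, 1), "project": "Marketing", "policy": "PII Disclosure"},
    {"date": date(2024, 5, 9), "project": "Support", "policy": "Toxic Content"},
    {"date": date(2024, 6, 2), "project": "Support", "policy": "PII Disclosure"},
]

# Violations for the Support project during May 2024
may_support = filter_feed(
    feed, start=date(2024, 5, 1), end=date(2024, 5, 31), project="Support"
)
```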