Responsible AI Violations Feed
Monitoring Responsible AI Violations provides valuable insight into how your users handle data across the business and highlights potential areas for improvement. The Responsible AI Violations Feed delivers details about sensitive information shared through the Airia platform, including the finding type, the policy that was violated, and the connected project and agent.
Key Metrics in the Responsible AI Violations Feed
The Responsible AI Violations Feed provides a variety of critical metrics, allowing users to track and analyze activity:
- Policy: Indicates which policy was violated when the information was entered into Airia. Knowing which policy was breached helps assess how well users understand how to handle the related data within the system.
- Finding Type: Specifies the type of information used within the agent that violated the policy. This helps identify which kind of information appeared in the execution, which is useful for identifying data handling training requirements.
- Finding: Identifies the actual information that was used and surfaces where this appears in the data.
- Source: Shows the origin of the request or trigger that led to the violation, giving context to the agent's activity.
- Agent: Displays the specific agent that was used.
- Project: Displays the project name to identify which initiative or department the violation originated from.
- Confidence: Reflects how confident the platform is that it has correctly identified this type of data during the execution.
Responsible AI Violation Details
Users can select a row within the Responsible AI Violations feed to get further details on the violation. The details provide insight into the results, highlighting:
- Violation Details: Provides key information on the violation such as Policy, Type, Finding and Confidence score.
- Returned Chunks: Shows the returned chunks that the model used, with the violations highlighted in red text.
This information can be used as an important audit tool to review any Responsible AI Violations for data security management.
Filtering Options
To enhance the usefulness of the data, the Responsible AI Violations Feed allows for filtering based on:
- Date: Narrow the feed to specific timeframes to analyze violation trends over time or to track spikes in violation activity.
- Project: Filter the feed to focus on specific projects, providing a more detailed view of Responsible AI Violations within a certain context.
These filtering options allow for targeted analysis, helping you drill down into specific aspects of Responsible AI Violations and take corrective action where necessary.
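The same Date and Project filters can also be reproduced offline against an exported copy of the feed (the export option is described in the next section). The sketch below is a minimal example, assuming a CSV export whose columns are named after the metrics listed above; the file name and exact column headers are illustrative and may differ in your export.

```python
import pandas as pd

# Load an exported copy of the Responsible AI Violations Feed.
# The file name and column names below are assumptions for illustration;
# adjust them to match your actual export.
feed = pd.read_csv("responsible_ai_violations.csv", parse_dates=["Date"])

# Date filter: narrow the feed to a specific timeframe.
start, end = "2024-01-01", "2024-03-31"
in_range = feed[(feed["Date"] >= start) & (feed["Date"] <= end)]

# Project filter: focus on violations from a single project.
project_view = in_range[in_range["Project"] == "Customer Support"]

print(project_view[["Date", "Policy", "Finding Type", "Agent", "Confidence"]])
```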
Report Refresh and Export Features
The Responsible AI Violations Feed can be refreshed at any time to ensure that the latest data is always available, allowing for real-time monitoring and quick identification of new violations.
Additionally, the report can be exported to CSV for offline review or sharing. This feature is valuable for teams needing to analyze Responsible AI Violations data outside of the platform or for reporting purposes. The CSV export provides a straightforward way to track long-term trends or integrate the data with other tools for deeper analysis.
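As a sketch of the kind of offline trend analysis the CSV export enables, the example below counts violations per week for each policy and summarizes the most frequently violated policies and finding types. It assumes the same hypothetical file and column names as the previous sketch; adjust them to match the headers in your actual export.

```python
import pandas as pd

# Load the exported feed (hypothetical file and column names, as above).
feed = pd.read_csv("responsible_ai_violations.csv", parse_dates=["Date"])

# Count violations per week for each policy to surface long-term trends.
weekly_by_policy = (
    feed.groupby([pd.Grouper(key="Date", freq="W"), "Policy"])
        .size()
        .unstack(fill_value=0)
)
print(weekly_by_policy)

# Most frequently violated policies and finding types over the whole period.
print(feed["Policy"].value_counts())
print(feed["Finding Type"].value_counts())
```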