name: data_quality_handler
description: Investigate and respond to data quality alerts
prompt: |
  A data quality alert has been triggered.

  ## Alert Details

  **Monitor**: {{monitor.name}}
  **Type**: {{monitor.type}}
  **Severity**: {{run.severity}}
  **Reason**: {{run.reason}}

  ## 1. Understand the issue
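
  {{!--
    Exactly one of the three branches below renders, keyed on monitor.type.
    The (eq ...) subexpression assumes an `eq` comparison helper is
    registered in the rendering environment; it is not a Handlebars
    built-in. Block comments like this one are stripped at render time
    and never appear in the final prompt.
  --}}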
  {{#if (eq monitor.type "schema_drift")}}
  Schema changes detected:
  {{#each details.changes.added}}
  - ADDED: {{this.path}} ({{this.type}})
  {{/each}}
  {{#each details.changes.deleted}}
  - DELETED: {{this.path}}
  {{/each}}
  {{#each details.changes.modified}}
  - MODIFIED: {{this.path}} ({{this.before}} → {{this.after}})
  {{/each}}
  {{/if}}
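
  {{!--
    Example rendering of the branch above, with hypothetical values:

      Schema changes detected:
      - ADDED: orders.discount_code (string)
      - DELETED: orders.legacy_flag
      - MODIFIED: orders.total (integer → float)
  --}}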

  {{#if (eq monitor.type "freshness")}}
  Data staleness detected:
  - Table: {{details.table}}
  - Last update: {{details.last_updated}}
  - Expected: within {{details.threshold}}
  - Actual age: {{details.actual_age}}
  {{/if}}

  {{#if (eq monitor.type "volume_anomaly")}}
  Volume anomaly detected:
  - Table: {{details.table}}
  - Expected rows: {{details.expected_count}}
  - Actual rows: {{details.actual_count}}
  - Change: {{details.percent_change}}%
  {{/if}}
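
  {{!--
    Note: the triggers section also fires this handler on json_drift,
    which has no branch here; those runs render only the generic
    Alert Details section above.
  --}}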

  ## 2. Determine if expected

  Check whether this change is expected:
  - Look for recent PRs or deployments
  - Check for scheduled migrations
  - Review recent pipeline runs

  ## 3. Assess downstream impact

  If the change is unexpected:
  - Search for models that depend on the affected table
  - Identify dashboards or reports that may be affected
  - Check for scheduled jobs that consume this data

  ## 4. Take action based on severity

  **HIGH severity:**
  - Send Slack alert to #data-incidents immediately
  - Create GitHub issue for tracking
  - Tag on-call engineer if outside business hours

  **MEDIUM severity:**
  - Send Slack alert to #data-pipeline
  - Add to daily standup agenda

  ## 5. Generate summary

  Post to Slack with:
  - What was detected
  - Whether it appears expected or unexpected
  - Downstream impact assessment
  - Recommended next steps
triggers:
  - event: monitor
    types: ['schema_drift', 'freshness', 'volume_anomaly', 'json_drift']
    severity: ['medium', 'high']
tools:
  include:
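    # NOTE: the prompt above also creates a GitHub issue and searches for
    # downstream models; tools for those steps (names not known here) would
    # need to be added to this list.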
    - slack_tool
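
# A sketch of the monitor event payload shape the prompt template assumes,
# inferred from the fields it references. Keys come from the template;
# every value below is hypothetical.
#
# monitor:
#   name: orders_schema_watch
#   type: schema_drift            # or freshness / volume_anomaly / json_drift
# run:
#   severity: high                # matches the trigger filter above
#   reason: "2 columns added, 1 modified"
# details:
#   changes:                      # schema_drift only
#     added:
#       - { path: orders.discount_code, type: string }
#     deleted:
#       - { path: orders.legacy_flag }
#     modified:
#       - { path: orders.total, before: integer, after: float }
#   table: orders                 # freshness / volume_anomaly
#   last_updated: 2024-05-01T02:00:00Z    # freshness
#   threshold: 6h                 # freshness
#   actual_age: 14h               # freshness
#   expected_count: 120000        # volume_anomaly
#   actual_count: 48000           # volume_anomaly
#   percent_change: -60           # volume_anomaly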