Agents don’t just analyze and observe—they take action. From creating pull requests and posting comments to sending Slack notifications and setting status checks, agents can perform a wide range of operations based on what they discover. This guide covers all available actions, notification strategies, and best practices for reliable automation.
Agents determine which actions to take based on your prompt instructions and the tools available to them. You describe desired outcomes in natural language, and the agent selects and executes the appropriate tools to accomplish those goals.
Think of actions as the “then” in “if-then” logic. Your prompt defines the conditions and the corresponding actions.
```yaml
prompt: |
  Review this PR for breaking changes.

  If breaking changes found:
  - Leave a detailed PR comment
  - Set status check to failure
  - Send Slack alert to #data-alerts

  If no breaking changes:
  - Set status check to success
  - Post brief confirmation comment
```
The agent evaluates the condition, determines the appropriate branch, and executes the specified actions.
```yaml
prompt: |
  If code quality issues found, leave a comment with:
  - Summary of issues by severity
  - Specific file and line numbers
  - Suggested fixes with code examples
  - Links to style guide sections
```
Comment formatting:

- Supports full Markdown formatting
- Can include code blocks, tables, and lists
- Supports collapsible sections
- Can mention users or teams
Use structured formatting for readability. Group related issues and provide actionable feedback.
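As a minimal sketch of what "structured formatting" can look like in practice, the hypothetical helper below renders an issue list into a Markdown comment with severity groupings and a collapsible details section. The issue-dict schema (`severity`, `file`, `line`, `message`, `fix`) is illustrative, not part of any agent API.

```python
def build_pr_comment(issues):
    """Render a structured Markdown PR comment, grouping issues by severity.

    `issues` is a list of dicts with hypothetical keys:
    severity, file, line, message, fix.
    """
    if not issues:
        return "No issues detected."
    lines = ["## Review Results", ""]
    for severity in ("critical", "warning", "info"):
        group = [i for i in issues if i["severity"] == severity]
        if not group:
            continue
        lines.append(f"### {severity.title()}")
        for i in group:
            lines.append(f"- `{i['file']}:{i['line']}`: {i['message']}")
        lines.append("")
    # Collapsible section keeps long detail out of the main comment body
    lines.append("<details><summary>Suggested fixes</summary>")
    lines.append("")
    for i in issues:
        lines.append(f"- `{i['file']}:{i['line']}`: {i['fix']}")
    lines.append("")
    lines.append("</details>")
    return "\n".join(lines)
```

Grouping by severity first means reviewers see blocking problems before style nits, and the `<details>` block keeps verbose fix suggestions from dominating the PR thread.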
Set commit status checks that can block PR merging.
```yaml
prompt: |
  Run compliance checks on mart models.

  If violations found:
  - Set status to "failure"
  - Context: "Compliance Check"
  - Description: "Found {count} violations. Review required."

  If no violations:
  - Set status to "success"
  - Context: "Compliance Check"
  - Description: "All compliance checks passed"
```
Status levels:

- `success` - Check passed, allow merge
- `failure` - Check failed, block merge (if required)
- `pending` - Check in progress
- `error` - Check encountered an error
Status checks integrate with branch protection rules to enforce quality gates.
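The four status levels above correspond to the `state` values accepted by GitHub's commit status API (`POST /repos/{owner}/{repo}/statuses/{sha}`). As a sketch, a payload builder under the hood might validate the state and respect GitHub's 140-character description limit; the function name is hypothetical.

```python
# States accepted by GitHub's commit status endpoint
VALID_STATES = {"success", "failure", "pending", "error"}

def build_status_payload(state, context, description):
    """Build the JSON payload for a commit status check."""
    if state not in VALID_STATES:
        raise ValueError(f"invalid state: {state!r}")
    # GitHub rejects descriptions longer than 140 characters,
    # so truncate client-side as a defensive measure
    return {
        "state": state,
        "context": context,
        "description": description[:140],
    }
```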
Reply to existing Slack threads for context continuity.
```yaml
prompt: |
  This is a follow-up to yesterday's drift alert.

  Send a threaded response to the original message with:
  - Current status update
  - Whether issue is resolved
  - Any remaining actions needed
```
Useful for ongoing investigations or multi-day processes.
Modify existing files with search-and-replace operations.
```yaml
prompt: |
  Update model YAML files with column descriptions.

  For each model:
  - Add missing column descriptions
  - Update stale descriptions
  - Preserve existing documentation
```
```yaml
prompt: |
  Check for breaking changes.

  If breaking changes found:
  - Comment with details
  - Set status to failure
  - Send Slack alert

  If no breaking changes:
  - Set status to success
  - Post brief confirmation
```
### Multi-branch conditionals
Handle multiple scenarios with different actions:
```yaml
prompt: |
  Analyze schema changes.

  If new columns added:
  - Update staging models
  - Create PR
  - Notify #data-engineering

  If columns removed:
  - Do NOT update automatically (too risky)
  - Comment with impact analysis
  - Request manual review
  - Notify #data-leads

  If columns renamed:
  - Create draft PR with suggested changes
  - Comment with downstream impact
  - Request approval before auto-merge

  If no changes:
  - Do nothing
```
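The branching above can be sketched as a plain dispatch function: given the three diff categories, return the planned action list. The action names and function signature are illustrative only, not an agent API.

```python
def plan_schema_actions(added, removed, renamed):
    """Map schema-diff categories to the action branches described above.

    Each argument is a list of column names in that category;
    returns a flat list of hypothetical action identifiers.
    """
    actions = []
    if added:
        actions += ["update_staging_models", "create_pr",
                    "notify:#data-engineering"]
    if removed:
        # Removals are too risky to auto-fix: surface impact and escalate
        actions += ["comment_impact_analysis", "request_manual_review",
                    "notify:#data-leads"]
    if renamed:
        actions += ["create_draft_pr", "comment_downstream_impact",
                    "request_approval"]
    return actions  # empty list means "do nothing"
```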
### Threshold-based actions
Take different actions based on severity or magnitude:
```yaml
prompt: |
  Measure documentation coverage.

  If coverage < 50%:
  - Create high-priority issue
  - Send Slack alert to #data-leads
  - Block PR merge

  If coverage 50-80%:
  - Leave PR comment suggesting improvements
  - Don't block merge

  If coverage > 80%:
  - Post positive comment
  - Set status to success
```
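The threshold tiers above reduce to a small decision function. This is an illustrative sketch (the action names are made up, and the inclusive upper bound at 80% is an assumption, since the prompt's "50-80%" leaves the boundary ambiguous):

```python
def coverage_actions(coverage_pct):
    """Choose actions by the documentation-coverage thresholds above."""
    if coverage_pct < 50:
        return {"block_merge": True,
                "actions": ["create_high_priority_issue",
                            "slack:#data-leads"]}
    if coverage_pct <= 80:  # assumed inclusive; prompt says "50-80%"
        return {"block_merge": False,
                "actions": ["comment_suggest_improvements"]}
    return {"block_merge": False,
            "actions": ["comment_positive", "status:success"]}
```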
```yaml
prompt: |
  Update documentation workflow:
  1. Profile all changed models using retrieve_metadata
  2. Generate documentation based on profiles
  3. Update YAML files with new descriptions
  4. Run dbt parse to validate changes
  5. If validation passes:
     - Commit changes
     - Create PR
     - Post summary comment
  6. If validation fails:
     - Revert changes
     - Comment with error details
```
### Progressive enhancement
Build up changes through multiple stages:
```yaml
prompt: |
  Improve model documentation progressively:

  Stage 1: Add basic descriptions
  - Infer purpose from model name and location
  - Add simple descriptions to undocumented models

  Stage 2: Add column details
  - Profile each column
  - Add data types and null rates
  - Note common values for categoricals

  Stage 3: Add relationships
  - Identify joins and dependencies
  - Document relationships to other models
  - Add lineage information

  After all stages:
  - Run comprehensive validation
  - Create PR with all improvements
```
Pause execution for human review before sensitive actions.
```yaml
restrictions:
  approval:
    require_approval_for:
      - pr_creation
      - critical_file_modification
    approvers:
      teams: ["data-platform-leads"]
    approval_timeout_hours: 24
    notification_channel: "#agent-approvals"

prompt: |
  If breaking changes detected:
  1. Analyze impact on downstream dependencies
  2. Generate remediation plan
  3. Request approval to create PR with fixes
  4. Wait for approval (max 24 hours)
  5. If approved: create and merge PR
  6. If denied: comment with explanation and close
  7. If timeout: escalate to #data-platform-leads
```
Approval process:

1. Agent pauses at the approval point
2. Notification sent to approvers
3. Approvers review in the web interface
4. Approval or denial recorded
5. Agent resumes or aborts accordingly
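The resume/abort/escalate logic at the end of an approval gate can be sketched as a small resolver. This is an assumption about how such a gate might behave internally, not a documented API; the function and return values are hypothetical.

```python
def resolve_approval(decision, elapsed_hours, timeout_hours=24):
    """Resolve an approval gate.

    decision: "approved", "denied", or None (no decision yet).
    Returns one of "resume", "abort", "escalate", or "wait".
    """
    if decision == "approved":
        return "resume"
    if decision == "denied":
        return "abort"
    # No decision yet: escalate once the timeout window has elapsed
    return "escalate" if elapsed_hours >= timeout_hours else "wait"
```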
Approvers see full context: what changed, why, and what action is proposed.
```yaml
prompt: |
  Monitor for critical test failures.

  If severity = "critical":
  - Send Slack to #data-alerts mentioning @oncall
  - Set PR status to failure
  - Block merge
  - Create high-priority issue
  - Send email to [email protected]

  Include in notification:
  - Which tests failed
  - Failure rate and trend
  - Last successful run
  - Likely causes
  - Runbook link
```
```yaml
prompt: |
  Update documentation for changed models.

  If updates successful:
  - Create PR (no notifications)
  - Use default reviewers
  - Auto-merge if checks pass

  If updates fail:
  - Comment on PR with failure details
  - Send Slack alert to #data-platform
  - Don't block merge (documentation is nice-to-have)
  - Create issue for manual follow-up
```
Best for:

- Background maintenance
- Non-critical automation
- High-frequency agents
- Self-healing workflows
This reduces notification noise while ensuring visibility when human intervention is needed.
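The two notification strategies above amount to a routing rule: loud channels for critical events, quiet follow-ups for routine failures, and silence on success. A minimal sketch, with hypothetical target identifiers:

```python
def notification_targets(severity, failed):
    """Route notifications so noisy channels fire only when humans must act."""
    if severity == "critical":
        # Page people: on-call mention, email list, high-priority issue
        return ["slack:#data-alerts@oncall",
                "email:[email protected]",
                "issue:high"]
    if failed:
        # Non-critical failure: alert the platform channel, file a follow-up
        return ["slack:#data-platform", "issue:follow-up"]
    # Success in background automation: stay quiet
    return []
```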
Don’t assume the agent knows your preferences.

**Vague:**
```yaml
prompt: Let me know if there are issues
```
**Specific:**
```yaml
prompt: |
  If issues found:
  - Comment on PR with file:line references
  - Include code examples of correct syntax
  - Link to relevant style guide section
  - Suggest auto-fixable changes
```
### Specify formatting and structure
Tell agents how to present information:
```yaml
prompt: |
  Create a summary comment with this structure:

  ## Analysis Results

  **Status:** {{ status_emoji }} {{ status_text }}

  ### Metrics
  | Metric | Value | Change |
  |--------|-------|--------|
  {{ metrics_table }}

  ### Issues Found
  {{ if issues }}
  1. **{{ issue_name }}** (Severity: {{ severity }})
     - Location: `{{ file }}:{{ line }}`
     - Description: {{ description }}
     - Fix: {{ suggested_fix }}
  {{ end for }}
  {{ else }}
  No issues detected ✓
  {{ end if }}

  <details>
  <summary>Full Details</summary>
  {{ detailed_analysis }}
  </details>
```
### Handle all possible outcomes
Cover success, failure, and edge cases:
```yaml
prompt: |
  Validate model changes.

  If validation passes:
  - Set status to success
  - Post confirmation comment
  - No Slack notification

  If validation fails:
  - Set status to failure with specific errors
  - Comment with failed checks and fixes
  - Send Slack to #data-platform

  If unable to validate (missing dependencies, etc.):
  - Set status to "error"
  - Comment explaining what's missing
  - Don't block merge (validation issue, not code issue)
  - Create issue for infrastructure team
```
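The three outcomes above (pass, fail, unable to validate) map cleanly onto the status levels covered earlier. A hypothetical handler, sketched only to show that every branch gets an explicit status and action set:

```python
def validation_outcome(passed, error=None):
    """Map the three validation outcomes to status and actions.

    error is set when validation could not run at all
    (e.g. missing dependencies) -- an infra problem, not a code problem.
    """
    if error is not None:
        return {"status": "error", "block_merge": False,
                "actions": ["comment_missing_deps", "create_infra_issue"]}
    if passed:
        return {"status": "success", "block_merge": False,
                "actions": ["comment_confirmation"]}
    return {"status": "failure", "block_merge": True,
            "actions": ["comment_failures", "slack:#data-platform"]}
```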