This agent automatically monitors your warehouse for expensive queries, detects cost anomalies, and alerts your team before your bill explodes. It identifies runaway queries, inefficient patterns like full table scans or Cartesian joins, and sends detailed alerts with optimization recommendations to help you control costs proactively.

Simple Example

name: cost-monitor
description: Alert on expensive queries

triggers:
  - type: scheduled
    cron: "0 9 * * *"  # Daily at 9 AM
    timezone: "America/New_York"

tools:
  preset: safe  # Read-only

prompt: |
  Find yesterday's most expensive queries:
  
  SELECT 
    query_text,
    total_elapsed_time/1000 as seconds,
    bytes_scanned/1e12 as TB_scanned
  FROM snowflake.account_usage.query_history
  WHERE start_time >= DATEADD('day', -1, CURRENT_DATE())
  ORDER BY bytes_scanned DESC
  LIMIT 10
  
  Send Slack report to #data-costs with top 10.
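The unit conversions in the query above are worth sanity-checking: Snowflake's `query_history` reports `total_elapsed_time` in milliseconds and `bytes_scanned` in bytes. A minimal Python sketch of the same arithmetic (the function name `to_report_row` is illustrative, not part of any runtime):

```python
def to_report_row(query_text: str, total_elapsed_time_ms: int, bytes_scanned: int) -> dict:
    """Mirror the SELECT expressions: milliseconds -> seconds, bytes -> decimal TB."""
    return {
        "query_text": query_text,
        "seconds": total_elapsed_time_ms / 1000,
        "tb_scanned": bytes_scanned / 1e12,
    }

# A 45-second query that scanned 2.5 TB:
row = to_report_row("SELECT * FROM events", 45_000, 2_500_000_000_000)
print(row["seconds"], row["tb_scanned"])  # 45.0 2.5
```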

More Robust Example

Production-ready with anomaly detection and real-time monitoring:
name: warehouse-cost-monitor
description: Monitor and prevent warehouse cost explosions

triggers:
  # Daily report
  - type: scheduled
    cron: "0 9 * * *"
    timezone: "America/New_York"
  
  # Real-time monitoring (every 15 min during business hours)
  - type: scheduled
    cron: "*/15 9-17 * * MON-FRI"
    timezone: "America/New_York"

tools:
  preset: safe

context:
  # Cost thresholds
  thresholds:
    query_cost_warning: 10  # $10 per query
    query_cost_critical: 50  # $50 per query
    daily_budget: 500  # $500/day
    tb_scanned_warning: 1  # 1 TB
    
  # Credit cost (Snowflake example)
  credit_cost_usd: 2  # $2 per credit

prompt: |
  Monitor warehouse costs and detect anomalies.
  
  ## 1. Get recent query costs
  
  ```sql
  SELECT 
    query_id,
    query_text,
    user_name,
    warehouse_name,
    start_time,
    total_elapsed_time/1000 as seconds,
    bytes_scanned/1e12 as tb_scanned,
    credits_used_cloud_services,
    (credits_used_cloud_services * 2) as estimated_cost_usd  -- $2 per credit
  FROM snowflake.account_usage.query_history
  WHERE start_time >= DATEADD('hour', -1, CURRENT_TIMESTAMP())
  ORDER BY estimated_cost_usd DESC
  LIMIT 100
  ```

  ## 2. Identify expensive queries

  For each query, classify:
  
  CRITICAL (> $50):
    • Single query cost > $50
    • Immediate action needed
  
  WARNING (> $10):
    • Single query cost > $10
    • Should optimize
  
  INEFFICIENT (large scans):
    • Scanned > 1 TB
    • Might have optimization opportunities

  ## 3. Detect patterns

  Look for:
  
  a) Runaway queries:
    • Same query running repeatedly
    • Check: GROUP BY query_text, COUNT(*)
    • If > 100 times/hour: likely broken
  
  b) Full table scans:
    • No WHERE clause on large tables
    • Pattern: SELECT * FROM large_table
  
  c) Cartesian joins:
    • JOIN without ON condition
    • Result: massive row explosion
  
  d) Dashboards gone wild:
    • User queries > 500 times/hour
    • Likely auto-refresh set too aggressively

  ## 4. Alert based on severity

  For CRITICAL issues, send a Slack alert to #data-incidents:
  
  "🚨 EXPENSIVE QUERY ALERT
  Cost: $[cost] ([credits] credits)
  User: [user]
  Warehouse: [warehouse]
  Duration: [seconds]s
  Data scanned: [tb_scanned] TB
  
  Query:
  [query_text, truncated to 500 chars]
  
  Action required:
    1. Check if the query is necessary
    2. Consider killing it: SELECT SYSTEM$CANCEL_QUERY('[query_id]')
    3. Optimize the query (add filters, use materialized views)
    4. Contact user: [user]
  
  Query ID: [query_id]"
  
  For WARNING issues: send to #data-costs with optimization suggestions.
  
  For INEFFICIENT queries: log for weekly review; don't spam alerts.

  ## 5. Daily cost report (9 AM trigger)

  Calculate yesterday's totals:
  
  ```sql
  SELECT 
    DATE(start_time) as date,
    COUNT(*) as total_queries,
    SUM(credits_used_cloud_services) as total_credits,
    SUM(credits_used_cloud_services * 2) as total_cost_usd,  -- $2 per credit
    SUM(bytes_scanned)/1e12 as total_tb_scanned
  FROM snowflake.account_usage.query_history
  WHERE start_time >= DATEADD('day', -1, CURRENT_DATE())
    AND start_time < CURRENT_DATE()
  GROUP BY DATE(start_time)
  ```
  Send Slack report to #data-costs:
  
  "📊 Daily Cost Report - [date]
  
  💰 Total cost: $[total_cost] ([total_credits] credits)
  Budget: $[daily_budget]
  Status: [if over budget: 🔴 OVER BUDGET, else: ✓ Within budget]
  
  📈 Usage:
    • Queries: [total_queries]
    • Data scanned: [total_tb_scanned] TB
    • Avg cost/query: $[avg_cost]
  
  🔥 Top 5 expensive queries:
  [for each expensive query]
  [position]. $[cost] - [user] - [truncated_query]
  
  🚀 Top users by cost:
  [for each top user]
  [user]: $[cost]
  
  💡 Optimization opportunities:
    • [count_full_scans] queries doing full table scans
    • [count_no_limits] queries without LIMIT clauses
    • [count_repeated] queries that could be cached"

  ## 6. Proactive optimization suggestions

  For common patterns, suggest fixes:
  
  Full table scan detected:
  "Your query scans 5 TB but returns 10 rows. Consider adding a WHERE clause on a date column.
  Current: SELECT * FROM events
  Optimized: SELECT * FROM events WHERE event_date >= DATEADD('day', -7, CURRENT_DATE())"
  
  Dashboard overuse:
  "User dashboard_viewer ran the same query 500 times today. Consider:
    • Increasing the refresh interval (5 min → 30 min)
    • Materializing results
    • Using an incremental model"
Compare to last week:
  • Cost up/down %
  • New expensive queries
  • Users with cost increases
Alert if week-over-week cost increase > 20%.
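The alerting rules in the prompt above boil down to a few comparisons against the `context.thresholds` block and the 20% week-over-week rule. A minimal Python sketch, assuming the example values from the config ($2/credit, $10 warning, $50 critical, 1 TB scan); the function names are illustrative, not part of the agent runtime:

```python
CREDIT_COST_USD = 2  # mirrors context.credit_cost_usd in the example config


def estimated_cost_usd(credits_used: float) -> float:
    """Dollar estimate used throughout the prompt: credits x $/credit."""
    return credits_used * CREDIT_COST_USD


def classify_query(cost_usd: float, tb_scanned: float) -> str:
    """Map one query to the severity labels from step 2."""
    if cost_usd > 50:          # query_cost_critical
        return "CRITICAL"
    if cost_usd > 10:          # query_cost_warning
        return "WARNING"
    if tb_scanned > 1:         # tb_scanned_warning
        return "INEFFICIENT"
    return "OK"


def week_over_week_alert(this_week_usd: float, last_week_usd: float,
                         threshold: float = 0.20) -> bool:
    """True when cost grew more than `threshold` (20%) versus last week."""
    if last_week_usd <= 0:
        return this_week_usd > 0
    return (this_week_usd - last_week_usd) / last_week_usd > threshold


print(classify_query(estimated_cost_usd(30), 3))   # CRITICAL (30 credits -> $60)
print(week_over_week_alert(600, 480))              # True (+25%)
```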