Agent Recipe 1 - Daily Pacing Monitor
Task
Detect pacing and KPI issues early during active campaigns.
When to use
Daily (or twice per day) for active campaigns.
Inputs
- Campaign table for today / yesterday / last 7 days:
  - Campaign name
  - Spend
  - Impressions
  - Clicks
  - Installs/Conversions
  - CPA/CPI
  - ROAS (if available)
  - Country
  - Channel
- KPI targets:
  - Target CPA/ROAS
  - Daily budget
- Alert thresholds:
  - Overspend >15%
  - CPA drift >20%
  - Volume drop >25%
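The alert thresholds above can be applied mechanically before the agent (or a human) ever reads the table. A minimal sketch, assuming illustrative field names and the threshold values listed here:

```python
# Pre-check: flag campaigns that breach the alert thresholds.
# Field names and example numbers are assumptions, not a fixed schema.
THRESHOLDS = {"overspend": 0.15, "cpa_drift": 0.20, "volume_drop": 0.25}

def pacing_alerts(row):
    """row: dict with spend, daily_budget, cpa, target_cpa,
    installs_today, installs_7d_avg. Returns a list of breached alerts."""
    alerts = []
    if row["spend"] > row["daily_budget"] * (1 + THRESHOLDS["overspend"]):
        alerts.append("overspend >15%")
    if row["cpa"] > row["target_cpa"] * (1 + THRESHOLDS["cpa_drift"]):
        alerts.append("CPA drift >20%")
    if row["installs_today"] < row["installs_7d_avg"] * (1 - THRESHOLDS["volume_drop"]):
        alerts.append("volume drop >25%")
    return alerts

campaign = {"spend": 1200, "daily_budget": 1000, "cpa": 30,
            "target_cpa": 24, "installs_today": 40, "installs_7d_avg": 60}
print(pacing_alerts(campaign))
# → ['overspend >15%', 'CPA drift >20%', 'volume drop >25%']
```

Feeding only the flagged rows to the agent keeps the daily review short and focused.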
Expected output
- Executive summary
- Priority issues
- Opportunities
- Recommended actions with reasons
- Data warnings
QA check
- Is attribution lag affecting results?
- Is volume large enough to trust the signal?
- Is the agent using the right KPI (not vanity metrics)?
Next action
Approve 1-2 actions and log them in the daily optimization notes.
Prompt (copy-paste)
Agent Recipe 2 - Creative Research Agent
Task
Generate testable creative angles for a new UA test sprint.
When to use
Before launching new creative tests or when performance starts to slow.
Inputs
- Product/app description
- Audience profile
- GEOs
- Campaign goal (install / trial / purchase / deposit)
- Top-performing creatives (if available)
- Weak creatives (if available)
- Notes: what worked / failed before
Expected output
- 12 creative angles
- 4 angle buckets
- Hook lines
- Suggested formats
- Risk notes
QA check
- Are the ideas specific to the product and audience?
- Are they varied (not the same angle repeated)?
- Any direct copying or generic filler?
Next action
Select top 3-5 angles and pass them into creative production.
Prompt (copy-paste)
Agent Recipe 3 - Creative Test Planner Agent
Task
Turn creative ideas into a structured test matrix.
When to use
Once you have angles/ideas but need a clean testing plan.
Inputs
- 10-12 creative ideas
- Budget range
- Channel (Meta / TikTok / etc.)
- KPI target
- Test duration
- Constraints (creative volume, team bandwidth, launch dates)
Expected output
- Test matrix
- Decision criteria (stop / iterate / scale)
- Priorities and timeline
- Top risks
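The test matrix itself can be generated mechanically once the variables are fixed. A minimal sketch, assuming an idea × format grid with an even budget split (cell structure and numbers are illustrative):

```python
from itertools import product

def build_test_matrix(ideas, formats, total_budget):
    """Cross each creative idea with each format into test cells and
    split the budget evenly. One variable per axis keeps results readable."""
    cells = list(product(ideas, formats))
    per_cell = round(total_budget / len(cells), 2)
    return [
        {"cell_id": i + 1, "idea": idea, "format": fmt, "budget": per_cell}
        for i, (idea, fmt) in enumerate(cells)
    ]

matrix = build_test_matrix(
    ideas=["social proof", "problem/solution", "UGC testimonial"],
    formats=["9:16 video", "static image"],
    total_budget=3000,
)
print(len(matrix), matrix[0])
# → 6 cells at 500.0 each
```

Keeping the matrix to two axes makes the "too many variables in one test" QA question easy to answer.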
QA check
- Are too many variables mixed in one test?
- Are success criteria clearly defined?
- Is the KPI realistic for the channel and budget?
Next action
Approve the sprint test plan and assign owners.
Prompt (copy-paste)
Agent Recipe 4 - Trafficking QA Agent
Task
Check campaign setup before launch (naming, UTM, tracking, event mapping).
When to use
Before every launch, especially during scale when setup errors get expensive.
Inputs
- Naming convention rules
- UTM template
- Tracking links
- Event mapping
- Campaign launch list
- Channel requirements (if needed)
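The UTM portion of this QA pass is easy to script as a deterministic pre-check before the agent reviews the rest. A minimal sketch, assuming a three-parameter UTM template (your required list will differ):

```python
from urllib.parse import urlparse, parse_qs

# Assumed template; replace with your own required parameters.
REQUIRED_UTMS = ["utm_source", "utm_medium", "utm_campaign"]

def qa_tracking_link(url):
    """Return the required UTM parameters missing from a tracking link."""
    params = parse_qs(urlparse(url).query)
    return [p for p in REQUIRED_UTMS if p not in params]

links = [
    "https://example.com/lp?utm_source=meta&utm_medium=paid&utm_campaign=us_launch",
    "https://example.com/lp?utm_source=tiktok",
]
for link in links:
    missing = qa_tracking_link(link)
    print("OK" if not missing else f"MISSING: {missing}")
# → OK
# → MISSING: ['utm_medium', 'utm_campaign']
```

Anything this check flags goes straight onto the critical fix list; the agent then handles the fuzzier checks (naming conventions, event mapping).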
Expected output
- QA summary
- Critical / medium / low issues
- Fix list
- Launch readiness status
QA check
- Are event names and mappings correct?
- Are all UTMs and tracking params present?
- Any legacy templates or broken links?
Next action
Fix critical issues before launch approval.
Prompt (copy-paste)
Agent Recipe 5 - Weekly Reporting Summary Agent
Task
Create a clear, decision-ready weekly performance summary.
When to use
Weekly team reporting, leadership updates, client-facing summaries.
Inputs
- MMP export
- Ad platform exports
- KPI definitions
- Current week + previous week
- Business objective (efficiency / scale / testing)
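The week-over-week deltas behind the snapshot can be computed before the agent writes a word, so the summary is grounded in numbers rather than the model's arithmetic. A minimal sketch with illustrative figures:

```python
def wow_delta(current, previous):
    """Week-over-week % change; None when the baseline is zero."""
    if previous == 0:
        return None
    return round((current - previous) / previous * 100, 1)

this_week = {"spend": 10500, "installs": 4200, "cpa": 2.50}
last_week = {"spend": 9800, "installs": 4600, "cpa": 2.13}
snapshot = {k: wow_delta(this_week[k], last_week[k]) for k in this_week}
print(snapshot)
# → {'spend': 7.1, 'installs': -8.7, 'cpa': 17.4}
```

Passing precomputed deltas also makes cross-source conflicts (the first QA question below) visible before the summary is drafted.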
Expected output
- Weekly snapshot
- Top wins
- Top issues
- Opportunities
- Recommended next steps
QA check
- Are there conflicts across sources?
- Is the summary actionable (not just descriptive)?
- Are the KPI definitions consistent?
Next action
Send to team/leadership and use it in the next optimization meeting.
Prompt (copy-paste)
Agent Recipe 6 - Fraud / Traffic Quality Triage Agent
Task
Flag suspicious traffic-quality patterns for faster investigation.
When to use
When testing new sources, scaling, or investigating abnormal performance.
Inputs
- Source-level performance data
- CTR / CVR / CTIT (click-to-install time)
- Geo/placement breakdown
- Conversion quality signals
- Benchmarks (if available)
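One common CTIT pattern worth pre-flagging is a large share of installs landing seconds after the click, which often points to click injection. A minimal sketch; the 10-second cutoff, 15% share limit, and minimum volume are illustrative assumptions, not industry standards:

```python
import random

def flag_ctit(source, ctit_seconds, min_installs=50, short_share_limit=0.15):
    """Flag a source when an unusually large share of installs convert
    within 10s of the click. Guard against low-volume noise first."""
    if len(ctit_seconds) < min_installs:
        return (source, "insufficient volume")
    short = sum(1 for t in ctit_seconds if t < 10)
    share = short / len(ctit_seconds)
    label = "suspicious" if share > short_share_limit else "normal"
    return (source, f"{label} ({share:.0%} sub-10s CTIT)")

random.seed(7)
normal = [random.uniform(30, 600) for _ in range(100)]
injected = [random.uniform(1, 8) for _ in range(30)] + normal[:70]
print(flag_ctit("network_a", normal))    # normal pattern
print(flag_ctit("network_b", injected))  # suspicious pattern
```

The volume guard enforces the second QA question below: a source with a handful of installs should never be labeled, only queued for more data.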
Expected output
- Suspected issues
- Evidence by source
- Confidence labels
- Recommended next steps
- Missing data list
QA check
- Is there enough evidence?
- Is low-volume noise being overinterpreted?
- Are conclusions benchmarked against normal patterns?
Next action
Escalate to analyst / AM for validation and action.
Prompt (copy-paste)
Agent Recipe 7 - Budget Reallocation Suggestion Agent
Task
Suggest conservative budget shifts based on performance.
When to use
During optimization cycles when you need faster budget recommendations.
Inputs
- Campaign performance data
- Budget limits
- KPI targets
- Business priority (efficiency vs volume)
- Constraints (minimum spends, partner commitments)
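"Conservative" can be encoded directly: cap any single shift at a small fraction of the source campaign's budget. A minimal sketch, assuming CPA-vs-target as the ranking signal and a 10% cap (both are illustrative choices):

```python
def suggest_shift(campaigns, max_shift_pct=0.10):
    """Suggest moving up to max_shift_pct of the worst performer's budget
    to the best performer, ranked by CPA relative to target."""
    scored = sorted(campaigns, key=lambda c: c["cpa"] / c["target_cpa"])
    best, worst = scored[0], scored[-1]
    shift = round(worst["budget"] * max_shift_pct, 2)
    return {
        "from": worst["name"], "to": best["name"], "amount": shift,
        "reason": f"{worst['name']} CPA {worst['cpa']} vs target {worst['target_cpa']}",
    }

campaigns = [
    {"name": "us_meta", "budget": 500, "cpa": 18, "target_cpa": 20},
    {"name": "us_tiktok", "budget": 400, "cpa": 31, "target_cpa": 20},
]
print(suggest_shift(campaigns))
# → shift 40.0 from us_tiktok to us_meta
```

The cap is what makes "approve small shifts first, then re-check" practical: no single suggestion can swing a campaign's budget by more than the configured fraction.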
Expected output
- Reallocation suggestions
- Reasoning with metrics
- Expected impact
- Risks/trade-offs
- Validation checklist before action
QA check
- Is there enough volume to justify a change?
- Are downstream quality signals considered?
- Is seasonality or promo context missing?
Next action
Approve small shifts first, then re-check performance.
Prompt (copy-paste)
Agent Recipe 8 - Post-Test Insights Agent
Task
Turn test results into reusable learnings for the next sprint.
When to use
After each test cycle (creative tests, audience tests, offer tests).
Inputs
- Test matrix
- Results by test cell
- KPI outcomes
- Team notes (optional)
Expected output
- Outcome summary
- Key learnings
- Reusable patterns
- Next hypotheses
- Confidence notes
QA check
- Are conclusions too strong for the sample size?
- Are "winners" actually statistically meaningful enough for your process?
- Are next hypotheses practical?
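The "statistically meaningful" question has a cheap sanity check: a two-proportion z-test comparing conversion rates between test cells. A pure-stdlib sketch with illustrative cell numbers:

```python
from math import sqrt, erf

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is cell B's conversion rate
    genuinely different from cell A's, given these sample sizes?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Cell A: 80 installs / 2000 impressions; Cell B: 110 / 2000.
z, p = two_prop_z(conv_a=80, n_a=2000, conv_b=110, n_b=2000)
print(f"z={z:.2f}, p={p:.3f}")
```

A p-value that clears your team's bar supports calling a winner; one that doesn't supports the "conclusions too strong for the sample size" flag.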
Next action
Use the output in sprint planning and update your learnings library.
Prompt (copy-paste)
Starter Setup Checklist
Before using any AI agent, make sure these basics are in place.
1. KPI Rules
- Primary KPI is defined
- Thresholds for alerts/actions are set
- Evaluation window is clear
2. Data Rules
- Source of truth is defined
- Naming/taxonomy is standardized
- Reporting cadence is clear
3. Guardrails
- What AI can do is defined (monitor / summarize / suggest)
- What AI cannot do is defined (launch / pause / move budget)
- Approval owner is assigned
4. QA Process
- Human reviewer is assigned
- QA checklist exists
- Outputs are logged for review
Prompt Adaptation Guide
Use these placeholders to adapt prompts to your stack:
- [KPI] → CPA / ROAS / volume / retention proxy
- [Channel] → Meta / TikTok / Google / DSP / etc.
- [Geo] → country / region / tier
- [Attribution window] → 1d / 7d / etc.
- [Risk threshold] → CPA drift %, overspend %, volume drop %
- [Goal] → install / trial / purchase / deposit
Tip: Keep your prompts consistent across the team.
Small prompt differences create inconsistent outputs.
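One way to enforce prompt consistency is to keep a single shared template and fill the placeholders programmatically. A minimal sketch using the stdlib `string.Template` (the prompt text itself is an invented example, not one of the recipes' actual prompts):

```python
from string import Template

# Shared team template; the bracketed placeholders from the guide
# become Template fields so everyone fills them the same way.
PROMPT = Template(
    "Monitor $channel campaigns in $geo against a $kpi target. "
    "Flag anything beyond $risk_threshold and summarize for the $goal funnel."
)

filled = PROMPT.substitute(
    channel="TikTok", geo="US", kpi="CPA",
    risk_threshold="20% CPA drift", goal="install",
)
print(filled)
```

`substitute` raises `KeyError` on a missing field, which is exactly the behavior you want: an incompletely adapted prompt fails loudly instead of shipping.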
What Not to Automate Yet
Do not automate these without strong controls and mature QA:
• Final budget approvals
• Source blocking decisions
• Tracking changes
• Campaign launch/pause actions
• Any execution based on incomplete data
Final Takeaway
The fastest wins come from repeatable workflows, not full autonomy. Start with one of these first:
1. Daily Pacing Monitor
2. Trafficking QA Agent
3. Weekly Reporting Summary Agent
Deploy one workflow, validate it, then expand.
If your team is building these workflows around real campaign execution, especially targeted in-app advertising and performance optimization, BidMatrix can help adapt them to your UA setup. Email us at bd@bid-matrix.com.