Building an agent that runs reliably in production requires more than just connecting nodes. The practices below are distilled from the patterns that consistently produce the highest-quality agent outputs across WisdomAI. They cover the decisions that most affect output quality: how you write instructions, set trigger conditions, choose a schedule, and format the output. Apply them when building a new agent and when debugging one that isn’t behaving as expected.
The instruction is the most consequential part of any node. A vague instruction produces inconsistent results; a specific one produces reliable output. Every instruction should cover three things:
What data or entities to look at.
What conditions or thresholds matter.
What the output should include and how it should be formatted.
A complete instruction that covers all three:

"Fetch all issues labeled ‘urgent’ created in the last 14 days. Exclude issues with status ‘Closed’ or ‘Duplicate’. Group by assignee. For each assignee include: issue ID, title, creation date, and current status. Sort each group by creation date descending."

Vague: "Analyze the data for trends"
Specific: "Compare this period’s metrics against the prior period. Flag any category where the value changed by more than 20%. For flagged categories, note whether it is an increase or decrease and compute the exact percentage change. If no category exceeds the threshold, state that all metrics are stable."

Vague: "Search the web for competitor info"
Specific: "Search for recent press releases and news articles about [competitor] from the past 30 days. Focus on product launches, partnerships, and pricing changes. Organize findings by topic with source URLs."
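To see why the specific version is testable, the period-over-period comparison it describes can be sketched in ordinary code. This is an illustrative Python sketch, not product code (the agent evaluates the instruction text itself); the category names and values are hypothetical:

```python
def flag_changes(current, prior, threshold=0.20):
    """Flag categories whose value changed by more than the threshold vs the prior period."""
    flagged = {}
    for category, value in current.items():
        prev = prior.get(category)
        if not prev:
            continue  # no prior baseline, so no percentage change to compute
        change = (value - prev) / prev
        if abs(change) > threshold:
            direction = "increase" if change > 0 else "decrease"
            flagged[category] = (direction, round(change * 100, 1))
    return flagged

# Hypothetical data: "ads" grew 25%, "email" is stable at +2%.
print(flag_changes({"ads": 125, "email": 102}, {"ads": 100, "email": 100}))
# → {'ads': ('increase', 25.0)}
```

Every branch in the sketch corresponds to a sentence in the instruction, including the "state that all metrics are stable" case (an empty result). The vague version has no such mapping.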
Each node’s instruction must make sense on its own. Do not reference other nodes by name; describe the input data the node should expect instead: "Given the list of assignees from the incoming issues, fetch their last 10 resolved tickets." Tell each node what to produce, not how to produce it. The node decides the implementation.
Procedural (avoid): "SELECT * FROM bugs WHERE severity = 'critical' AND created_at > NOW() - INTERVAL 7 DAY"
Outcome-focused (preferred): "Fetch all critical bugs from the last 7 days, grouped by component"

Procedural (avoid): "Use the EVALUATE_CRITERIA tool to check if error count exceeds threshold"
Outcome-focused (preferred): "Check if the total number of errors exceeds 10"
Assign one responsibility per node. If an instruction covers two unrelated concerns, split it into two nodes.
Trigger criteria determine when your agent acts and when it stays silent. Vague criteria produce unpredictable behavior; concrete thresholds produce consistent behavior.
Vague: "If sales are low"
Concrete: "If daily revenue drops by more than 10% compared to the previous 7-day average"

Vague: "If there are problems"
Concrete: "If the error rate exceeds 5% or if any orders are stuck in pending status for more than 48 hours"

Vague: "If something changed"
Concrete: "If revenue falls outside the 4% monthly variance range"
Start with a single condition. Once you confirm it fires correctly, add more. A multi-condition trigger that has never been tested independently is significantly harder to debug.
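A concrete single condition is easy to verify by hand before you ever add a second one. The revenue-drop criterion above, for example, reduces to a few lines; this is an illustrative Python sketch with hypothetical revenue figures, not how the product evaluates criteria internally:

```python
def revenue_dropped(daily_revenue, history):
    """True if daily revenue is more than 10% below the average of the prior 7 days."""
    avg = sum(history[-7:]) / 7
    return daily_revenue < avg * 0.90

# Hypothetical values: the prior week averaged 1,000; today is 850 (a 15% drop).
print(revenue_dropped(850, [980, 1020, 990, 1010, 1000, 995, 1005]))  # → True
```

Because the threshold is explicit, you can feed in known values and predict exactly when the trigger fires, which is precisely what makes a single tested condition a safe foundation for adding more.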
Use trigger criteria when you only want output on specific conditions: anomaly detection, threshold breaches, or service level agreement violations. Leave the criteria field empty for regular reporting, such as daily summaries, weekly performance reviews, and monthly trend analysis, so the agent always delivers output when it runs.
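Multi-condition criteria such as "error rate exceeds 5% or orders stuck in pending for more than 48 hours" combine independent checks with an OR. A minimal Python sketch of that logic (illustrative only; the function name and inputs are hypothetical, and the agent itself works from the criteria text):

```python
from datetime import datetime, timedelta, timezone

def should_alert(error_count, total_count, pending_order_times, now=None):
    """True if the error rate exceeds 5% or any order has been pending more than 48 hours."""
    now = now or datetime.now(timezone.utc)
    error_rate = (error_count / total_count) if total_count else 0.0
    stuck = any(now - placed > timedelta(hours=48) for placed in pending_order_times)
    return error_rate > 0.05 or stuck
```

Note that each clause can be exercised on its own, which is why the advice above is to confirm one condition fires correctly before combining it with others.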
Match the schedule to how often your data changes and how quickly you need to act on it.
Hourly: urgent, real-time monitoring only. Examples: production error rate spikes, critical SLA breaches.
Daily: most agents (recommended default). Examples: sales metrics, pipeline health, support ticket volume.
Weekly: trend analysis and summaries. Examples: week-over-week performance, team productivity.
Monthly: high-level reviews and strategic reporting. Examples: monthly business review, budget tracking.
A daily schedule is the right default for most agents. Unless you genuinely need to react within the hour, hourly runs add noise and make triggers harder to tune.
Telling the agent what to analyze without telling it how to present results produces inconsistent outputs. Be explicit about structure in every instruction that generates an output.
Name your section headings: “Section 1: Overview”, “Section 2: Details”
Define number formats: “Show currency as $X,XXX” and “Show percentages as X.X%”
Handle conditional sections: “Only include the Incidents section if incidents occurred”
Set volume limits: “Limit to top 10 entries ranked by largest change”
Always state what the output should say when there is nothing to report. If you don’t, the agent will generate something on its own, and it may not match what you expect.
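The number-format rules above correspond directly to standard string formatting, which is a useful way to check that a format specification is unambiguous before putting it in an instruction. An illustrative Python sketch (the function names are hypothetical, and the agent applies these rules from the instruction text rather than running code like this):

```python
def fmt_currency(value):
    # "$X,XXX": thousands separator, no decimal places
    return f"${value:,.0f}"

def fmt_percent(value):
    # "X.X%": exactly one decimal place
    return f"{value:.1f}%"

print(fmt_currency(12345))  # → $12,345
print(fmt_percent(20.0))    # → 20.0%
```

If you cannot express the format this precisely, the instruction is probably too vague for the agent to apply it consistently either.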
If data is needed on both sides of an If/Else node, connect it through the node; do not wire around it. Bypassing the node causes both branches to activate regardless of which condition is met. Use concrete values in every If/Else condition:
Vague: "There are many errors"
Concrete: "The total count of items with status ‘error’ is greater than 10"
Connect Data and Analysis nodes directly to the Artifact Builder when the output requires tables or charts. Routing everything through a Summary node first strips the structured data the Artifact Builder needs. Split the Artifact Builder instruction into two parts:
Organization: How to group the data, which tables or lists to create, and what to include in each section.
Format: How the final output should look: section headings, column names, number formats, and conditional sections.
Run a preview before publishing any agent. Click Preview at any point during building, not just when the workflow is complete, to check what each node is producing and catch problems before they compound.