
Automate your Flows

In this guide, you'll learn how to use Ascend's Automation framework to trigger Flows from schedules and conditions (Sensors) and from real-time system Events. You'll create responsive, intelligent pipelines that run precisely when they're needed.

What you'll learn

  • How to create Automations using both the UI and file-based approaches
  • How to configure different types of triggers (Sensors and Events)
  • How to implement different Action types
  • How to build custom Python Automations

Prerequisites

  • At least one Ascend Flow
  • A Deployment (Automations must be deployed in order to run)
  • Python knowledge (if building custom Sensors or Actions)

Automation structure

Every Automation follows this structure:

automation:
  name: <automation_name>
  enabled: true        # Set to false to disable
  triggers:
    sensors:           # Time- or condition-based triggers
      - ...
    events:            # Event-based triggers
      - ...
  actions:             # What to do when triggered
    - ...

At least one trigger (Sensor or Event) must be specified.

Sensors

Sensors are time-based or condition-based triggers.

Timer Sensor

Triggers on a schedule using cron expressions:

sensors:
  - type: timer
    name: hourly-trigger
    config:
      schedule:
        cron: '0 * * * *'  # Every hour at minute 0

Common cron patterns

Pattern           Schedule
'0 * * * *'       Every hour
'*/15 * * * *'    Every 15 minutes
'0 0 * * *'       Daily at midnight
'0 6 * * *'       Daily at 6 AM
'0 0 * * 1'       Weekly on Monday at midnight
'0 0 1 * *'       Monthly on the 1st at midnight
'0 0 * * 1-5'     Weekdays at midnight

Events

Event triggers activate Automations based on system events.

Event types

Event type             Description
FlowRunSuccess         Flow completed successfully
FlowRunError           Flow failed
FlowRunStart           Flow started
ComponentRunSuccess    Component completed
ComponentRunFailure    Component failed

Event filters

Filter events using SQL or Python expressions.

SQL filter

events:
  - types:
      - FlowRunSuccess
    sql_filter: json_extract_string(event, '$.data.flow') = 'my-flow-name'

Python filter

events:
  - types:
      - FlowRunSuccess
    python_filter: |
      import json
      event_data = json.loads(event)
      return event_data.get('data', {}).get('flow') == 'my-flow-name'
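Because the filter body is plain Python over the serialized event, you can sanity-check the logic locally before deploying. A minimal sketch, assuming the payload carries the flow name at `$.data.flow` as in the examples above:

```python
import json

# The filter body from above, wrapped in a function so it can be tested locally.
def matches_my_flow(event: str) -> bool:
    event_data = json.loads(event)
    return event_data.get('data', {}).get('flow') == 'my-flow-name'

# Hypothetical event payloads shaped like the ones the filter inspects
matching = json.dumps({"data": {"flow": "my-flow-name"}})
other = json.dumps({"data": {"flow": "some-other-flow"}})

print(matches_my_flow(matching))  # True
print(matches_my_flow(other))     # False
```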

Multiple event types (no filter)

events:
  - types:
      - FlowRunSuccess
      - FlowRunError

SQL filter functions

  • json_extract_string(event, '$.path.to.field'): Extract string values
  • json_extract(event, '$.path.to.field'): Extract any JSON value
  • Standard SQL operators: =, !=, >, <, AND, OR, IN, LIKE
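These pieces can be combined into more selective filters. For example, a sketch that matches failures only in certain components of ETL flows (the `$.data.component` path is an assumption for illustration):

```yaml
events:
  - types:
      - ComponentRunFailure
    sql_filter: >
      json_extract_string(event, '$.data.flow') LIKE 'etl-%'
      AND json_extract_string(event, '$.data.component') IN ('load', 'transform')
```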

Actions

Actions define what happens when an Automation triggers.

Run Flow action

Execute a Flow:

actions:
  - type: run_flow
    name: run-my-flow
    config:
      flow: my-flow-name

Run Otto action

Execute Otto with a specific prompt:

actions:
  - type: run_otto
    name: analyze-failure
    config:
      agent_name: failure-analyzer  # Optional: specific Otto agent
      prompt: |
        Analyze the recent flow failure and identify the root cause.
        Check recent git commits and suggest fixes.

Email alert action

Send email notifications:

actions:
  - type: email_alert
    name: notify-on-failure
    config:
      emails:
        - team@company.com
        - alerts@company.com
      include_otto_summary: true  # Include Otto's explanation

Configuration options:

  • emails (required): List of email addresses
  • include_otto_summary (optional): Include Otto's analysis of the event (default: false)
  • agent_name (optional): Specific Otto agent to use for explanation
  • prompt (optional): Additional prompt for Otto when generating explanation
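Putting the optional fields together, a fuller configuration might look like this (the agent name and prompt text are illustrative, not required values):

```yaml
actions:
  - type: email_alert
    name: notify-with-analysis
    config:
      emails:
        - oncall@company.com
      include_otto_summary: true
      agent_name: failure-analyzer  # hypothetical agent name
      prompt: |
        Summarize the failure and list the most likely causes.
```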

Function action

Execute a custom Python function:

actions:
  - type: function
    name: custom-handler
    config:
      python:
        entrypoint: src.automations.my_automation.handle_event
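The dotted `entrypoint` path resolves to a Python function in your project. Its exact signature is platform-defined; as a rough sketch, assuming it receives a context object exposing the triggering event and persistent state (mirroring the `ActionContext` used in Python Automations below):

```python
import asyncio
from types import SimpleNamespace

# src/automations/my_automation.py -- sketch only; in Ascend the handler
# receives an ActionContext from ascend.application.automation.action.
async def handle_event(context) -> None:
    """Record which flow's event triggered this action."""
    context.state["last_flow"] = context.event.data.flow

# Local smoke test with a stand-in context object
fake_context = SimpleNamespace(
    event=SimpleNamespace(data=SimpleNamespace(flow="etl-pipeline")),
    state={},
)
asyncio.run(handle_event(fake_context))
print(fake_context.state)  # {'last_flow': 'etl-pipeline'}
```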

Multiple actions

Execute multiple actions sequentially:

actions:
  - type: run_flow
    name: run-primary-flow
    config:
      flow: primary-flow
  - type: email_alert
    name: notify-team
    config:
      emails:
        - team@company.com
  - type: run_flow
    name: run-secondary-flow
    config:
      flow: secondary-flow

Create your Automation

Using the Files panel

  1. Navigate to the Files panel
  2. Right-click on the Automations folder and select New file
  3. Name your Automation (e.g., hourly-etl.yaml)
  4. Configure your Automation using the patterns below

Common patterns

Scheduled Flow execution

Run a Flow on a regular schedule:

automations/hourly-etl.yaml

automation:
  name: hourly-etl
  enabled: true
  triggers:
    sensors:
      - type: timer
        name: hourly-timer
        config:
          schedule:
            cron: '0 * * * *'
  actions:
    - type: run_flow
      name: run-etl
      config:
        flow: etl-pipeline

Event-driven pipeline

Trigger downstream Flows when upstream completes:

automations/cascade-pipeline.yaml

automation:
  name: cascade-pipeline
  enabled: true
  triggers:
    events:
      - types:
          - FlowRunSuccess
        sql_filter: json_extract_string(event, '$.data.flow') = 'extract-data'
  actions:
    - type: run_flow
      name: run-transform
      config:
        flow: transform-data

Failure notification

Send email alerts with Otto analysis when a Flow fails:

automations/failure-alerts.yaml

automation:
  name: failure-alerts
  enabled: true
  triggers:
    events:
      - types:
          - FlowRunError
        sql_filter: json_extract_string(event, '$.data.flow') = 'critical-flow'
  actions:
    - type: email_alert
      name: notify-team
      config:
        emails:
          - data-team@company.com
          - oncall@company.com
        include_otto_summary: true

Multi-Flow orchestration

Trigger multiple downstream Flows from a single upstream event:

automations/fan-out.yaml

automation:
  name: fan-out-pipeline
  enabled: true
  triggers:
    events:
      - types:
          - FlowRunSuccess
        sql_filter: json_extract_string(event, '$.data.flow') = 'source-data'
  actions:
    - type: run_flow
      name: run-analytics
      config:
        flow: analytics-flow
    - type: run_flow
      name: run-reporting
      config:
        flow: reporting-flow
    - type: run_flow
      name: run-ml
      config:
        flow: ml-training-flow

All failures notification

Trigger on any Flow failure without specifying individual Flows:

automations/all-failures.yaml

automation:
  name: all-failures-alert
  enabled: true
  triggers:
    events:
      - types:
          - FlowRunError
  actions:
    - type: email_alert
      name: notify-on-any-failure
      config:
        emails:
          - alerts@company.com
        include_otto_summary: true

Python Automations

For complex logic, define Automations programmatically using Python decorators.

Basic Python Automation

automations/custom_automation.py

import asyncio
from typing import AsyncIterator

from ascend.application.automation import Automation
from ascend.application.automation.action import ActionContext
from ascend.application.automation.sensor import SensorContext
from ascend.common.events.base import EventData
from ascend.common.events.event_types import FlowRunSuccess, ScheduleFlowRun
from ascend.common.events.manager import fire_event

# Create the automation instance
my_automation = Automation()

# Define a sensor
@my_automation.sensor(config={"flow": "hourly-process"})
async def periodic_trigger(context: SensorContext) -> AsyncIterator[EventData]:
    """Trigger the configured flow periodically."""
    while True:
        yield ScheduleFlowRun(flow=context.sensor.config["flow"])
        await asyncio.sleep(3600)  # Wait 1 hour

# Define an action
@my_automation.action()
async def process_completion(context: ActionContext) -> None:
    """Handle flow completion events."""
    flow = context.event.data.flow

    # Track completion counts in state
    flow_counts = context.state.setdefault("flow_counts", {})
    flow_counts[flow] = flow_counts.get(flow, 0) + 1

    # Access parameters
    threshold = context.parameters.get("completion_threshold", 10)
    if flow_counts[flow] >= threshold:
        fire_event(ScheduleFlowRun(flow="analysis-flow"))

# Register the automation
my_automation()

Mixed declarative and programmatic

Combine YAML triggers with Python actions:

automations/monitoring.py

from ascend.application.automation import Automation
from ascend.application.automation.action import ActionContext
from ascend.common.events.event_types import FlowRunSuccess, FlowRunError

monitoring = Automation()

# Add an event trigger declaratively
monitoring.add_event(
    types=[FlowRunSuccess, FlowRunError],
    sql_filter="json_extract_string(event, '$.data.flow') IN ('etl-flow', 'ml-flow')",
)

# Add a declarative action
monitoring.action(
    name="run-alert-flow",
    type="run_flow",
    config={"flow": "alert-flow"},
)

# Add a programmatic action
@monitoring.action()
async def track_metrics(context: ActionContext) -> None:
    """Custom metrics tracking."""
    if isinstance(context.event.data, FlowRunError):
        # Log error metrics
        error_msg = context.event.data.error.msg
        context.state["last_error"] = error_msg

monitoring()

Best practices

  1. Use descriptive names: Name Automations to indicate their purpose
  2. Be specific with filters: Use SQL filters to avoid unintended triggers
  3. Use enabled: false for testing: Disable Automations during development
  4. Separate concerns: Create separate Automations for success and failure scenarios
  5. Include Otto summaries: Use include_otto_summary: true for failure alerts to get AI-powered analysis

Next steps