# Configure dynamic pipelines with parameters
Parameters in Ascend provide a powerful mechanism for dynamically configuring your data Flows and Components. Rather than hardcoding values, parameters allow you to create flexible, reusable, and maintainable data pipelines that can adapt to different environments and scenarios.
## What you'll learn
In this tutorial, you'll learn:
- How parameters work within the Ascend platform
- The hierarchy and precedence of parameter settings
- How to define and use parameters in various Component types
- Best practices for securely managing parameters
- Common Parameter use cases and examples
## Hierarchy of parameters
Parameters can be set at multiple levels in Ascend, with more specific settings overriding more general ones:
1. **Flow run parameters** (highest priority)
   - Applied to a specific execution of a Flow
   - Ideal for one-time configuration changes
2. **Flow parameters**
   - Defined within a Flow definition
   - Apply to all runs of that Flow unless overridden
3. **Profile parameters**
   - Defined in Profiles to be used across multiple Flows
   - Great for environment-specific configurations
4. **Project parameters** (lowest priority)
   - Applied globally across your Project
   - Used for organization-wide settings
This hierarchy allows you to set default values at higher levels while providing the flexibility to override them for specific scenarios when needed.
## How parameters work
Parameters in Ascend are stored in Vaults and can be referenced using the `${...}` notation. This approach provides both security and convenience:

- **Secure storage**: Sensitive information is kept in centralized, access-controlled Vaults
- **Dynamic references**: Use the `${parameters.<param_name>}` syntax to reference parameters
- **Real-time resolution**: Parameter values are resolved at runtime, allowing for flexibility
## Examples
Let's explore how to use parameters in different Component types:
### YAML Read Component (e.g., S3, ABFS, Database)

You can define parameters at the Flow or Component level and reference them in your YAML using `${parameters.<param_name>}`:
```yaml
component:
  read:
    connection: read_gcs_lake
    gcs:
      path: ${parameters.gcs_lake_path}
      include:
        - glob: "*/month=*/day=*/*.parquet"
```
This allows you to easily change the input path without modifying the Component definition.
### YAML Write Component
Parameters can be used for output paths, table names, or other configuration details:
```yaml
component:
  write:
    connection: write_s3
    input:
      name: my_component
      flow: my_flow
    s3:
      path: ${parameters.s3_path}
      formatter: csv
      manifest:
        name: _ascend_manifest.json
```
### YAML Alias or External Table Component
Parameters can define locations or other properties for alias Components:
```yaml
component:
  alias:
    location: "${parameters.database}.${parameters.schema}.MY_TABLE_NAME"
```
### Python Read Component

In Python Components, parameters are accessed via `context.parameters`:
```python
import pandas as pd

from ascend.application.context import ComponentExecutionContext
from ascend.resources import read


@read()
def read_guides(context: ComponentExecutionContext) -> pd.DataFrame:
    df = pd.read_csv(context.parameters["guides_path"])
    return df
```
### Python Transform Component
Parameters are accessed the same way in Transforms:
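A minimal sketch, modeled on the read example above. The `@transform` decorator signature and the `ref`-based input wiring are assumptions based on that pattern, and the `min_views` parameter and `views` column are hypothetical names used for illustration:

```python
import pandas as pd

from ascend.application.context import ComponentExecutionContext
from ascend.resources import ref, transform


# Input wiring via ref(...) is an assumption modeled on the read example above
@transform(inputs=[ref("read_guides")])
def filter_guides(context: ComponentExecutionContext, guides: pd.DataFrame) -> pd.DataFrame:
    # Resolve a threshold from parameters at runtime, just as in the read Component
    min_views = int(context.parameters["min_views"])
    return guides[guides["views"] >= min_views]
```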
### Python Task Component

Task Components also access parameters via `context.parameters`:
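A hedged sketch following the same pattern as the read example; the `@task` decorator usage and the `webhook_url` parameter name are illustrative assumptions, not taken from this page:

```python
from ascend.application.context import ComponentExecutionContext
from ascend.resources import task


@task()
def notify_on_completion(context: ComponentExecutionContext) -> None:
    # Hypothetical parameter name used for illustration
    webhook_url = context.parameters["webhook_url"]
    print(f"Would post a completion notice to {webhook_url}")
```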
## Use cases
Parameters are particularly valuable for:
- **Environment configuration**
  - Different database Connections for dev, test, and production
  - Varying resource allocations based on environment needs
- **Path management**
  - Dynamic input/output locations
  - Date-based or version-based path construction
- **Runtime behavior control**
  - Feature flags for enabling/disabling functionality
  - Processing thresholds or limits
- **Secure credential management**
  - Database credentials
  - API keys and tokens
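The date-based path construction mentioned above can be sketched in plain Python. The parameter names and values here are hypothetical, showing only how resolved parameter values might be composed into a partitioned path like the glob in the read example:

```python
from datetime import date

# Hypothetical parameter values, as they might resolve at runtime
parameters = {
    "gcs_lake_path": "gs://my-lake/events",
    "run_date": "2024-06-01",
}

run_date = date.fromisoformat(parameters["run_date"])
# Compose a month=/day= partitioned path from the resolved values
path = (
    f"{parameters['gcs_lake_path']}/"
    f"month={run_date.month:02d}/day={run_date.day:02d}/*.parquet"
)
print(path)  # gs://my-lake/events/month=06/day=01/*.parquet
```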
## Next steps
Now that you understand how parameters work in Ascend, you can apply this knowledge to create more flexible and reusable data pipelines.