Version: 3.0.0

Incremental Strategy

Incremental Processing Strategy.

IncrementalStrategy


IncrementalStrategy is defined beneath the following ancestor nodes in the YAML structure:

Below are the properties for IncrementalStrategy. Each property links to its details section further down the page.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| incremental |  | Any of: append, MergeStrategy | Yes | Incremental processing strategy. |
| on_schema_change |  | string ("ignore", "fail", "append_new_columns", "sync_all_columns") | No | Policy to apply when schema changes are detected. Defaults to 'fail' if not provided. |
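
Putting the two properties together, an incremental strategy node might look like the following sketch. The nesting is inferred from the schema tables on this page, and the column name is illustrative, not taken from this page:

```yaml
# Hypothetical sketch of an IncrementalStrategy node.
# Nesting inferred from the tables on this page; order_id is an assumed example column.
incremental:
  merge:
    unique_key: order_id
on_schema_change: append_new_columns   # one of: ignore, fail, append_new_columns, sync_all_columns
```

With `incremental: append`, no merge options are needed; new rows are simply appended.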

Property Details

Component

A component is a fundamental building block of a data flow. Types of components that are supported include: read, transform, task, test, and more.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| component |  | One of: CustomPythonReadComponent, ApplicationComponent, AliasedTableComponent, ExternalTableComponent | Yes | Configuration options for the component. |

CustomPythonReadComponent

A component that reads data using user-defined custom Python code.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| data_plane |  | One of: SnowflakeDataPlane, BigQueryDataPlane, DatabricksDataPlane | No | Data Plane-specific configuration options for a component. |
| skip |  | boolean | No | A boolean flag indicating whether to skip processing for the component. |
| retry_strategy |  |  | No | The retry strategy configuration options for the component if any exceptions are encountered. |
| description |  | string | No | A brief description of what the model does. |
| metadata |  |  | No | Meta information about a resource. In most cases it doesn't affect system behavior, but it may be helpful when analyzing project resources. |
| name |  | string | Yes | The name of the model. |
| flow_name |  | string | No | The name of the flow that the component belongs to. |
| data_maintenance |  |  | No | The data maintenance configuration options for the component. |
| tests |  |  | No | Defines tests to run on the data of this component. |
| custom_python_read |  |  | Yes |  |
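
A component entry of this type might be sketched as follows. The property names come from the table above; the values, and the assumption that the component is selected by the presence of the `custom_python_read` key, are illustrative:

```yaml
# Hypothetical sketch of a CustomPythonReadComponent.
# raw_orders and the description are assumed example values.
component:
  name: raw_orders                # required
  description: Reads raw order data with user-defined Python code.
  skip: false
  custom_python_read:
    # CustomPythonReadOptions go here (see the next section)
```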

CustomPythonReadOptions

Configuration options for the Custom Python Read component.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| dependencies |  | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time |  | string | No | Timestamp column in the component output used to represent event time. |
| strategy | full | Any of: full, IncrementalStrategy, PartitionedStrategy | No | Ingest strategy. |
| python |  | Any of: | Yes | Python code to execute for ingesting data. |
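
As a sketch, the options for an incremental custom Python read could look like the following. The column names are assumed examples, and the `python` value is elided because the accepted formats are not shown on this page:

```yaml
# Hypothetical sketch of CustomPythonReadOptions with an incremental ingest strategy.
# created_at and order_id are assumed example columns.
custom_python_read:
  event_time: created_at
  strategy:
    incremental:
      merge:
        unique_key: order_id
  python: ...   # Python code to execute (accepted formats not shown on this page)
```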

TransformComponent

A component that executes SQL or Python code to transform data.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| data_plane |  | One of: SnowflakeDataPlane, BigQueryDataPlane, DatabricksDataPlane | No | Data Plane-specific configuration options for a component. |
| skip |  | boolean | No | A boolean flag indicating whether to skip processing for the component. |
| retry_strategy |  |  | No | The retry strategy configuration options for the component if any exceptions are encountered. |
| description |  | string | No | A brief description of what the model does. |
| metadata |  |  | No | Meta information about a resource. In most cases it doesn't affect system behavior, but it may be helpful when analyzing project resources. |
| name |  | string | Yes | The name of the model. |
| flow_name |  | string | No | The name of the flow that the component belongs to. |
| data_maintenance |  |  | No | The data maintenance configuration options for the component. |
| tests |  |  | No | Defines tests to run on the data of this component. |
| transform |  | One of: SqlTransform, PythonTransform, SnowparkTransform, PySparkTransform | Yes | The transform component that executes SQL or Python code to transform data. |
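
A transform component might be sketched like this. Property names come from the tables on this page; the component, flow, and table names are assumed examples:

```yaml
# Hypothetical sketch of a TransformComponent using a SQL transform.
# stg_orders, orders_flow, and raw_orders are assumed example names.
component:
  name: stg_orders
  flow_name: orders_flow
  transform:
    sql: SELECT * FROM raw_orders
```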

PySparkTransform

PySpark transforms execute PySpark code to transform data.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| dependencies |  | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time |  | string | No | Timestamp column in the component output used to represent event time. |
| microbatch |  | boolean | No | Whether to process data in microbatches. |
| batch_size |  | string | No | The size/time granularity of the microbatch to process. |
| lookback | 1 | integer | No | The number of time intervals prior to the current interval (inclusive of the current interval) to process in time-series processing mode. |
| begin |  | string | No | The 'beginning of time' for this component. If provided, time intervals before this time will be skipped in a time-series run. |
| inputs |  | array[None] | No | List of input components to use as data sources for the transform. |
| strategy |  | Any of: PartitionedStrategy, IncrementalStrategy, string ("view", "table") | No | Transform strategy: incremental, partitioned, or view/table. |
| pyspark |  |  | No | PySpark transform function to execute for transforming the data. |
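
The time-series settings above (shared by all four transform types on this page) might be combined as in the sketch below. The string formats for `batch_size` and `begin` are not specified on this page, so the values shown are assumptions:

```yaml
# Hypothetical sketch of time-series (microbatch) settings.
# The batch_size and begin formats are assumed, not taken from this page.
microbatch: true
batch_size: "1h"        # process data in hourly microbatches
lookback: 2             # current interval plus one prior interval
begin: "2024-01-01"     # skip intervals before this time
```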

PythonTransform

Python transforms execute Python code to transform data.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| dependencies |  | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time |  | string | No | Timestamp column in the component output used to represent event time. |
| microbatch |  | boolean | No | Whether to process data in microbatches. |
| batch_size |  | string | No | The size/time granularity of the microbatch to process. |
| lookback | 1 | integer | No | The number of time intervals prior to the current interval (inclusive of the current interval) to process in time-series processing mode. |
| begin |  | string | No | The 'beginning of time' for this component. If provided, time intervals before this time will be skipped in a time-series run. |
| inputs |  | array[None] | No | List of input components to use as data sources for the transform. |
| strategy |  | Any of: PartitionedStrategy, IncrementalStrategy, string ("view", "table") | No | Transform strategy: incremental, partitioned, or view/table. |
| python |  |  | No | Python transform function to execute for transforming the data. |

SnowparkTransform

Snowpark transforms execute Python code to transform data within the Snowflake platform.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| dependencies |  | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time |  | string | No | Timestamp column in the component output used to represent event time. |
| microbatch |  | boolean | No | Whether to process data in microbatches. |
| batch_size |  | string | No | The size/time granularity of the microbatch to process. |
| lookback | 1 | integer | No | The number of time intervals prior to the current interval (inclusive of the current interval) to process in time-series processing mode. |
| begin |  | string | No | The 'beginning of time' for this component. If provided, time intervals before this time will be skipped in a time-series run. |
| inputs |  | array[None] | No | List of input components to use as data sources for the transform. |
| strategy |  | Any of: PartitionedStrategy, IncrementalStrategy, string ("view", "table") | No | Transform strategy: incremental, partitioned, or view/table. |
| snowpark |  |  | No | Snowpark transform function to execute for transforming the data. |

SqlTransform

SQL transforms execute SQL queries to transform data.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| dependencies |  | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time |  | string | No | Timestamp column in the component output used to represent event time. |
| microbatch |  | boolean | No | Whether to process data in microbatches. |
| batch_size |  | string | No | The size/time granularity of the microbatch to process. |
| lookback | 1 | integer | No | The number of time intervals prior to the current interval (inclusive of the current interval) to process in time-series processing mode. |
| begin |  | string | No | The 'beginning of time' for this component. If provided, time intervals before this time will be skipped in a time-series run. |
| inputs |  | array[None] | No | List of input components to use as data sources for the transform. |
| strategy |  | Any of: PartitionedStrategy, IncrementalStrategy, string ("view", "table") | No | Transform strategy: incremental, partitioned, or view/table. |
| sql |  | string | No | SQL query to execute for transforming the data. |
| dialect | spark |  | No | SQL dialect to use for the query. Set to 'None' for the data plane's default dialect, or 'spark' for Spark SQL. |
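
A SQL transform might be sketched as follows. The input and column names are assumed examples, and the shape of the `inputs` entries is an assumption since this page only gives the type as array[None]:

```yaml
# Hypothetical sketch of a SqlTransform.
# raw_orders and the column names are assumed examples; the inputs
# entry format is an assumption.
transform:
  inputs:
    - raw_orders
  strategy: table           # or view, IncrementalStrategy, PartitionedStrategy
  sql: |
    SELECT order_id, amount, updated_at
    FROM raw_orders
  dialect: spark            # the default shown in the table above
```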

SCDType2Strategy

The SCD Type 2 strategy allows users to track changes to records over time, by tracking the start and end times for each version of a record. A brief overview of the strategy can be found at https://en.wikipedia.org/wiki/Slowly_changing_dimension#Type_2:_add_new_row.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| scd_type_2 |  |  | No | Options for SCD Type 2 strategy. |
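
Per the KeyOptions description later on this page, the `scd_type_2` property takes the same column options as merge. A sketch might look like this; the parent node is not shown on this page, and the column name is illustrative:

```yaml
# Hypothetical sketch of an SCD Type 2 strategy node.
# customer_id is an assumed example column; the exact parent node
# for scd_type_2 is not shown on this page.
scd_type_2:
  unique_key: customer_id
```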

MergeStrategy

A strategy that involves merging new data with existing data by updating existing records that match the unique key.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| merge |  |  | No | Options for merge strategy. |

KeyOptions

Column options needed for merge and SCD Type 2 strategies, such as unique key and deletion column name.

| Property | Default | Type | Required | Description |
| --- | --- | --- | --- | --- |
| unique_key |  | string | Yes | Column or comma-separated set of columns used as a unique identifier for records, aiding the merge process. |
| deletion_column |  | string | No | Column name used in the upstream source for soft-deleting records. Used when replicating data from a source that supports soft-deletion. If provided, the merge strategy can detect deletions and mark records as deleted in the destination; if not provided, deletions cannot be detected. |
| merge_update_columns |  | Any of: string, array[string] | No | List of columns to include when updating values in merge. Mutually exclusive with merge_exclude_columns. |
| merge_exclude_columns |  | Any of: string, array[string] | No | List of columns to exclude when updating values in merge. Mutually exclusive with merge_update_columns. |
| incremental_predicates |  | Any of: string, array[string] | No | List of conditions to filter incremental data. |
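
Combining these options, a merge strategy might be sketched as below. All column names and the predicate expression are illustrative, and the predicate syntax (including any table aliases) is an assumption not specified on this page:

```yaml
# Hypothetical sketch of a merge strategy with KeyOptions.
# Column names and the predicate are assumed examples.
incremental:
  merge:
    unique_key: order_id
    deletion_column: is_deleted       # soft-delete flag in the source
    merge_exclude_columns:
      - created_at                    # never overwrite on update
    incremental_predicates:
      - "updated_at >= current_date - 7"   # assumed predicate syntax
```

Note that `merge_update_columns` and `merge_exclude_columns` are mutually exclusive: specify the columns to update, or the columns to skip, but not both.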