MySQL Read Component
A component that reads data from a MySQL database. Options include ingesting a single table or query, or multiple tables or queries.
Examples
- mysql_read_employees.yaml
- mysql_read_component_multiple_tables.yaml
- mysql_read_incremental_materialization_with_comments.yaml
```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: employees
```
```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      tables:
        - name: table1
        - name: table2
        - name: table3
```
```yaml
# MySQLReadComponent configuration with incremental replication and materialization
component:
  read:
    # Connection to the MySQL database
    connection: my-mysql-connection
    mysql:
      # Table to read data from
      table:
        name: my_table
    strategy:
      replication:
        incremental:
          column_name: my_incremental_column  # Column used for tracking updates
          start_value: 0                      # Initial start value for incremental read
      incremental:
        merge:
          unique_key: my_unique_key  # Unique key for identifying records
```
MySQLReadComponent
MySQLReadComponent is defined beneath the Component and ReadComponent ancestor nodes in the YAML structure (both are described under Property Details below).
Below are the properties for the MySQLReadComponent. Each property links to the specific details section further down in this page.
Property | Default | Type | Required | Description |
---|---|---|---|---|
dependencies | | array[None] | No | List of dependencies that must complete before this component runs. |
event_time | | string | No | Timestamp column in the component output used to represent event time. |
connection | | string | No | The name of the connection to use for reading data. |
columns | | array[None] | No | A list specifying the columns to read from the source and the transformations to apply during the read. |
normalize | | boolean | No | A boolean flag indicating whether output column names should be normalized to a standard naming convention after reading. |
preserve_case | | boolean | No | A boolean flag indicating whether the case of column names should be preserved after reading. |
uppercase | | boolean | No | A boolean flag indicating whether column names should be transformed to uppercase after reading. |
strategy | | Any of: full, IncrementalReadStrategy, PartitionedStrategy | No | Ingest strategy options. |
read_options | | DatabaseReadOptions | No | Options for reading from the database or warehouse. |
use_duckdb | | boolean | No | Use the DuckDB extension for reading data, which is faster but may have memory limitations with very large tables. Defaults to False. |
mysql | | Any of: (see Property Details below) | Yes | MySQL read options. |
use_checksum | | boolean | No | Use a table checksum to detect data changes. If false or unset, a full re-read is performed on every run for full sync. |
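The optional flags above can be combined with a basic table read. A minimal sketch, assuming the connection name and the event-time column below are illustrative placeholders:

```yaml
component:
  read:
    connection: my-mysql-connection
    # Timestamp column in the output used as event time (illustrative column name)
    event_time: updated_at
    # Normalize output column names to a standard naming convention
    normalize: true
    # DuckDB extension is faster but may hit memory limits on very large tables
    use_duckdb: false
    # Detect data changes via table checksum instead of a full re-read on every run
    use_checksum: true
    mysql:
      table:
        name: employees
```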
Property Details
Component
A component is a fundamental building block of a data flow. Types of components that are supported include: read, transform, task, test, and more.
Property | Default | Type | Required | Description |
---|---|---|---|---|
component | | One of: CustomPythonReadComponent, ApplicationComponent, AliasedTableComponent, ExternalTableComponent | Yes | Configuration options for the component. |
ReadComponent
A component that reads data from a data system.
Property | Default | Type | Required | Description |
---|---|---|---|---|
data_plane | | One of: SnowflakeDataPlane, BigQueryDataPlane, DatabricksDataPlane | No | Data Plane-specific configuration options for a component. |
skip | | boolean | No | A boolean flag indicating whether to skip processing for the component or not. |
retry_strategy | | | No | The retry strategy configuration options for the component if any exceptions are encountered. |
description | | string | No | A brief description of what the model does. |
metadata | | | No | Meta information of a resource. In most cases it doesn't affect the system behavior but may be helpful to analyze project resources. |
name | | string | Yes | The name of the model. |
flow_name | | string | No | The name of the flow that the component belongs to. |
data_maintenance | | | No | The data maintenance configuration options for the component. |
tests | | | No | Defines tests to run on the data of this component. |
read | | One of: GenericFileReadComponent, LocalFileReadComponent, SFTPReadComponent, S3ReadComponent, GcsReadComponent, AbfsReadComponent, HttpReadComponent, MSSQLReadComponent, MySQLReadComponent, OracleReadComponent, PostgresReadComponent, SnowflakeReadComponent, BigQueryReadComponent, DatabricksReadComponent | Yes | The read component that reads data from a data system. |
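The ReadComponent-level fields above sit alongside read under component. A hedged sketch based on the tables in this section; the placement is inferred from the schema, and the description text and names are illustrative:

```yaml
component:
  # ReadComponent-level fields (placement inferred from the tables above)
  description: Reads the employees table from MySQL
  skip: false
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: employees
```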
MultipleTables
Options for reading from multiple tables. Useful for reading from multiple tables in a single component.
Property | Default | Type | Required | Description |
---|---|---|---|---|
tables | | array[None] | Yes | List of tables to read data from. |
ComponentColumn
Component column expression definition.
No properties defined.
DatabaseReadOptions
Options for reading from a database or warehouse.
Property | Default | Type | Required | Description |
---|---|---|---|---|
chunk_size | 100000 | integer | No | Number of rows to read from the table at a time. |
parallel_read | | ParallelReadOptions | No | Options for reading from the source in parallel. |
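A minimal sketch of tuning the read chunk size; the value shown is illustrative, not a recommendation:

```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: employees
    read_options:
      # Fetch 50,000 rows per round trip instead of the default 100,000 (illustrative value)
      chunk_size: 50000
```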
IncrementalReadStrategy
Incremental Read Strategy for database read components - this is a combination of the replication strategy that defines how new data is read from the source, and the incremental strategy that defines how this new data is materialized in the output.
Property | Default | Type | Required | Description |
---|---|---|---|---|
replication | | One of: CdcReplication, IncrementalReplication | No | Replication strategy to use for data synchronization. |
incremental | | Any of: append, MergeStrategy | Yes | Incremental processing strategy. |
on_schema_change | | string ("ignore", "fail", "append_new_columns", "sync_all_columns") | No | Policy to apply when schema changes are detected. Defaults to 'fail' if not provided. |
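The example at the top of this page shows the merge materialization; below is a hedged sketch of the append variant with an explicit schema-change policy. The column name and start value are illustrative:

```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: events
    strategy:
      replication:
        incremental:
          column_name: created_at    # illustrative tracking column
          start_value: "2024-01-01"  # illustrative starting value
      # Append new rows instead of merging on a unique key
      incremental: append
      # Add new source columns to the output instead of failing (default policy is 'fail')
      on_schema_change: append_new_columns
```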
CdcReplication
Specifies if Change Data Capture (CDC) is the replication strategy.
Property | Default | Type | Required | Description |
---|---|---|---|---|
cdc | | CdcOptions | No | Resource for Change Data Capture (CDC), enabling incremental data capture based on changes. |
CdcOptions
No properties defined.
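Since CdcOptions defines no properties, selecting CDC as the replication strategy presumably reduces to naming the cdc key. A hedged sketch; the empty mapping and the unique key are assumptions:

```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: orders
    strategy:
      replication:
        # Enable Change Data Capture; CdcOptions has no properties, so an empty mapping is assumed
        cdc: {}
      incremental:
        merge:
          unique_key: order_id  # illustrative unique key
```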
IncrementalReplication
Specifies if incremental data reading is the replication strategy.
Property | Default | Type | Required | Description |
---|---|---|---|---|
incremental | | IncrementalColumn | No | Resource for incremental data reading based on a specific column. |
IncrementalColumn
Specifies the column to be used for incremental reading.
Property | Default | Type | Required | Description |
---|---|---|---|---|
column_name | | string | Yes | Name of the column to use for tracking incremental updates to the data. |
start_value | | Any of: string, integer, number | No | Initial value to start reading data from the specified column. |
MultipleQueries
Options to define one or more arbitrary select statements. The outputs of the queries are unioned together, so all queries must return the same database schema.
Property | Default | Type | Required | Description |
---|---|---|---|---|
queries | | array[string] | No | List of SQL queries to execute for reading data. |
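A hedged sketch of a multi-query read, assuming queries sits directly under mysql in the same way table and tables do; the table and column names are illustrative, and both queries must return the same schema:

```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      queries:
        # Results are unioned, so every query must return the same schema
        - SELECT id, amount, created_at FROM orders_2023
        - SELECT id, amount, created_at FROM orders_2024
```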
ParallelReadOptions
Options for reading from a source in parallel. Ascend will logically partition the source data based on the partition column and max partitions, and then read each partition in parallel. If lower and upper bounds for the partition column are provided, they are used as hints to guide partitioning by dividing the range into max_partitions roughly equal partitions.
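The individual options are described in the table that follows. A hedged sketch of a parallel read; the partition column and partition count are illustrative:

```yaml
component:
  read:
    connection: my-mysql-connection
    mysql:
      table:
        name: events
    read_options:
      parallel_read:
        # Integer or timestamp column with low skew across partitions (illustrative choice)
        partition_column: id
        # Caps concurrent source connections; -1 picks a value automatically
        max_partitions: 8
```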
Property | Default | Type | Required | Description |
---|---|---|---|---|
partition_column | | string | Yes | The name of the column to partition the data by. Select a column that could be used to partition the data into smaller chunks and that results in the lowest skew between partitions in terms of record count. This column must either be an integer or timestamp column. |
max_partitions | -1 | integer | No | The maximum number of partitions to read concurrently from the source. Since this translates directly to the number of concurrent connections to the source, care should be taken to select a value that does not exceed the source system's connection or other resource limits. A value of -1 means that the value is chosen automatically based on the source. |
partition_lower_bound | Any of: integer |