Postgres Read Component
A component that reads data from a PostgreSQL table.
Examples

postgres_read_component_config.yaml

```yaml
component:
  read:
    connection: my-postgres-connection
    postgres:
      table:
        name: my_table
        schema: my_schema
```

postgres_merge_materialization.yaml

```yaml
component:
  read:
    postgres:
      table:
        name: my_table
        schema: public
    connection: my-postgres-connection
    strategy:
      incremental:
        merge:
          unique_key: id # Column or set of columns used as a unique identifier for records.
          deletion_column: deleted_at # Column name used for soft-deleting records, if applicable.
      on_schema_change: append_new_columns
```

postgres_read_incremental.yaml

```yaml
component:
  read:
    connection: my-postgres-connection
    strategy:
      replication:
        incremental:
          column_name: updated_at # Name of the column to use for tracking incremental updates to the data.
      incremental: append # Specifies that new data should be appended incrementally.
    postgres:
      tables:
        - name: table1
          schema: public
        - name: table2
          schema: public
```
PostgresReadComponent
PostgresReadComponent is defined beneath the following ancestor nodes in the YAML structure: Component and ReadComponent (both described under Property Details below).
Below are the properties for the PostgresReadComponent. Each property links to the specific details section further down in this page.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| dependencies | | array[None] | No | List of dependencies that must complete before this component runs. |
| event_time | | string | No | Timestamp column in the component output used to represent event time. |
| connection | | string | No | The name of the connection to use for reading data. |
| columns | | array[None] | No | A list specifying the columns to read from the source and transformations to make during read. |
| normalize | | boolean | No | A boolean flag indicating if the output column names should be normalized to a standard naming convention after reading. |
| preserve_case | | boolean | No | A boolean flag indicating if the case of the column names should be preserved after reading. |
| uppercase | | boolean | No | A boolean flag indicating if the column names should be transformed to uppercase after reading. |
| strategy | | Any of: string, IncrementalReadStrategy, PartitionedStrategy | No | Ingest strategy options. |
| read_options | | | No | Options for reading from the database or warehouse. |
| use_duckdb | | boolean | No | Use DuckDB extension for reading data, which is faster but may have memory limitations with very large tables. Defaults to False. |
| postgres | | Any of: Postgres | No | Postgres read options. |
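As a quick illustration of the column-handling flags in the table above, here is a minimal sketch; the connection name, table, and flag values are placeholders, and in practice you would set only the naming flags you need:

```yaml
component:
  read:
    connection: my-postgres-connection
    normalize: true       # normalize output column names to a standard naming convention
    preserve_case: false  # do not keep the original column-name casing
    uppercase: false      # do not force column names to uppercase
    use_duckdb: false     # default read path; DuckDB can be faster but may hit memory limits on very large tables
    postgres:
      table:
        name: my_table
        schema: public
```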
Property Details
Component
A component is a fundamental building block of a data flow. Types of components that are supported include: read, transform, task, test, and more.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| component | | One of: CustomPythonReadComponent, ApplicationComponent, AliasedTableComponent, ExternalTableComponent | Yes | Configuration options for the component. |
ReadComponent
A component that reads data from a data system.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| data_plane | | One of: SnowflakeDataPlane, BigQueryDataPlane, DatabricksDataPlane | No | Data Plane-specific configuration options for a component. |
| skip | | boolean | No | A boolean flag indicating whether to skip processing for the component or not. |
| retry_strategy | | | No | The retry strategy configuration options for the component if any exceptions are encountered. |
| description | | string | No | A brief description of what the model does. |
| metadata | | | No | Meta information of a resource. In most cases it doesn't affect the system behavior but may be helpful to analyze project resources. |
| name | | string | Yes | The name of the model. |
| flow_name | | string | No | The name of the flow that the component belongs to. |
| data_maintenance | | | No | The data maintenance configuration options for the component. |
| tests | | | No | Defines tests to run on the data of this component. |
| read | | One of: GenericFileReadComponent, LocalFileReadComponent, SFTPReadComponent, S3ReadComponent, GcsReadComponent, AbfsReadComponent, HttpReadComponent, MSSQLReadComponent, MySQLReadComponent, OracleReadComponent, PostgresReadComponent, SnowflakeReadComponent, BigQueryReadComponent, DatabricksReadComponent | Yes | The read component that reads data from a data system. |
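As a rough sketch of how the component-level fields above sit alongside the read configuration (assuming, as in the examples, that ReadComponent properties nest directly under component:; the description and skip values here are purely illustrative):

```yaml
component:
  description: Reads the orders table from the Postgres source.  # optional documentation
  skip: false                                                    # process this component normally
  read:
    connection: my-postgres-connection
    postgres:
      table:
        name: my_table
        schema: public
```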
ComponentColumn
Component column expression definition.
No properties defined.
DatabaseReadOptions
Options for reading from a database or warehouse.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| chunk_size | 100000 | integer | No | Number of rows to read from the table at a time. |
| parallel_read | | | No | Options for reading from the source in parallel. |
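A minimal sketch of read_options, assuming it is placed under the read block as the property table above implies; the chunk size and parallel-read settings are illustrative values (see ParallelReadOptions further down for the partitioning fields):

```yaml
component:
  read:
    connection: my-postgres-connection
    read_options:
      chunk_size: 50000        # rows fetched from the table per batch
      parallel_read:
        partition_column: id   # integer or timestamp column with low skew between partitions
        max_partitions: 4      # caps concurrent connections opened against the source
    postgres:
      table:
        name: my_table
        schema: public
```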
IncrementalReadStrategy
Incremental read strategy for database read components. It combines a replication strategy, which defines how new data is read from the source, with an incremental strategy, which defines how that new data is materialized in the output.

| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| replication | | One of: CdcReplication, IncrementalReplication | No | Replication strategy to use for data synchronization. |
| incremental | | Any of: string, MergeStrategy, SCDType2Strategy | Yes | Incremental processing strategy. |
| on_schema_change | | string ("ignore", "fail", "append_new_columns", "sync_all_columns") | No | Policy to apply when schema changes are detected. Defaults to 'fail' if not provided. |
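Pulling these pieces together, here is a hedged sketch of a strategy block that pairs incremental replication with a merge materialization and an explicit schema-change policy, combining the patterns shown in the examples near the top of this page (column names and the unique key are placeholders):

```yaml
component:
  read:
    connection: my-postgres-connection
    strategy:
      replication:
        incremental:
          column_name: updated_at   # column used to find new or changed rows at the source
      incremental:
        merge:
          unique_key: id            # identifies which existing rows to update
      on_schema_change: append_new_columns  # add new source columns instead of failing
    postgres:
      table:
        name: my_table
        schema: public
```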
CdcReplication
Specifies if Change Data Capture (CDC) is the replication strategy.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| cdc | | | No | Resource for Change Data Capture (CDC), enabling incremental data capture based on changes. |
CdcOptions
No properties defined.
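If CDC is available for the source, the replication block would select it instead of an incremental column. Treat the following as a sketch only: CdcOptions defines no properties on this page, so an empty mapping is shown, and the pairing with an append materialization is an illustrative assumption rather than a documented requirement.

```yaml
component:
  read:
    connection: my-postgres-connection
    strategy:
      replication:
        cdc: {}            # use Change Data Capture; no CDC-specific options are defined here
      incremental: append  # materialize the captured changes by appending (assumed pairing)
    postgres:
      table:
        name: my_table
        schema: public
```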
IncrementalReplication
Specifies if incremental data reading is the replication strategy.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| incremental | | | No | Resource for incremental data reading based on a specific column. |
IncrementalColumn
Specifies the column to be used for incremental reading.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| column_name | | string | Yes | Name of the column to use for tracking incremental updates to the data. |
| start_value | | Any of: string, integer, number | No | Initial value to start reading data from the specified column. |
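Building on the incremental example above, a small sketch showing start_value used to set the initial position for the tracking column (the timestamp is a placeholder):

```yaml
component:
  read:
    connection: my-postgres-connection
    strategy:
      replication:
        incremental:
          column_name: updated_at             # tracking column for new or changed rows
          start_value: "2024-01-01 00:00:00"  # initial value to start reading from on the first run
      incremental: append
    postgres:
      table:
        name: my_table
        schema: public
```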
MultipleQueries
Options to define one or more arbitrary SELECT statements. The outputs of the queries are unioned together, so every query must return the same schema.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| queries | | array[string] | No | List of SQL queries to execute for reading data. |
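Assuming query-based reads are configured under the postgres block (the exact nesting for queries is not shown in the examples above, so treat this as a sketch), the results of the two queries below would be unioned and therefore must share the same schema:

```yaml
component:
  read:
    connection: my-postgres-connection
    postgres:
      queries:  # each query must return the same schema; results are unioned
        - "SELECT id, updated_at FROM public.table1"
        - "SELECT id, updated_at FROM public.table2"
```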
MultipleTablesWithSchema
Options for reading from multiple tables in a specific schema.
| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| tables | | array[None] | Yes | List of tables (in specified schemas) to read data from. |
ParallelReadOptions
Options for reading from a source in parallel. Ascend will logically partition the source data based on the partition column and max partitions, and then read each partition in parallel. If lower and upper bounds for the partition column are provided, they are used as hints to guide partitioning by dividing the range into max_partitions roughly equal partitions.

| Property | Default | Type | Required | Description |
|---|---|---|---|---|
| partition_column | | string | Yes | The name of the column to partition the data by. Select a column that can be used to partition the data into smaller chunks and that results in the lowest skew between partitions in terms of record count. This column must be either an integer or a timestamp column. |
| max_partitions | -1 | integer | No | The maximum number of partitions to read concurrently from the source. Since this translates directly to the number of concurrent connections to the source, care should be taken to select a value that does not exceed the source system's connection or other resource limits. A value of -1 means that the value is chosen automatically based on the source. |
| partition_lower_bound | | Any of: integer | | |