Rewrite Table
Write the input data flow out to a new table.
Runtime errors may occur, for example if a data value overflows the maximum allowed size of a field.
Note: The output table is overwritten each time the component is executed, so do not use this component to output data that you do not want to be overwritten.
Properties

Snowflake Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Warehouse | Select | Select the Snowflake Warehouse. The special value, [Environment Default], will use the warehouse defined in the Matillion ETL environment. For more information, read Virtual Warehouses. |
Database | Select | Select the Snowflake database. The special value, [Environment Default], will use the database defined in the Matillion ETL environment. For more information, read Databases, Tables, & Views. |
Schema | Select | Select the Snowflake schema. The special value, [Environment Default], will use the schema defined in the Matillion ETL environment. For more information, read Database, Schema, & Share DDL. |
Target Table | String | Provide a new table name. Note: Previous versions of this component prepended t_ to the target table name to help avoid clashing with existing tables; however, this is no longer the case. |
Order By | Column | Select the column(s) to sort by. |
| | Sort Order | Set the corresponding column to be ordered ascending or descending. The default sort order is ascending. |
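
As a rough illustration of how these settings fit together, the sketch below shows the kind of Snowflake SQL this configuration implies: the input flow is rewritten into the target table, ordered by the Order By columns. The database, table, and column names are hypothetical, and the component's actual generated SQL may differ.

```sql
-- Minimal sketch only; stg_flights stands in for the input flow and all names are hypothetical.
CREATE OR REPLACE TABLE analytics.public.flight_output AS
SELECT *
FROM analytics.public.stg_flights
ORDER BY year ASC, month ASC;   -- Order By columns with their Sort Order settings
```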

Redshift Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article. |
Target Table | Select | Choose a target table name. Note: Older versions of this component prepended 't_' to the name to help avoid clashing with existing tables; however, this is no longer the case. |
Table Sort Key | Select | This is optional, and specifies the columns from the input that should be set as the table's sort-key. Sort-keys are critical to good performance - see the Amazon Redshift documentation for more information. |
Sort Key Options | Select | Choose the type of sort key to be used. Compound: A compound key is made up of all of the columns listed in the sort key definition, in the order they are listed. Most useful for tables that will be queried with filters using prefixes of the sort keys. Interleaved: An interleaved sort gives equal weight to each column, or subset of columns, in the sort key. Most useful when multiple queries use different columns for filters. |
Table Distribution Style | Select | Even: Distribute rows around the Redshift cluster evenly. All: Copy rows to all nodes in the Redshift cluster. Key: Distribute rows around the Redshift cluster according to the value of a key column. Table distribution is critical to good performance - see the Amazon Redshift documentation for more information. |
Table Distribution Key | Select | This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on. |
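
To make the sort key and distribution properties concrete, the sketch below shows Redshift DDL of the kind they correspond to. The table and column names are hypothetical, and the SQL the component actually generates may differ.

```sql
-- Minimal sketch only; all names are hypothetical.
CREATE TABLE flight_output (
    year    INTEGER,
    month   INTEGER,
    airtime BIGINT
)
DISTSTYLE KEY                       -- Table Distribution Style: Key (alternatives: EVEN, ALL)
DISTKEY (year)                      -- Table Distribution Key
COMPOUND SORTKEY (year, month);     -- Table Sort Key with Sort Key Options: Compound
-- An Interleaved sort key would instead be written as INTERLEAVED SORTKEY (year, month).
```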

BigQuery Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Target Project | Text | Enter the name of the Google Cloud Platform Project that the table belongs to. |
Dataset | Text | Enter the name of the Google Cloud Platform Dataset that the table belongs to. |
Target Table | Select | Choose a target table name. Note: Older versions of this component prepended 't_' to the name to help avoid clashing with existing tables; however, this is no longer the case. |
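
For orientation, the sketch below shows the shape of BigQuery SQL these properties identify: a table in a given project and dataset that is replaced on each run. The project, dataset, and table names are hypothetical, and the component's actual generated SQL may differ.

```sql
-- Minimal sketch only; stg_flights stands in for the input flow and all names are hypothetical.
CREATE OR REPLACE TABLE `my-project.my_dataset.flight_output` AS
SELECT *
FROM `my-project.my_dataset.stg_flights`;
```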

Synapse Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, please see the Azure Synapse documentation. |
Table | String | Specify the name of the table to create. Note: This field is case-sensitive by default, since Matillion ETL uses quoted identifiers. For more information, please refer to the Azure Synapse documentation. |
Distribution Style | Select | Select the distribution style. Hash: This setting assigns each row to one distribution by hashing the value stored in the distribution_column_name. The algorithm is deterministic, meaning it always hashes the same value to the same distribution. The distribution column should be defined as NOT NULL, because all rows that have NULL are assigned to the same distribution. Replicate: This setting stores one copy of the table on each Compute node. For SQL Data Warehouse, the table is stored on a distribution database on each Compute node. For Parallel Data Warehouse, the table is stored in an SQL Server filegroup that spans the Compute node. This behaviour is the default for Parallel Data Warehouse. Round Robin: Distributes the rows evenly in a round-robin fashion. This is the default behaviour. For more information, please read this article. |
Distribution Column | Select | Select the column to act as the distribution column. This property is only available when the Distribution Style property is set to "Hash". |
Partition Key | Select | Select the table's partition key. Table partitions determine how rows are grouped and stored within a distribution. For more information on table partitions, please refer to this article. |
Index Type | Select | Select the table indexing type. Options include: Clustered: A clustered index may outperform a clustered columnstore table when a single row needs to be retrieved quickly. The disadvantage to using a clustered index is that the only queries that benefit are the ones that use a highly selective filter on the clustered index column. Choosing this option prompts the Index Column Grid property. Clustered Column Store: This is the default setting. Clustered columnstore tables offer both the highest level of data compression and the best overall query performance, especially for large tables. Choosing this option prompts the Index Column Order property. Heap: Users may find that using a heap table is faster for temporarily landing data in Synapse SQL pool. This is because loads to heaps are faster than to index tables, and in some cases, the subsequent read can be done from cache. When a user is loading data only to stage it before running additional transformations, loading the table to a heap table is much faster than loading the data to a clustered columnstore table. For more information, please consult the Azure Synapse documentation. |
Index Column Grid | Name | The name of each column. |
| | Sort | Assign a sort orientation of either ascending (Asc) or descending (Desc). |
Index Column Order | Multiple Select | Select the columns in the order to be indexed. |
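
As a rough guide to how these options map onto dedicated SQL pool DDL, the sketch below uses a CREATE TABLE AS SELECT statement with a hash distribution, a clustered columnstore index, and a partitioned column. All names and partition boundary values are hypothetical, and the SQL the component generates may differ.

```sql
-- Minimal sketch only; dbo.stg_flights stands in for the input flow and all names are hypothetical.
CREATE TABLE dbo.flight_output
WITH
(
    DISTRIBUTION = HASH (year),                            -- Distribution Style: Hash, Distribution Column: year
    CLUSTERED COLUMNSTORE INDEX,                           -- Index Type: Clustered Column Store (the default)
    PARTITION ( month RANGE RIGHT FOR VALUES (4, 7, 10) )  -- Partition Key: month
)
AS
SELECT year, month, airtime
FROM dbo.stg_flights;
```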

Delta Lake Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Catalog | Select | Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter. |
Database | Select | Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup. |
Table | Select | The name of the output table. |
Partition Keys | Column Select | Select any input columns to be used as partition keys. |
Table Properties | Key | A metadata property within the table. These are expressed as key=value pairs. |
| | Value | The value of the corresponding row's key. |
Comment | String | A descriptive comment for the table. |
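
To show how the partition keys, table properties, and comment fit together, the sketch below uses Databricks Delta Lake DDL. The catalog, database, table, and property values are hypothetical, and the component's actual generated SQL may differ.

```sql
-- Minimal sketch only; main.analytics.stg_flights stands in for the input flow and all names are hypothetical.
CREATE OR REPLACE TABLE main.analytics.flight_output
USING DELTA
PARTITIONED BY (year, month)                     -- Partition Keys
TBLPROPERTIES ('delta.appendOnly' = 'false')     -- Table Properties (key=value pairs)
COMMENT 'Rewritten each time the job runs'       -- Comment
AS
SELECT *
FROM main.analytics.stg_flights;
```
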
Strategy
Drop and recreate a target table, and at runtime perform a bulk-insert from the input flow.
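
In generic SQL terms, the strategy is roughly the pattern sketched below; platform-specific options such as sort keys or distribution styles are added as configured. The table names are hypothetical, and the component's actual generated SQL may differ.

```sql
-- Minimal sketch only; stg_flights stands in for the input flow and all names are hypothetical.
DROP TABLE IF EXISTS flight_output;

CREATE TABLE flight_output (
    year    INTEGER,
    month   INTEGER,
    airtime BIGINT
);

-- At runtime the input flow is bulk-inserted into the newly created table.
INSERT INTO flight_output
SELECT year, month, airtime
FROM stg_flights;
```
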
Example
A sum of airtime, grouped by Year and Month, is written to t_airtime_totals each time the job is run.
The output table has the sort-key set to Year and Month, and is distributed by Year.
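
In Redshift-style SQL, that example corresponds roughly to the sketch below. The source table name (flights) and the total column name are hypothetical, and the component's actual generated SQL may differ.

```sql
-- Minimal sketch only; flights stands in for the input flow.
DROP TABLE IF EXISTS t_airtime_totals;

CREATE TABLE t_airtime_totals (
    year          INTEGER,
    month         INTEGER,
    airtime_total BIGINT
)
DISTSTYLE KEY
DISTKEY (year)                      -- distributed by Year
COMPOUND SORTKEY (year, month);     -- sort-key set to Year and Month

INSERT INTO t_airtime_totals
SELECT year, month, SUM(airtime)
FROM flights
GROUP BY year, month;
```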