Google BigQuery

This article is specific to the following platforms: Snowflake, Redshift, Synapse, and Delta Lake.

This component uses the Google BigQuery API to retrieve data and load it into a table. This stages the data, so the table is reloaded each time. You may then use transformations to enrich and manage the data in permanent tables.

Note: By default, the QueryPassthrough Connection Option is set to true on this component, meaning that SQL queries written in Advanced mode are passed through to BigQuery as-is.
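As a brief illustration, the query below is the kind of SQL-like statement that could be entered in Advanced mode and, with QueryPassthrough left at its default, sent on to BigQuery largely as written. The dataset, table, and column names (mydataset.page_views, page_title, view_count) are hypothetical placeholders rather than part of any real data model; the filter also shows the % wildcard usage described under the Data Source Filter property below.

    SELECT page_title, view_count
    FROM mydataset.page_views
    WHERE page_title LIKE '%pricing%'
      AND view_count >= 100
    LIMIT 1000

If passthrough is not wanted, QueryPassthrough can be set to false via the Connection Options property.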

Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option 'Recreate Target Table' to 'Off' will prevent both recreation and truncation. Do not modify the target table structure manually.


Properties

Snowflake Properties

Property Setting Description
Name String A human-readable name for the component.
Basic/Advanced Mode Select Basic: This mode will build a Google BigQuery query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query, which is translated into one or more Google BigQuery API calls. The available fields and their descriptions are documented in the Google BigQuery data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note: while the query is exposed in an SQL-like language, the exact semantics can be surprising, for example, filtering on a column can return more data than not filtering on it. This is an impossible scenario with regular SQL.
Authentication Select Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorize an OAuth entry, read Google Query Authentication Guide.
Project ID String The Google cloud project ID. Read Projects for more information.
Dataset ID String The Google BigQuery dataset ID. Read Introduction to datasets for more information.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are explained in the Data Model.
They are usually not required, since sensible defaults are assumed.
Value A value for the given Parameter.
Data Source Select Select a data source, for example Likes. If no options are returned, check your OAuth setup, Project ID, and Dataset ID carefully.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: "Equal To", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null".
"Equal to" can match exact strings and numeric values, while other comparators such as "Greater than" will work only with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators; it is likely that only a subset of the above comparators will be available to choose from.
Value The value to be compared.
SQL Query Text This is an SQL-like query, written according to the Google BigQuery Data Model.
Combine Filters Select Select whether to use the defined filters in combination with one another according to either And or Or.
Limit Number Set a numeric value to limit the number of rows that are loaded. Fetching a large number of results from Google BigQuery will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.
Type Select Choose between using a standard table or an external table.
External: The data will be put into an S3 bucket and referenced by an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table. This is the default setting.
Primary Keys Select Select one or more columns to be designated as the table's primary key.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Database Select Choose a database to create the new table in.
Schema Select Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Target Table String Provide a new table name.
Warning: This table will be recreated on each run of the job and will drop any existing table of the same name.
Stage Select Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component. Selecting [Custom] provides all the properties typically seen in the Manage Stages dialog for your input.
If you select a managed stage that has already been configured in Manage Stages, the additional properties are not provided, as they have already been configured.
Manage Stages can be found by clicking the Environments panel in the lower-left, then right-clicking an environment. To learn more, read Manage Stages.
Stage Platform Select Select a staging setting.
Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
(AWS only) Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
(Azure only) Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
(GCP only) Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
Stage Authentication Select (AWS and Azure only) Select an authentication method for data staging.
Credentials: Uses the credentials configured in the Matillion ETL environment. If no credentials have been configured, an error will occur.
Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.
Storage Integration Select Select a Snowflake storage integration from the dropdown list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Microsoft Azure, Google Cloud Storage) and must be set up in advance of selection.
To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide. An illustrative example of creating a storage integration appears after this properties table.
This property is only available when Stage Authentication is set to Storage Integration.
S3 Staging Area Select (AWS only) Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint Boolean When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Please consult Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available at the AWS documentation.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis and may return a validation message that reads OK - Bucket could not be validated. The property is also shown "just in case" if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.
Storage Account Select (Azure Only) Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container Select (Azure Only) Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Staging Area Select (GCP only) The URL and path of the target Google Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.
Encryption Select (AWS Only) Decide how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID Select (AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.
Load Options Multiple Select Clean Staged Files: Destroy staged files after loading data. Default is On.
String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
Compression Type: Set the compression type to either gzip or None. The default is gzip.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
New Table Name String Specify the name of the new table to be created.
This property is only available when Type is set to External.
Stage Database Select Specify the stage database. The special value, [Environment Default], will use the database defined in the environment.
This property is only available when Type is set to External.
Stage Schema Select Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment.
This property is only available when Type is set to External.
Stage Select Select a stage.
This property is only available when Type is set to External.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
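As noted for the Storage Integration property above, the integration must be created in Snowflake before it can be selected in this component. A minimal, S3-backed sketch of such an object is shown below; the integration name, IAM role ARN, and bucket path are placeholder values and will differ for your own environment. Refer to the Storage Integration Setup Guide for the authoritative steps.

    CREATE STORAGE INTEGRATION my_s3_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/matillion-staging-role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/matillion/');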

Redshift Properties

Property Setting Description
Name String A human-readable name for the component.
Basic/Advanced Mode Select Basic: This mode will build a Google BigQuery query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query, which is translated into one or more Google BigQuery API calls. The available fields and their descriptions are documented in the Google BigQuery data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note: while the query is exposed in an SQL-like language, the exact semantics can be surprising, for example, filtering on a column can return more data than not filtering on it. This is an impossible scenario with regular SQL.
Authentication Select Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorize an OAuth entry, read Google Query Authentication Guide.
Project ID String The Google cloud project ID. Read Projects for more information.
Dataset ID String The Google BigQuery dataset ID. Read Introduction to datasets for more information.
Connection Options Parameter A JDBC parameter supported by the database driver. The available parameters are explained in the Google BigQuery data model.
Manual setup is not usually required, since sensible defaults are assumed.
Value A value for the given Parameter.
SQL Query String Input an SQL-like query, written according to the Google BigQuery data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source Select Select a data source.
Data Selection Dual Listbox Select one or more columns from the chosen data source to return from the query.
Data Source Filter Input Column Select an input column for your filter. The available input columns vary depending upon the data source.
Qualifier Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
Comparator Select the comparator. Note: Not all comparators will work with all possible data sources.
Choose one of "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", or "Like".
"Equal to" can match exact strings and numeric values, while other comparators such as "Greater than" and "Less than" will work only with numerics. The "Like" comparator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to.
Note: Not all data sources support all comparators, meaning that, often, only a subset of the above comparators will be available for selection.
Value Specify the value to be compared.
Combine Filters Select Use the defined filters in combination with one another according to either And or Or.
Limit Integer Set a numeric value to limit the number of rows that can be loaded. Fetching a large number of results will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.
Type Select Choose between using a standard table or an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table.
External: The data will be put into an S3 bucket and referenced by an external table.
Schema Select Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Note: An external schema is required if the Type property is set to "External".
Target Table String Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Location S3 Bucket Select an S3 bucket path that will be used to store the data. Once the data is on an S3 bucket, it can be referenced by an external table.
This property is only available when the Type property is set to "External".
S3 Staging Area Select Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint Boolean When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Please consult Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available at the AWS documentation.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis and may return a validation message that reads OK - Bucket could not be validated. The property is also shown "just in case" if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.
Distribution Style Select All: Copy rows to all nodes in the Redshift cluster.
Auto: (Default) Allow Redshift to manage your distribution style.
Even: Distribute rows around the Redshift cluster evenly.
Key: Distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance. See the Distribution styles documentation for more information.
Sort Key Multiple Select This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.
Sort keys are critical to good performance. Read Working with sort keys for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety. Read Working with sort keys for more information. An illustrative DDL sketch showing distribution style and sort key settings appears after this properties table.
Primary Key Multiple Select Select one or more columns to be designated as the table's primary key.
Load Options Multiple Select Columns Comp Update: Apply automatic compression to the target table. Default is On.
Stat Update: Automatically update statistics when filling a table. Default is On. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket. Default is On. Effectively, users decide here whether to keep the staged data in the S3 bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is On.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
Compression Type: Set the compression type to either gzip or None. The default is gzip.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Encryption Select Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID Select The ID of the KMS encryption key you have chosen to use in the Encryption property.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
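As referenced under the Sort Key Options property, the sketch below shows roughly how the Distribution Style, Sort Key, and Sort Key Options settings correspond to Redshift table DDL. It is illustrative only; the table and column names are placeholders, and the component generates its own DDL from the selected data source.

    CREATE TABLE bq_data (
        page_title VARCHAR(256),
        view_count INTEGER
    )
    DISTSTYLE KEY
    DISTKEY (page_title)
    COMPOUND SORTKEY (view_count);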

Synapse Properties

Property Setting Description
Name String A human-readable name for the component.
Basic/Advanced Mode Select Basic: This mode will build a Google BigQuery query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query, which is translated into one or more Google BigQuery API calls. The available fields and their descriptions are documented in the Google BigQuery data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note: while the query is exposed in an SQL-like language, the exact semantics can be surprising, for example, filtering on a column can return more data than not filtering on it. This is an impossible scenario with regular SQL.
Authentication Select Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorize an OAuth entry, read Google Query Authentication Guide.
Project ID String The Google cloud project ID. Read Projects for more information.
Dataset ID String The Google BigQuery dataset ID. Read Introduction to datasets for more information.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are explained in the Data Model.
They are usually not required, since sensible defaults are assumed.
Value A value for the given Parameter.
Data Source Select Select a data source, for example Likes. If no options are returned, check your OAuth setup, Project ID, and Dataset ID carefully.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: "Equal To", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null".
"Equal to" can match exact strings and numeric values, while other comparators such as "Greater than" will work only with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators; it is likely that only a subset of the above comparators will be available to choose from.
Value The value to be compared.
SQL Query Text This is an SQL-like query, written according to the Google BigQuery Data Model.
Combine Filters Select Select whether to use the defined filters in combination with one another according to either And or Or.
Limit Number Set a numeric value to limit the number of rows that are loaded. Fetching a large number of results from Google BigQuery will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.

Delta Lake Properties

Property Setting Description
Name String A human-readable name for the component.
Basic/Advanced Mode Select Basic: This mode will build a Google BigQuery query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query, which is translated into one or more Google BigQuery API calls. The available fields and their descriptions are documented in the Google BigQuery data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note: while the query is exposed in an SQL-like language, the exact semantics can be surprising, for example, filtering on a column can return more data than not filtering on it. This is an impossible scenario with regular SQL.
Authentication Select Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorize an OAuth entry, read Google Query Authentication Guide.
Project ID String The Google cloud project ID. Read Projects for more information.
Dataset ID String The Google BigQuery dataset ID. Read Introduction to datasets for more information.
Connection Options Parameter A JDBC parameter supported by the database driver. The available parameters are explained in the Google BigQuery data model.
Manual setup is not usually required, since sensible defaults are assumed.
Value A value for the given Parameter.
SQL Query String Input an SQL-like query, written according to the Google BigQuery data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source Select Select a data source.
Data Selection Dual Listbox Select one or more columns from the chosen data source to return from the query.
Data Source Filter Input Column Select an input column for your filter. The available input columns vary depending upon the data source.
Qualifier Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
Comparator Select the comparator. Note: Not all comparators will work with all possible data sources.
Choose one of "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", or "Like".
"Equal to" can match exact strings and numeric values, while other comparators such as "Greater than" and "Less than" will work only with numerics. The "Like" comparator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to.
Note: Not all data sources support all comparators, meaning that, often, only a subset of the above comparators will be available for selection.
Value Specify the value to be compared.
Combine Filters Select Use the defined filters in combination with one another according to either And or Or.
Limit Integer Set a numeric value to limit the number of rows that can be loaded. Fetching a large number of results will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.
Catalog Select Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Database Select Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
Table String Specify the new table name.
S3 Staging Area Select (AWS only) Select an S3 bucket for staging.
Storage Account Select (Azure only) Select an Azure Blob Storage account. An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. For more information, read Storage account overview.
Blob Container Select (Azure only) A Blob Storage location. The available blob containers will depend on the selected storage account.
Encryption Select (AWS only) Specify how files are encrypted inside the S3 Bucket.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID Select (AWS only) The ID of the KMS encryption key you have chosen to use in the Encryption property.
Load Options Multiple Select Clean Staged Files: Destroy staged files after loading data. The default is On.
String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. The default is Off.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. The default is On.
File Prefix: Give staged file names a prefix of your choice. The default is an empty field (no prefix).
Compression Type: Set the compression type to either gzip or None. The default is gzip.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the data source and upload it to S3.
Time Taken To Load The amount of time (in seconds) taken to execute a COPY statement to load the data into the target table from S3.
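A minimal sketch of how these exports might be consumed: assuming the two values are mapped to job variables named time_taken_to_stage and time_taken_to_load (hypothetical names, mapped in the component's export settings), a downstream SQL component could write them to an audit table such as the one below. The load_audit table and the ${...} variable interpolation shown here are illustrative assumptions, not features documented on this page.

    INSERT INTO load_audit (component_name, stage_seconds, load_seconds)
    VALUES ('Google BigQuery Query', ${time_taken_to_stage}, ${time_taken_to_load});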

Strategy

Connect to the target system and issue the API calls. Stream the results into objects on cloud storage. Then, create or truncate the target table and issue a COPY command to load the cloud storage objects into the table. Finally, clean up the temporary cloud storage objects.
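As an illustration of this pattern only (not the component's literal SQL), the Snowflake variant might resemble the following, assuming a hypothetical internal stage named bq_stage and gzip-compressed staged files; other platforms follow the same stage-then-bulk-load shape using their own commands.

    CREATE OR REPLACE TABLE bq_data (page_title VARCHAR, view_count NUMBER);

    COPY INTO bq_data
    FROM @bq_stage/bq_data/
    FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);

    REMOVE @bq_stage/bq_data/;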

