Gmail Query

This component retrieves data from a Gmail inbox and loads it into a table. The data is staged, so the table is reloaded each time the component runs. You may then use transformations to enrich and manage the data in permanent tables.

Note that to connect to a Gmail account (particularly via the username and password method), some preparatory steps may be required to enable access to that account:

  1. Log into your Gmail account.
  2. Enable IMAP.
  3. Enable 'less secure apps'.
  4. Enable the account.


Redshift Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic: This mode will build a Query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query. The available fields and their descriptions are documented in the Data Model.
Authentication Method Choice Select an authentication method, which must be set up in advance.
Google uses the OAuth standard for authenticating third-party applications. For help authenticating the Gmail Query component, please refer to our Gmail Query Authentication Guide.
Authentication Select Select the set of credentials to use to connect to Gmail. These must be set up in advance, using Project → Manage OAuth. (Only if Authentication Method is set to OAuth.)
Username Text Username, including domain, for the Gmail account.
Password Text Login password for the Gmail account. Users have the option to store their password inside the component, but we highly recommend using the Password Manager option.
SQL Query Text Custom SQL-like query, only available in 'Advanced' mode. (An illustrative example query follows this table.)
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are explained in the Data Model.
They are usually not required, as sensible defaults are assumed.
Value A value for the given Parameter.
Data Source Choice Select a data source from the server.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Comparator Choose one of: Equal To, Greater than, Less than, Greater than or equal to, Less than or equal to, Like.
Note: Not all comparators will work with all possible profiles.
Value The value to be compared.
Combine Filters Select Select whether to use the defined filters in combination with one another according to either And or Or.
Limit Number Limits the number of rows that are loaded from the data source.
Primary Keys Select Select one or more columns to be designated as the table's primary key.
Type Select Choose between using a standard table or an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table.
External: The data will be put into an S3 Bucket and referenced by an external table.
Schema Select Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article. Note: An external schema is required if the 'Type' property is set to 'External'.
Target Table String Provide a new table name.
Warning: This table will be recreated on each run of the job and will drop any existing table of the same name.
Location S3 Bucket Select an S3 bucket path that will be used to store the data. Once the data is on an S3 bucket, it can be referenced by an external table.
This property is only available when the Type property is set to "External".
S3 Staging Area Text The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Distribution Style Select Auto: (Default) Allow Redshift to manage your distribution style.
Even: Distributes rows around the Redshift cluster evenly.
All: Copy rows to all nodes in the Redshift cluster.
Key: Distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance. See the Amazon Redshift documentation for more information.
Table Distribution Key Select This is only displayed if the Distribution Style property is set to Key. It is the column used to determine which cluster node the row is stored on.
Table Sort Key Select This is optional, and specifies the columns from the input that should be set as the table's sort key.
Sort keys are critical to good performance - see the Amazon Redshift documentation for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.
Load Options Multiple Select Comp Update: Apply automatic compression to the target table. Default is On.
Stat Update: Automatically update statistics when filling a table. Default is On. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects from the S3 bucket. Default is On. Effectively, users decide here whether to keep the staged data in the S3 bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is On.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
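
As a brief illustration of the Advanced-mode SQL Query property: the component accepts an SQL-like query against the Gmail data model. The table and column names below (INBOX, Subject, From, Date) are assumptions for illustration only; the fields actually available, and their descriptions, are documented in the Data Model.

    -- Hypothetical Advanced-mode query (illustrative names; check the Data Model):
    SELECT "Subject", "From", "Date"
    FROM "INBOX"
    WHERE "Date" >= '2020-01-01'
    LIMIT 100

In Basic mode, the equivalent behaviour is configured through the Data Source, Data Selection, Data Source Filter and Limit properties instead.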

Snowflake Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic: This mode will build a Query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query. The available fields and their descriptions are documented in the Data Model.
Authentication Method Choice Select an authentication method, which must be set up in advance.
Google uses the OAuth standard for authenticating third-party applications. For help authenticating the Gmail Query component, please refer to our Gmail Query Authentication Guide.
Authentication Select Select the set of credentials to use to connect to Gmail. These must be set up in advance, using Project → Manage OAuth. (Only if Authentication Method is set to OAuth.)
Username Text Username, including domain, for the Gmail account.
Password Text Login password for the Gmail account. Users have the option to store their password inside the component, but we highly recommend using the Password Manager option.
SQL Query Text Custom SQL-like query, only available in 'Advanced' mode.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are explained in the Data Model.
They are usually not required, as sensible defaults are assumed.
Value A value for the given Parameter.
Data Source Choice Select a data source from the server.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Comparator Choose one of: Equal To, Greater than, Less than, Greater than or equal to, Less than or equal to, Like.
Note: Not all comparators will work with all possible profiles. (An illustrative filter example follows this table.)
Value The value to be compared.
Combine Filters Select Select whether to use the defined filters in combination with one another according to either And or Or.
Limit Number Limits the number of rows that are loaded from the data source.
Primary Keys Select Select one or more columns to be designated as the table's primary key.
Schema Select Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Target Table String Provide a new table name.
Warning: This table will be recreated on each run of the job and will drop any existing table of the same name.
Storage Account Select (Azure Only) Select a Storage Account with your desired Blob Container to be used for staging the data.
Blob Container Select (Azure Only) Select a Blob Container to be used for staging the data.
Staging Select Select a staging setting.
Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
(AWS only) Existing Amazon S3 Location: Selecting this will offer the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3.
(Azure only) Existing Azure Blob Storage Location: Selecting this will offer the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure.
(GCP only) Existing Google Cloud Storage Location: Selecting this will offer the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
GCS Staging Area Select (GCP only) The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data.
Integration Select Choose your Google Cloud Storage Integration. Integrations are required to permit Snowflake to read data from and write to a Google Cloud Storage bucket. Integrations must be set up in advance of selecting them in Matillion ETL. To learn more about setting up a storage integration, read our Storage Integration Setup Guide.
S3 Staging Area Text (AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Database Select Choose a database to create the new table in.
Encryption Select (AWS Only) Decide how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data according to a key stored on an S3 bucket.
KMS Key ID Select (AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.
Load Options Multiple Select Clean Staged Files: Destroy staged files after loading data. Default is On.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is Off.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
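
To illustrate how Data Source Filter entries combine: two filters such as Subject / Is / Like / 'Invoice%' and From / Is / Equal To / 'billing@example.com', with Combine Filters set to And, correspond conceptually to the WHERE clause sketched below. The column names and values are assumptions for illustration; the component builds the actual query for you.

    -- Conceptual equivalent of two Basic-mode filters combined with And:
    WHERE "Subject" LIKE 'Invoice%'
      AND "From" = 'billing@example.com'

Setting Combine Filters to Or would instead match rows that satisfy either condition.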

BigQuery Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic: This mode will build a Query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode will require you to write an SQL-like query. The available fields and their descriptions are documented in the Data Model.
Authentication Method Choice Select an authentication method, which must be set up in advance.
Google uses the OAuth standard for authenticating third-party applications. For help authenticating the Gmail Query component, please refer to our Gmail Query Authentication Guide.
Authentication Select Select the set of credentials to use to connect to Gmail. These must be set up in advance, using Project → Manage OAuth. (Only if Authentication Method is set to OAuth.)
Username Text Username, including domain, for the Gmail account.
Password Text Login password for the Gmail account. Users have the option to store their password inside the component, but we highly recommend using the Password Manager option.
SQL Query Text Custom SQL-like query, only available in 'Advanced' mode.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are explained in the Data Model.
They are usually not required, as sensible defaults are assumed.
Value A value for the given Parameter.
Data Source Choice Select a data source from the server.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Comparator Choose one of: Equal To, Greater than, Less than, Greater than or equal to, Less than or equal to, Like.
Note: Not all comparators will work with all possible profiles.
Value The value to be compared.
Combine Filters Select Select whether to use the defined filters in combination with one another according to either And or Or.
Limit Number Limits the number of rows that are loaded from the data source.
Project Text The target BigQuery project to load data into.
Dataset Text The target BigQuery dataset to load data into.
Target Table String Provide a new table name.
Warning: This table will be recreated on each run of the job and will drop any existing table of the same name.
Cloud Storage Staging Area Text The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. (See the example following this table.)
Load Options Multiple Select Clean Cloud Storage Files: Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.
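
As an illustration, the Cloud Storage Staging Area (like the GCS Staging Area property on Snowflake deployments) takes a gs:// URL pointing at a bucket and an optional path. The bucket and folder names below are placeholders:

    gs://my-staging-bucket/matillion/gmail

Ensure the credentials in use can write to (and delete from) this location so that staged files can be created and cleaned up after the load.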

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.
Time Taken To Load The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.
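
These exports can be mapped to variables on the component's Export tab and then referenced in later components using Matillion's ${variable} syntax. As a sketch, assuming Time Taken To Load has been mapped to a job variable named time_taken_load (an illustrative name you define yourself), a downstream SQL component could record it for auditing:

    -- Hypothetical audit query; the variable name is an assumption.
    SELECT '${time_taken_load}' AS load_seconds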