Zendesk Chat Extract
The Zendesk Chat Extract component calls the Zendesk Chat API to retrieve and store data to be either referenced by an external table or loaded into a table, depending on the user's cloud data warehouse. Users can then transform their data with the Matillion ETL library of transformation components.
This component may return structured (nested) data that requires flattening. For help with flattening such data, we recommend using the Nested Data Load Component for Amazon Redshift and the Extract Nested Data Component for Snowflake or Google BigQuery.
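Those components handle flattening inside the warehouse. Purely as an illustration of what flattening means for this data, the Python sketch below collapses a nested record into single-level columns. The record structure shown is hypothetical; real Zendesk Chat payloads vary by data source.

```python
# A minimal sketch of flattening a nested record into flat columns.
# The field names below are hypothetical, for illustration only.

def flatten(record, parent_key="", sep="."):
    """Recursively flatten nested dicts into a single-level dict
    with dot-separated keys, suitable for loading as table columns."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

chat = {
    "id": "1904.abc123",
    "visitor": {"name": "Jane", "email": "jane@example.com"},
    "session": {"browser": "Firefox", "country_code": "GB"},
}

print(flatten(chat))
# {'id': '1904.abc123', 'visitor.name': 'Jane',
#  'visitor.email': 'jane@example.com', 'session.browser': 'Firefox',
#  'session.country_code': 'GB'}
```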
Properties
Snowflake Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Data Source | Select | Please select the Zendesk Chat data source from the available options. |
OAuth | Select | The name of the OAuth entry that has been configured for this service. For help with creating and authorising an OAuth entry, please refer to our Zendesk Chat Authentication Guide. |
Start Time | Timestamp | Takes a Unix epoch time in seconds. To learn more, see EpochConverter, or the short conversion sketch after this table. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Start Id | String | Accepts a valid chat ID or a valid agent ID. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Limit | Integer | The maximum number of records (rows). Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Page Limit | Number | Limit the number of pages to stage. |
Location | Storage Location | Provide an S3 bucket path, GCS bucket path, or Azure Blob Storage path that will be used to store the data. Once on an S3 bucket, GCS bucket or Azure Blob, the data can be referenced by an external table. A folder will be created at this location with the same name as the Target Table. |
Integration | Select | Choose your Google Cloud Storage Integration. Integrations are required to permit Snowflake to read data from and write to a Google Cloud Storage bucket. Integrations must be set up in advance of selecting them in Matillion ETL. To learn more about setting up a storage integration, read our Storage Integration Setup Guide. |
Warehouse | Select | Choose a Snowflake warehouse that will run the load. |
Database | Select | Choose a database to create the new table in. |
Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, please refer to this article. |
Target Table | String | Provide a new table name. Warning: Upon running the job, this table will be recreated and will drop any existing table of the same name. |
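The Start Time property expects Unix epoch seconds. Besides EpochConverter, a quick way to compute a value is with Python's standard library, as in the sketch below (the example date is arbitrary):

```python
# Convert a human-readable UTC timestamp into the Unix epoch seconds
# expected by the Start Time property.
from datetime import datetime, timezone

start = datetime(2023, 6, 1, tzinfo=timezone.utc)  # example date
print(int(start.timestamp()))  # 1685577600
```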
Redshift Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Data Source | Select | Please select the Zendesk Chat data source from the available options. |
OAuth | Select | The name of the OAuth entry that has been configured for this service. For help with creating and authorising an OAuth entry, please refer to our Zendesk Chat Authentication Guide. |
Start Time | Timestamp | Takes a Unix epoch time in seconds. To learn more, see EpochConverter. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Start Id | String | Accepts a valid chat ID or a valid agent ID. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Limit | Integer | The maximum number of records (rows). Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Page Limit | Number | Limit the number of pages to stage (see the paging sketch after this table). |
Location | Storage Location | Provide an S3 bucket path that will be used to store the data. Once on an S3 bucket, the data can be referenced by an external table. A folder will be created at this location with the same name as the target table. |
Type | Dropdown | Select between a standard table and an external table. |
Standard Schema | Dropdown | Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the Matillion ETL environment. |
External Schema | Select | Select the table's external schema. To learn more about external schemas, please read our support documentation Getting Started With Amazon Redshift Spectrum. |
Target Table | String | Provide a name for the external table to be used. Warning: Upon running the job, this table will be recreated and will drop any existing table of the same name. |
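To illustrate how Start Time, Limit, and Page Limit relate for the incremental data sources, the sketch below pages through one endpoint. It assumes the general shape of the Zendesk Chat Incremental Export API; the endpoint URL and response fields such as `next_page` and `count` are assumptions, so consult Zendesk's API reference. Matillion ETL performs this paging for you.

```python
# Conceptual sketch of incremental paging. The endpoint URL and the
# "chats", "next_page", and "count" response fields are assumptions
# about the Zendesk Chat Incremental Export API.
import requests

BASE = "https://www.zopim.com/api/v2/incremental/chats"  # assumed endpoint
TOKEN = "YOUR_OAUTH_TOKEN"  # placeholder

def fetch_incremental_chats(start_time, limit=1000, page_limit=5):
    """Fetch up to `page_limit` pages, `limit` records per page,
    starting from the given Unix epoch `start_time`."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    url = f"{BASE}?start_time={start_time}&limit={limit}"
    pages = []
    for _ in range(page_limit):  # Page Limit caps the pages staged
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        body = response.json()
        pages.append(body.get("chats", []))
        url = body.get("next_page")
        if not url or body.get("count", 0) < limit:
            break  # no further pages to fetch
    return pages
```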
BigQuery Properties

Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Data Source | Select | Please select the Zendesk Chat data source from the available options. |
OAuth | Select | The name of the OAuth entry that has been configured for this service. For help with creating and authorising an OAuth entry, please refer to our Zendesk Chat Authentication Guide. |
Start Time | Timestamp | Takes a Unix epoch time in seconds. To learn more, see EpochConverter. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Start Id | String | Accepts a valid chat ID or a valid agent ID. Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Limit | Integer | The maximum number of records (rows). Only available when Data Source is set to one of the following: Incremental Agent Timeline, Incremental Chats, Incremental Conversions, or Incremental Department Events. |
Page Limit | Integer | Limit the number of pages of records to be returned and staged. |
Table Type | Select | Select whether the table is Native (the BigQuery default) or an external table. |
Project | Select | Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. To learn more, read Creating and managing projects. |
Dataset | Select | Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. To learn more, read Introduction to datasets. |
Target Table | String | A name for the table. Warning: This table will be recreated and will drop any existing table of the same name. Only available when the table type is Native. |
New Target Table | String | A name for the new external table. Only available when the table type is External. |
Cloud Storage Staging Area | Cloud Storage Bucket | Specify the target Google Cloud Storage bucket to be used for staging the queried data. Only available when the table type is Native. |
Location | Cloud Storage Bucket | Specify the target Google Cloud Storage bucket to be used for staging the queried data. Only available when the table type is External. |
Load Options | Multiple Select | Clean Cloud Storage Files: Destroy staged files on Cloud Storage after loading data. Default is On. Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field. Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On. Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default. |
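As a rough illustration of what the Clean Cloud Storage Files load option accomplishes, the sketch below deletes staged files under a prefix using the google-cloud-storage client. The bucket and prefix names are hypothetical, and Matillion ETL performs this cleanup itself when the option is On.

```python
# Illustration only: deleting staged files under a prefix, which is
# the effect of the "Clean Cloud Storage Files" load option. The
# bucket and prefix names are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-staging-bucket")             # hypothetical bucket
for blob in bucket.list_blobs(prefix="zendesk_chat/"):  # hypothetical prefix
    blob.delete()
```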