This article is specific to the following platforms: Snowflake and BigQuery.

Zuora Bulk Query
The Zuora Bulk Query component uses the Zuora AQuA API to retrieve data and load it into a table. This action stages the data, so the table is reloaded each time. Users can then transform their data with the Matillion ETL library of transformation components.
Important Information
- With certain data sources, the data returned by this component may include additional, unexpected columns that were not selected in the Data Selection property. Users can avoid this by unchecking the Automatically include additional Currency Conversion information in data source exports checkbox within Zuora.
- To do this, within Zuora navigate to Finance Settings → Manage Currency Conversion and uncheck the box. For more information, please consult the Zuora documentation.
- Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Where applicable, setting the Load Option Recreate Target Table to "Off" will prevent both recreation and truncation. Do not modify the target table structure manually.
Properties
Snowflake Properties
Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Basic/Advanced Mode | Select | Basic: this mode constructs a Zuora query using user-specified settings from the Data Source, Data Selection, and Data Source Filter properties. This is the default setting. Advanced: this mode requires users to write an SQL-like query, which is translated into one or more Zuora AQuA API calls. |
Authentication Method | Select | Select the authentication method. Users can choose between an OAuth, configured via Manage OAuth, or a username and password attached to a Zuora account. |
Endpoint | Select | Select the Zuora endpoint for the service. Users can select one of EU Central Sandbox, EU Production, EU Sandbox, US Production, or US Sandbox. |
ZOQL Query Type | Select | Select the ZOQL query type through which to create a data source export via the Zuora API. zoql: Zuora Object Query Language. zoqlexport: Zuora Export Zuora Object Query Language. When "zoqlexport" is selected, users will find that the table name is appended to the column name in the return data—this is not the case when "zoql" is selected. Certain data sources will only work when this property is set to "zoqlexport". |
Show Deleted Records | Select | Yes: the query will return both deleted and active records. No: the query will only return active records. This is the default setting. This property is only available when ZOQL Query Type is set to zoqlexport. |
Authentication | Select | Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorise an OAuth entry, please read our Zuora Bulk Query Authentication Guide. |
Username | String | Input a valid Zuora username. |
Password | String | Input the password that corresponds to the Zuora username. |
Max Wait Minutes | Integer | Specify the maximum number of minutes to wait for the query to return before the job times out. The default is 60 minutes. |
Entity ID | String | Specify the Zuora entity to pull data from. |
ZOQL Query | String | Write a query in ZOQL (an example query is shown after this table). This property is only available when Basic/Advanced Mode is set to "Advanced". |
Data Source | Select | Select a data source. |
Data Selection | Select | Select one or more columns from the chosen data source to return from the query. |
Data Source Filter | Input Column | The available input columns vary depending upon the data source. |
| Qualifier | Is: Compares the column to the value using the comparator. Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc. |
| Comparator | Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than", only work with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of the above may be available to choose from. |
| Value | The value to be compared. |
Combine Filters | Select | Use the defined filters in combination with one another according to either And or Or. |
Type | Select | Choose between using a standard table or an external table. External: The data will be put into an S3 bucket and referenced by an external table. Standard: The data will be staged on an S3 bucket before being loaded into a table. This is the default setting. |
Warehouse | Select | Choose a Snowflake warehouse that will run the load. |
Database | Select | Choose a database to create the new table in. |
Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article. |
Target Table | String | Provide a new table name. Warning: This table will be recreated and will drop any existing table of the same name. |
Stage | Select | Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component. Selecting [Custom] provides all the properties typically seen in the Manage Stages dialog for your input. If you select a managed stage that has already been configured in Manage Stages, the additional properties are not provided, as they have already been configured. Manage Stages can be found by clicking the Environments panel in the lower-left, then right-clicking an environment. To learn more, read Manage Stages. |
Stage Platform | Select | Select a staging setting. Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete. Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging. Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging. Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage. |
Stage Authentication | Select | Select an authentication method for data staging. Credentials: Uses the credentials configured in the Matillion ETL environment. If no credentials have been configured, an error will occur. Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration. |
Storage Integration | Select | Select a Snowflake storage integration from the dropdown list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Microsoft Azure, Google Cloud Storage) and must be set up in advance of selection. To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide. This property is only available when Stage Authentication is set to Storage Integration. |
S3 Staging Area | Select | Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept. |
Use Accelerated Endpoint | Boolean | When True, data will be loaded via the s3-accelerate endpoint. This requires S3 Transfer Acceleration to be enabled on the target bucket. |
Storage Account | Select | Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview. |
Blob Container | Select | Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage. |
GCS Staging Area | Select | The URL and path of the target Google Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets. |
Encryption | Select | Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging. None: No encryption. SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more. SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more. |
KMS Key ID | Select | The ID of the KMS encryption key you have chosen to use in the Encryption property. |
Load Options | Multiple Select | Clean Staged Files: Destroy staged files after loading data. Default is On. String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off. Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On. File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field. Trim String Columns: Remove leading and trailing characters from a string column. Default is On. Compression Type: Set the compression type to either gzip or None. The default is gzip. Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default. |
New Table Name | String | Specify the name of the new table to be created. This property is only available when Type is set to External. |
Stage Database | Select | Specify the stage database. The special value, [Environment Default], will use the database defined in the environment. This property is only available when Type is set to External. |
Stage Schema | Select | Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment. This property is only available when Type is set to External. |
Stage | Select | Select a stage. This property is only available when Type is set to External. |
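When Basic/Advanced Mode is set to Advanced, the ZOQL Query property accepts a query such as the sketch below. This is an illustrative example only: the object and field names (Account, AccountNumber, Name, Balance, Status) are assumptions and must be replaced with objects and fields that exist in your Zuora tenant.

```sql
-- Illustrative ZOQL query for Advanced mode; object and field names are examples only.
-- With ZOQL Query Type set to zoql, columns are returned under their field names;
-- with zoqlexport, the table name is also included with each column name in the returned data.
SELECT AccountNumber, Name, Balance, Status
FROM Account
-- Roughly equivalent to a Basic-mode Data Source Filter of Status / Is / Equal to / 'Active'
WHERE Status = 'Active'
```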
BigQuery Properties
Property | Setting | Description |
---|---|---|
Name | String | A human-readable name for the component. |
Basic/Advanced Mode | Select | Basic: this mode constructs a Zuora query using user-specified settings from the Data Source, Data Selection, and Data Source Filter properties. This is the default setting. Advanced: this mode requires users to write an SQL-like query, which is translated into one or more Zuora AQuA API calls. |
Authentication Method | Select | Select the authentication method. Users can choose between an OAuth, configured via Manage OAuth, or a username and password attached to a Zuora account. |
Endpoint | Select | Select the Zuora endpoint for the service. Users can select one of EU Central Sandbox, EU Production, EU Sandbox, US Production, or US Sandbox. |
ZOQL Query Type | Select | Select the ZOQL query type through which to create a data source export via the Zuora API. zoql: Zuora Object Query Language. zoqlexport: Zuora Export Zuora Object Query Language. When "zoqlexport" is selected, users will find that the table name is appended to the column name in the return data—this is not the case when "zoql" is selected. Certain data sources will only work when this property is set to "zoqlexport". |
Show Deleted Records | Select | Yes: the query will return both deleted and active records. No: the query will only return active records. This is the default setting. This property is only available when ZOQL Query Type is set to zoqlexport. |
Authentication | Select | Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorise an OAuth entry, please read our Zuora Bulk Query Authentication Guide. |
Username | String | Input a valid Zuora username. |
Password | String | Input the password that corresponds to the Zuora username. |
Max Wait Minutes | Integer | Specify the maximum number of minutes to wait for the query to return before the job times out. The default is 60 minutes. |
ZOQL Query | String | Write a query in ZOQL (an example query is shown after this table). This property is only available when Basic/Advanced Mode is set to "Advanced". |
Data Source | Select | Select a data source. |
Data Selection | Select | Select one or more columns from the chosen data source to return from the query. |
Data Source Filter | Input Column | The available input columns vary depending upon the data source. |
| Qualifier | Is: Compares the column to the value using the comparator. Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc. |
| Comparator | Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than", only work with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of the above may be available to choose from. |
| Value | The value to be compared. |
Combine Filters | Select | Use the defined filters in combination with one another according to either And or Or. |
Project | Select | The target BigQuery project to load data into. |
Dataset | Select | The target BigQuery dataset to load data into. |
Target Table | Text | Provide a new table name. Warning: This table will be recreated and will drop any existing table of the same name. |
Cloud Storage Staging Area | Filepath | The URL and path of the target Google Storage bucket to be used for staging the queried data. |
Load Options | Multiple Select | Clean Cloud Storage Files: Destroy staged files on Cloud Storage after loading data. Default is On. Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field. Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On. Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default. |
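As with the Snowflake configuration, the ZOQL Query property accepts an SQL-like query in Advanced mode. The hedged sketch below shows an Export ZOQL (zoqlexport) query that also references a field on a related object; the objects and fields used (RatePlanCharge, Account) are assumptions for illustration and must exist in your Zuora tenant.

```sql
-- Illustrative Export ZOQL (zoqlexport) query; object and field names are examples only.
-- Account.Name is assumed to be a related-object field reachable from the RatePlanCharge data source.
SELECT Account.Name, Name, ChargeType
FROM RatePlanCharge
WHERE ChargeType = 'Recurring'
```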
Variable Exports
This component makes the following values available to export into variables:
Source | Description |
---|---|
Time Taken To Stage | The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage. |
Time Taken To Load | The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage. |