LinkedIn Query

Overview

The LinkedIn Query component uses the LinkedIn API to retrieve data and load it into a table. This action stages the data, so the table is reloaded each time. Users can then transform their data with Matillion ETL's library of transformation components.

Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option "Recreate Target Table" to Off will prevent both recreation and truncation. Do not modify the target table structure manually.





Properties

The properties available to configure the LinkedIn Query component are listed below.

Documentation is presented in the following format:

propertyName | settingType | Required/Optional

A description of the property, including user actions and default settings.


Name | String | Required

Provide a human-readable name for the component.


Basic/Advanced Mode | Select | Required

Basic: This mode will create a LinkedIn Query using the settings selected in the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.

Advanced: This mode will require users to write an SQL-like query, which is translated into one or more LinkedIn API calls. The available fields and their descriptions are documented in the LinkedIn data model.

While the query is exposed in an SQL-like language, the exact semantics can be surprising; for example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
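
For illustration, a minimal Advanced Mode query might look like the following sketch. The table and column names are assumptions for the sake of example; consult the LinkedIn data model for the fields actually available.

-- Illustrative only: table and column names depend on the LinkedIn data model
SELECT id, first_name, last_name
FROM connections
WHERE first_name = 'Jane'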


Authentication | Select | Required

Select an OAuth entry from the dropdown menu. OAuth entries must be set up in advance. To learn how to create and authorise an OAuth entry, please read our LinkedIn Query Authentication Guide.


Connection Options | Parameter=Value | Optional

Parameter: A JDBC parameter supported by the database driver. The available parameters are listed in the LinkedIn data model. Sensible defaults are usually configured automatically.

Value: The value for the corresponding parameter.
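
For example, a connection option is entered as a parameter paired with its value, as sketched below. The parameter name here is an assumption for illustration only; refer to the LinkedIn data model for the parameters the driver actually supports.

Parameter: Timeout
Value: 60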


Data Source | Select | Required

Select a LinkedIn data source to load.


Data Selection | Column Select | Required

Select which columns from the data source table to load into the query using the arrow buttons.


Data Source Filter | SQL Filter | Optional

Input Column: The available input columns vary depending upon the Data Source.

Qualifier:

  • Is: Compares the column to the value using the comparator.
  • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.

Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null".

"Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than", will only work with numerics. The "Like" comparator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to.

Note that not all data sources support all comparators, so it is likely that only a subset of those listed above will be available to choose from.

Value: The value to be compared.

For example:

Input Column | Qualifier | Comparator | Value
date | Is | Greater than or equal to | 2005
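
In Advanced Mode, an equivalent filter could be written directly into the SQL-like query, along the following lines. The table name is illustrative only.

-- Equivalent of the filter above; 'ads' is an illustrative table name
SELECT * FROM ads WHERE date >= '2005'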


SQL Query | SQL Statement | Required

Write an SQL-like query in conjunction with the LinkedIn data model. This property is only available in Advanced Mode.

When referencing a specific account, use the form 'act_<ACCOUNTNUMBER>', where <ACCOUNTNUMBER> is a 17-digit account number. For example:

SELECT * FROM ads WHERE target IN ('act_12345678901234567')


Combine Filters | Select | Optional

Select whether to use the defined filters in combination with one another according to either And or Or operators.
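
As a sketch, filters combined with And must all hold for a row to be returned, whereas Or requires only one of them to hold. In SQL-like terms (table and column names illustrative):

-- And: rows must satisfy both filters
SELECT * FROM ads WHERE date >= '2005' AND type = 'sponsored'
-- Or: rows must satisfy at least one filter
SELECT * FROM ads WHERE date >= '2005' OR type = 'sponsored'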


Limit | Integer | Optional

Set a numeric value to limit the number of rows that can be loaded.
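
In Advanced Mode, the same effect can be achieved with a LIMIT clause, for example:

-- Load at most 100 rows; the table name is illustrative
SELECT * FROM ads LIMIT 100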


Type | Select | Required

Select the table type.

  • Standard: The data will be staged on an S3 bucket, before being loaded into a table.
  • External: The data will be put into an S3 bucket and referenced by an external table.


Schema | Select | Required

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article.

An external schema is required if the Type property is set to External.


Target Table | String | Required

Provide a human-readable name for the new table.

Upon each run of this job, the table will be recreated, dropping any existing table of the same name.


Location | S3 Bucket | Required

Specify an S3 bucket to store the data in. Once the data is on an S3 bucket, it can be referenced by an external table.

This property is only available when the Type property is set to External.


S3 Staging Area | Select | Required

Select an S3 bucket from the dropdown list. Ensure that your access credentials have S3 access, as well as permission to write to the bucket. See this document for details on setting up access.

The temporary objects created in this bucket will be removed after the load completes; they are not kept.


Distribution Style | Select | Optional

Select the distribution style.

  • All: Copy rows to all nodes in the Redshift cluster.
  • Auto: Allow Redshift to manage your distribution style.
  • Even: Distribute rows around the Redshift cluster evenly.
  • Key: Distribute rows around the Redshift cluster according to the value of a key column.

Default: Auto

Table distribution is critical to good performance. See the Amazon Redshift documentation for more information.
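
For reference, these options correspond to Redshift's DISTSTYLE table setting. The component generates the actual DDL for you; a minimal sketch, with illustrative table and column names, might look like:

-- Distribute rows according to the value of the id column
CREATE TABLE target_table (id INT, created DATE)
DISTSTYLE KEY
DISTKEY (id);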


Sort Key | Column Select | Optional

Specify one or more columns from the table to be set as the table's sort key.

Sort keys are critical to good performance. See the Amazon Redshift documentation for more information.


Sort Key Options | Select | Optional

Select whether the sort key is of a compound or interleaved variety. See the Amazon Redshift documentation for more information.

This property is only available if a sort key is specified.
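
For reference, the two varieties correspond to Redshift's COMPOUND and INTERLEAVED sort keys. The component generates the DDL for you; an illustrative sketch with example names:

-- Compound: most effective when queries filter on the leading sort column(s)
CREATE TABLE example_a (id INT, created DATE) COMPOUND SORTKEY (created, id);
-- Interleaved: gives each sort column equal weight
CREATE TABLE example_b (id INT, created DATE) INTERLEAVED SORTKEY (created, id);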


Primary Key | Column Select | Optional

Select one or more columns to be designated as the table's primary key.


Load Options | Multiple Select | Optional

  • Comp Update: Apply automatic compression to the target table. Default is On.
  • Stat Update: Automatically update statistics when filling a table—in this case, the target table. Default is On.
  • Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket. Effectively, users decide here whether to keep the staged data in the S3 bucket or not. Default is On.
  • String Null is Null: Converts any strings equal to "null" into a null value. The conversion is case-sensitive and applies only to strings that are entirely lower case. Default is On.
  • Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
  • File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
  • Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.


Encryption | Select | Optional

Select the encryption method for files inside the S3 bucket.

  • None: No encryption.
  • SSE KMS: Encrypt the data according to a key stored on KMS.
  • SSE S3: Encrypt the data according to a key stored on an S3 bucket.


KMS Key ID | Select | Optional

The ID of the KMS encryption key you have chosen to use in the Encryption property.

This property is only available when the Encryption property is set to SSE KMS.


Auto Debug | ON/OFF | Optional

Select whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.


Debug Level | Select | Optional

Select the level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

  • 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
  • 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  • 3: Will additionally log the body of the request and the response.
  • 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
  • 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

This property is only available when Auto Debug is set to On.