Flex Connector Advanced Settings
  • 17 Apr 2024
  • 3 Minutes to read

Warning:

We generally do not recommend changing advanced settings unless you are an experienced Panoply user. In the case of the Flex Connector, however, some changes might be required.

For users who have some experience working with their data in Panoply, there are a number of items that can be customized for this connector.

  1. Downloadable link: Select Downloadable link when the API response includes a URL from which to download a file or retrieve the data.
  2. URL key: The path to the URL in the response object. Only available when Downloadable link is selected. For example: results.link
  3. Data key: The path to the data in the response object. Only available when Downloadable link is not selected. For example: data.products
  4. Array to flatten: Enter the path to an array you want to flatten. For example: dict1.dict2.array_title
Notes
  • Flattening an array multiplies the number of records in the table. For example, a record containing an array with 5 elements will generate 5 records in the destination table.
  • You can only flatten one array in each Flex Connector.
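To make the flattening behavior concrete, here is a minimal Python sketch (illustrative only, not Panoply's actual implementation) of flattening an array at a dotted path such as dict1.dict2.array_title:

```python
import copy

def flatten_array(record, path):
    """Produce one output record per element of the array at a dotted path."""
    keys = path.split(".")
    node = record
    for key in keys[:-1]:
        node = node[key]          # walk down to the parent of the array
    rows = []
    for element in node[keys[-1]]:
        row = copy.deepcopy(record)       # copy the whole parent record
        target = row
        for key in keys[:-1]:
            target = target[key]
        target[keys[-1]] = element        # replace the array with one element
        rows.append(row)
    return rows

# A record whose array has 5 elements becomes 5 rows:
record = {"id": 1, "data": {"tags": ["a", "b", "c", "d", "e"]}}
rows = flatten_array(record, "data.tags")
print(len(rows))  # 5
```

Each output row repeats the parent record's fields, which is why the record count multiplies.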
  5. Skip XML attributes: When connecting to an API that returns XML results, some of the returned XML fields might have attributes attached to them. Select this option to skip all of the XML attributes and ingest only the XML values. For example, for the data <score type="integer" id="30">100</score>, Panoply will ingest the value 100 into the score column.
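The XML example above can be reproduced with Python's standard library; this sketch contrasts what is ingested with and without the attributes (the with_attributes shape is an assumption for illustration):

```python
import xml.etree.ElementTree as ET

xml_data = '<score type="integer" id="30">100</score>'
element = ET.fromstring(xml_data)

# With "Skip XML attributes" enabled, only the element's value is ingested:
value_only = element.text                 # goes to the score column
# Without it, the attributes would be ingested alongside the value
# (hypothetical column layout, for illustration only):
with_attributes = dict(element.attrib, value=element.text)
print(value_only)  # 100
```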
  6. Skip rows in CSV response: Select this option to skip the first N rows of a CSV response.
  7. Number of rows to skip: When Skip rows in CSV response is selected, enter the number of rows to skip.
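These two settings matter when an API prepends non-CSV banner rows to its response. A small sketch with a hypothetical response (not a real API payload):

```python
import csv
import io

# Hypothetical CSV response whose first two rows are a report banner,
# followed by the real header row and the data.
response = "Report generated 2024-04-17\nAccount: acme\nid,name\n1,widget\n2,gadget\n"

rows_to_skip = 2                  # the "Number of rows to skip" setting
lines = io.StringIO(response)
for _ in range(rows_to_skip):
    next(lines)                   # discard the first N rows
records = list(csv.DictReader(lines))
print(records)  # [{'id': '1', 'name': 'widget'}, {'id': '2', 'name': 'gadget'}]
```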
  8. Errors waiting time: Enter error codes and waiting times in seconds in the following format: error_code=waiting_time; error_code2=waiting_time2.
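A sketch of how a setting in that format can be parsed into a retry map (illustrative; Panoply's own parsing may differ):

```python
def parse_error_waits(setting):
    """Parse 'error_code=waiting_time; error_code2=waiting_time2' into a
    mapping of error code -> seconds to wait before retrying."""
    waits = {}
    for pair in setting.split(";"):
        pair = pair.strip()
        if not pair:
            continue
        code, seconds = pair.split("=")
        waits[int(code)] = int(seconds)
    return waits

# e.g. wait 60s on HTTP 429 and 120s on HTTP 503:
print(parse_error_waits("429=60; 503=120"))  # {429: 60, 503: 120}
```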
  9. Primary Key: A Primary Key is the column (or set of columns) whose values uniquely identify a row. Once it is identified, Panoply upserts new data and prevents duplicate data.
    Panoply automatically selects the Primary Key using the available ID columns. If none are available, you may configure this manually by choosing the columns to use.
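The upsert behavior can be sketched in a few lines of Python (a simplified model of upserting into an in-memory table, not Panoply's internals):

```python
def upsert(table, new_rows, primary_key):
    """Update rows whose primary-key values already exist; insert the rest."""
    index = {tuple(row[k] for k in primary_key): i for i, row in enumerate(table)}
    for row in new_rows:
        key = tuple(row[k] for k in primary_key)
        if key in index:
            table[index[key]] = row       # update the existing record
        else:
            index[key] = len(table)
            table.append(row)             # insert the new record
    return table

table = [{"id": 1, "name": "old"}]
upsert(table, [{"id": 1, "name": "new"}, {"id": 2, "name": "added"}], ["id"])
print(table)  # no duplicate row for id=1
```

Because the key for id=1 already exists, its row is updated in place rather than duplicated.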
  10. Incremental Key: Enter the desired incremental key based on the destination table. The incremental value will be used in the next Flex Connector collection. You will need to use {incval} in the URL parameters or POST data. The format of the incremental value must match the format the API expects as part of the API call. When using {incval} in the API call, you can wrap it in several different date functions (functions are identified by << >>):
    1. date_add(date, period, amount) - Add or subtract specific periods from the date value
      1. date - the date value or {incval}
      2. period - The period type. Acceptable values: 'seconds', 'minutes', 'hours', 'days', 'months' or 'years'.
      3. amount - The number of periods to add or subtract
    2. date_format(date, pattern) - Convert the date value to any format.
      1. date - the date value or {incval}
      2. pattern - The desired date pattern. See available values here
    3. to_timestamp(date) - Returns the Epoch Unix timestamp of the given date
    4. utcnow() - Returns the current UTC timestamp
Example

<<to_timestamp(date_add('{incval}', 'months', -1))>>
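Rough Python equivalents of these date functions may help when reasoning about what the example above produces (illustrative sketches; Panoply's own implementation may differ, and the ISO date format here is an assumption):

```python
from datetime import datetime, timedelta, timezone
import calendar

def date_add(date, period, amount):
    """Add or subtract periods from an ISO date string."""
    dt = datetime.fromisoformat(date)
    if period in ("seconds", "minutes", "hours", "days"):
        return (dt + timedelta(**{period: amount})).isoformat()
    if period == "months":
        total = dt.year * 12 + (dt.month - 1) + amount
        year, month = divmod(total, 12)
        month += 1
        day = min(dt.day, calendar.monthrange(year, month)[1])  # clamp day
        return dt.replace(year=year, month=month, day=day).isoformat()
    if period == "years":
        return dt.replace(year=dt.year + amount).isoformat()
    raise ValueError(period)

def date_format(date, pattern):
    """Render the date in the given strftime pattern."""
    return datetime.fromisoformat(date).strftime(pattern)

def to_timestamp(date):
    """Epoch Unix timestamp of the given date (assumed UTC)."""
    dt = datetime.fromisoformat(date).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

def utcnow():
    """Current UTC timestamp."""
    return datetime.now(timezone.utc).isoformat()

# <<to_timestamp(date_add('{incval}', 'months', -1))>> with incval 2024-04-17:
print(to_timestamp(date_add("2024-04-17T00:00:00", "months", -1)))
```

So the example sends the Unix timestamp of one month before the stored incremental value.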

  11. Destination Schema: This is the name of the target schema where the data is saved. The default schema for data warehouses built on Google BigQuery is panoply. The default schema for data warehouses built on Amazon Redshift is public. This cannot be changed once a source has been collected.
  12. Exclude: The Exclude option allows you to exclude certain data, such as names, addresses, or other personally identifiable information. Enter the column names of the data to exclude.
  13. Truncate: Truncate deletes all the current data stored in the destination tables, but not the tables themselves. Afterwards, Panoply will recollect all the available data for this connector.
  14. Lock Schema: Lock Schema blocks any changes to the tables' schemas, such as adding new columns, changing data types, or adding new tables.
  15. Load Strategy: Control the ingestion behavior of existing records. You can either set it to Upsert (update existing and insert new records) or Append (always insert the records). The default strategy is Upsert.
  16. Nested Data: Control the ingestion behavior of nested objects. You can either set it to create one-to-many tables (the default behavior) or flatten the nested data into the parent table.
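A sketch of the difference between the two Nested Data modes for a record with a nested object (a simplified illustration; the column-naming scheme is an assumption, not Panoply's documented behavior):

```python
record = {"id": 7, "name": "Ada", "address": {"city": "London", "zip": "N1"}}

# One-to-many (default): nested objects go to a child table keyed to the parent.
parent = {k: v for k, v in record.items() if not isinstance(v, dict)}
child = dict(record["address"], parent_id=record["id"])

# Flatten: nested fields are merged into the parent table as extra columns.
flattened = {**parent, **{f"address_{k}": v for k, v in record["address"].items()}}
print(flattened)
# {'id': 7, 'name': 'Ada', 'address_city': 'London', 'address_zip': 'N1'}
```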
  17. Click Save Changes, then click Collect.
  • The connector appears grayed out while the collection runs.
  • You may add additional connectors while this collection runs.
  • You can monitor this collection from the Jobs page or the Connectors page.
  • After a successful collection, navigate to the Tables page to review the data results.
