PostgreSQL

This document describes the PostgreSQL data source. Continue reading to learn more about:

  • Collecting - what you should know about adding the data source.
  • Data Dictionary - what data is available and how it is structured.

Collecting

Before you start

  • Note the name, the host, and the port of the Postgres database.
  • Note the username and password for the user connecting to the Postgres database.

Note: A known issue exists when queries run against a replica server conflict with updates arriving from the primary, causing the query to be cancelled before it completes. You can address this by setting some parameters in the postgresql.conf file on the replica instance. For example, set max_standby_archive_delay and max_standby_streaming_delay to reasonable values. You can also set hot_standby_feedback to on; however, this can cause table bloat on the primary. For more information, contact your administrator or see Hot Standby in the Postgres documentation.
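For example, a replica’s postgresql.conf might include settings like the following. The values are illustrative, not recommendations:

```ini
# Let replica queries hold off WAL replay for up to 5 minutes before
# being cancelled (tune these values for your workload).
max_standby_archive_delay = 300s
max_standby_streaming_delay = 300s

# Optionally report the replica's oldest query to the primary so that
# conflicting cleanup is deferred. Caution: can bloat the primary.
hot_standby_feedback = on
```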

To configure this data source:

  1. If necessary, whitelist Panoply.
    • Postgres databases with production data are typically not publicly available. To allow Panoply to access your data, see Whitelisting.
  2. Click Data Sources in the navigation menu.
  3. Click the Add Data Source button.
  4. Search for Postgres and select it.
  5. Enter the credentials to connect to Postgres. If you’re not sure what your connection details are, contact your administrator or open the postgresql.conf file, which is normally kept in the data directory. For more on this file and the relevant connection settings, see Connections and Authentication in the Postgres documentation.
    • Host Address - The URL of the Postgres database or the IP address of the host server.
      • URL example: your.server.com
      • IP example: 123.45.67.89
    • Port - The port number of the Postgres server. This is 5432 for most connections.
  6. Enter your Postgres username and password. This user must have permission to access the data. If the permissions are not in place, some of the data will not be available.
    • In Postgres, you can create a Panoply-specific user with read-only permissions on the data to be collected, and then enter the username and password for this user. This user must be reserved for Panoply use and must be unique to your connector. All information entered into Panoply is encrypted to ensure the security of your data. See Data Protection for more information on how Panoply actively provides data security.
  7. Enter the PostgreSQL database to connect to. This loads a list of tables and views.
  8. Select the PostgreSQL tables and views from which to collect data.
  9. (Optional) Set the Advanced Settings. We do not recommend changing advanced settings unless you are an experienced Panoply user.
    • Destination - Panoply selects a default destination. The default destination is postgres_<table or view name>, where <table or view name> is a dynamic field. For example, for a table or view named customers, the default destination table is postgres_customers.
    • For more detailed descriptions of Advanced Settings for the PostgreSQL Data Source, see the Data Dictionary below.
  10. Click Save Changes and then click Collect.
    • The data source appears grayed out while the collection runs.
    • You may add additional data sources while this collection runs.
    • You can monitor this collection from the Jobs page or the Data Sources page.
    • After a successful collection, navigate to the Tables page to review the data results.
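As a sketch of the dedicated read-only user described in step 6, a Postgres administrator might run statements like the following. The role name, password, database, and schema are placeholders; adapt them to your environment:

```sql
-- Hypothetical role name and password; adjust for your environment.
CREATE ROLE panoply_reader WITH LOGIN PASSWORD 'choose-a-strong-password';

-- Allow the role to connect and read the existing tables in one schema.
GRANT CONNECT ON DATABASE mydb TO panoply_reader;
GRANT USAGE ON SCHEMA public TO panoply_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO panoply_reader;

-- Also cover tables created in this schema in the future.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO panoply_reader;
```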

Data Dictionary

Because Postgres data comes from your own database, Panoply cannot provide a fixed data dictionary. Panoply does, however, automate the data schema for the collected data. This section describes those automations. You can adjust these settings in your data source under Advanced Options.

  • Destination - Panoply selects the default destination for the tables where data is stored.
    • The default destination is postgres_<table or view name>, where <table or view name> is a dynamic field. For example, for a table or view named customers, the default destination table is postgres_customers.
    • To prefix all table names with your own prefix, use this syntax: prefix_<table or view name> where prefix is your desired prefix name and <table or view name> is a variable representing the tables and views to be collected.
  • Primary Key - The Primary Key is the field or combination of fields that Panoply uses as the deduplication key when collecting data. Panoply sets the primary key depending on the scenario, as follows. To learn more about primary keys in general, see Primary Keys.
    • The source table has an id column and you do not enter a primary key: Panoply automatically selects the id column and uses it as the primary key.
    • The source table has an id column and you enter a primary key: Not recommended. Panoply uses the id column but overwrites the original source values. If you want Panoply to use your database table’s id column, do not enter a value into the Primary Key field.
    • The source table has no id column and you do not enter a primary key: Panoply creates an id column formatted as a GUID, such as 2cd570d1-a11d-4593-9d29-9e2488f0ccc2.
    • The source table has no id column and you enter a primary key: Panoply creates a hashed id column from the primary key values entered, while retaining the source columns. WARNING: Any user-entered primary key is used across all of the selected Postgres tables.
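Panoply’s actual id generation is internal, but the rules above can be sketched roughly as follows. The choice of sha256 and the "|" separator are assumptions for illustration only:

```python
import hashlib
import uuid

def assign_id(record, primary_key_columns=None):
    """Sketch of the primary-key rules above; not Panoply's actual code.

    Returns the value that would populate the id column.
    """
    if primary_key_columns:
        # Hash the user-entered key columns into a synthetic id, retaining
        # the source columns. If the source also has an id column, this
        # overwrites it -- which is why that combination is not recommended.
        key = "|".join(str(record[c]) for c in primary_key_columns)
        return hashlib.sha256(key.encode("utf-8")).hexdigest()
    if record.get("id") is not None:
        return record["id"]  # keep the source id column as the primary key
    # No id column and no user key: generate a GUID,
    # such as 2cd570d1-a11d-4593-9d29-9e2488f0ccc2.
    return str(uuid.uuid4())
```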
  • Incremental Key - By default, Panoply fetches all of your PostgreSQL data on each run. If you only want to collect some of your data, enter a column name to use as your incremental key. The column must be logically incremental. Panoply will keep track of the maximum value reached during the previous run and will start there on the next run.
    • Incremental Key configurations
      • If no Incremental Key is configured by the user, by default, Panoply collects all the PostgreSQL data on each run for the PostgreSQL tables or views selected.
      • If the Incremental Key is configured by column name, but not the column value, Panoply collects all data, and then automatically configures the column value at the end of a successful run.
      • If the Incremental Key is configured by column name and the column value (manually or automatically), then on the first collection, Panoply will use that value as the place to begin the collection.
        • The value is updated at the end of a successful collection to the last value collected.
        • In future collections, the new value is used as the starting value: Panoply collects only data where the incremental key value is greater than the point where the previous collection ended.
    • When an Incremental Key is configured, Panoply will look for that key in each of the selected tables and views. If the table or view does not have the column indicated as the Incremental Key, it must be collected as a separate instance of the data source.
    • Some records in a table or view may have a null value for the incremental key, or may not capture the incremental key at all. In these situations, Panoply omits those records instead of failing the entire data source.
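The incremental-key behavior described above can be sketched as a simplified in-memory model (not Panoply’s implementation):

```python
def collect_incremental(rows, incremental_key, last_value=None):
    """Simplified model of one incremental collection run.

    Returns the rows to load and the new bookmark value. Rows whose
    incremental key is missing or null are omitted rather than failing
    the run, mirroring the behavior described above.
    """
    eligible = [r for r in rows if r.get(incremental_key) is not None]
    if last_value is not None:
        # Only collect data past the point where the previous run ended.
        eligible = [r for r in eligible if r[incremental_key] > last_value]
    # The bookmark advances to the maximum value seen in this run.
    new_last = max((r[incremental_key] for r in eligible), default=last_value)
    return eligible, new_last
```

On the first run no bookmark exists, so everything is collected; subsequent runs pass the stored bookmark back in and collect only newer rows.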

WARNING: If you set an incremental key, you can collect only one table per instance of the PostgreSQL data source.

A column in a table uses the same data type for all values in that column. Panoply automatically chooses each column’s data type based on the values present. This is important to note for this data source: if even one value in a column contains text, the entire column is treated as data type Text.
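As a rough sketch of that rule (real type inference covers more types and formats than this):

```python
def infer_column_type(values):
    """Sketch of the rule above: if even one value contains text,
    the whole column is typed as Text. Illustration only."""
    non_null = [v for v in values if v is not None]
    if non_null and all(
        isinstance(v, (int, float)) and not isinstance(v, bool)
        for v in non_null
    ):
        return "Numeric"
    return "Text"
```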

  • The following metadata columns are added to the destination table(s):

    • __databasename - The name of the Postgres database where the data originated.
    • __tablename - The name of the source table in Postgres.
    • id - If you do not select a primary key, and no id column exists in the source table, Panoply will insert an id. Formatted as a GUID, such as 2cd570d1-a11d-4593-9d29-9e2488f0ccc2.
    • __senttime - Formatted as a datetime, such as 2020-04-26T01:26:14.695Z.
    • __updatetime - Formatted as a datetime, such as 2020-04-26T01:26:14.695Z.
    • __state - Reserved for internal Panoply use.
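Illustratively, a collected row lands in the destination table alongside the metadata columns listed above. All values here are example data, not real output:

```python
# A hypothetical source row from a Postgres table named "customers".
source_row = {"name": "Ada", "email": "ada@example.com"}

# The same row as it might appear in the destination table,
# with the metadata columns described above (example values only).
destination_row = {
    **source_row,
    "__databasename": "mydb",        # source database name
    "__tablename": "customers",      # source table name
    "id": "2cd570d1-a11d-4593-9d29-9e2488f0ccc2",  # inserted if no key exists
    "__senttime": "2020-04-26T01:26:14.695Z",
    "__updatetime": "2020-04-26T01:26:14.695Z",
    "__state": None,                 # reserved for internal Panoply use
}
```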

Data Type Mapping
