Send query results to Tableau

Tableau is a visual analytics platform that enables people and organizations to make the most of their data. Tableau connects to a data source, and then queries that data directly.

Important

Data is not sent from Amperity directly to Tableau. Tableau must connect to a location that supports queries against that data; Tableau cannot connect directly to a static file. Amperity must send data to Tableau indirectly, by configuring a destination to do one of the following:

  1. Send a CSV file to an Amazon S3 bucket, after which it is picked up by Amazon Redshift.

  2. Send a CSV file to an Azure container, after which it is picked up by Azure Synapse Analytics.

  3. Send Apache Parquet, Apache Avro, CSV, or JSON files to Databricks, and then connect to that data from Tableau.

  4. Send a CSV file to Google Cloud Storage, after which data is transferred to Google BigQuery.

  5. Send Apache Parquet, Apache Avro, CSV, or JSON files to Snowflake, and then connect to that data from Tableau.

Tableau can be configured to connect directly to Amazon Redshift, Azure Synapse Analytics, Databricks, Google BigQuery, or Snowflake. The destination workflow in Amperity can be configured to send data on a regular basis, which ensures that the data available to the Tableau user is up to date.

Connect to Amazon Redshift

Amazon Redshift is a data warehouse within Amazon Web Services that can handle massive sets of column-oriented data.

Amperity can be configured to send data to Amazon S3, after which Amazon Redshift is configured to load that data from Amazon S3. Tableau can be configured to connect to Amazon Redshift, and use Amperity as a source for data visualizations.

You may use the Amazon S3 bucket that comes with your Amperity tenant for the intermediate step (if your Amperity tenant is running on Amazon AWS). Or you may configure Amperity to send data to an Amazon S3 bucket that your organization manages directly.

To connect Tableau to Amazon Redshift

Configuring Amperity to send data that is accessible to Tableau from Amazon Redshift requires completing a series of short workflows, some of which must be done outside of Amperity.

Step 1.

Use a query to return the data you want to make available to Tableau for use with data visualizations.
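
For example, a query such as the following might return a set of customer attributes for visualization. This is a minimal sketch; the table and column names are placeholders, not objects in your tenant:

  SELECT
      amperity_id,
      given_name,
      email,
      lifetime_order_revenue
  FROM Customer_360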

Step 2.

Send an Apache Parquet, Apache Avro, CSV, or JSON file to Amazon S3 from Amperity.

Step 3.

Load data from Amazon S3 to Amazon Redshift.
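
For example, after Amperity sends a CSV file to Amazon S3, that file can be loaded into an existing table with the Redshift COPY command. This is a minimal sketch; the table name, bucket path, and IAM role ARN are placeholders:

  -- Load a CSV file from Amazon S3 into an existing Redshift table
  COPY customer360.tableau_audience
  FROM 's3://your-bucket/path/to/file.csv'
  IAM_ROLE 'arn:aws:iam::123456789012:role/your-redshift-role'
  FORMAT AS CSV
  IGNOREHEADER 1;  -- skip the header row in the CSV file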

Step 4.

Connect Tableau to Amazon Redshift, and then access the data sent from Amperity.

Step 5.

Validate the workflow within Amperity and the data within Tableau.

Step 6.

Configure Amperity to automate this workflow for a regular (daily) refresh of data.

Connect to Azure Synapse Analytics

Azure Synapse Analytics is a limitless analytics service and data warehouse. Azure Synapse Analytics has four components: SQL analytics, Apache Spark, hybrid data integration, and a unified user experience.

Amperity can be configured to send data to an Azure Blob Storage container, after which Azure Synapse Analytics can be configured to load that data. Tableau can be configured to connect to Azure Synapse Analytics and use the Amperity output as a data source.

You may use the Azure Blob Storage container that comes with your Amperity tenant for the intermediate step (if your Amperity tenant is running on Azure). Or you may configure Amperity to send data to an Azure Blob Storage container that your organization manages directly.

To connect Tableau to Azure Synapse Analytics

Configuring Amperity to send data that is accessible to Tableau from Azure Synapse Analytics requires completing a series of short workflows, some of which must be done outside of Amperity.

Step 1.

Use a query to return the data you want to make available to Tableau for use with data visualizations.

Step 2.

Send an Apache Parquet, Apache Avro, CSV, or JSON file to Azure Blob Storage from Amperity.

Step 3.

Load data from Azure Blob Storage to Azure Synapse Analytics.
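
For example, a dedicated SQL pool in Azure Synapse Analytics can load a CSV file directly from Azure Blob Storage with the COPY INTO statement. This is a minimal sketch; the table name, storage account, container, and SAS token are placeholders:

  -- Load a CSV file from Azure Blob Storage into an existing table
  COPY INTO dbo.tableau_audience
  FROM 'https://youraccount.blob.core.windows.net/your-container/file.csv'
  WITH (
      FILE_TYPE = 'CSV',
      FIRSTROW = 2,  -- skip the header row in the CSV file
      CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
  );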

Step 4.

Connect Tableau to Azure Synapse Analytics, and then access the data sent from Amperity.

Step 5.

Validate the workflow within Amperity and the data within Tableau.

Step 6.

Configure Amperity to automate this workflow for a regular (daily) refresh of data.

Connect to Databricks

Databricks provides a unified platform for data and AI that supports large-scale processing for batch and streaming workloads, standardized machine learning lifecycles, and accelerated data science workflows for large datasets.

What is a Delta table?

A Delta table is a table in a Delta Lake, which is an optimized storage layer that provides the foundation for storing data and tables in the Databricks Lakehouse Platform. Delta Lake is the default storage format for all operations on Databricks. Unless otherwise specified, all tables on Databricks are Delta tables.
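
For example, in a Databricks SQL editor a table created without an explicit format is stored as a Delta table, and its Delta metadata can be inspected with DESCRIBE DETAIL. The catalog, schema, and table names below are placeholders:

  -- Tables on Databricks default to the Delta format
  CREATE TABLE main.amperity.tableau_audience (
      amperity_id STRING,
      email STRING
  );

  -- The "format" column in the result returns "delta"
  DESCRIBE DETAIL main.amperity.tableau_audience;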

Some organizations choose to store their visualization source data in Databricks, and then connect to Databricks from Tableau.

You may send an Apache Parquet, Apache Avro, CSV, or JSON file from Amperity to Databricks, and then connect to that data from Tableau.

To connect Tableau to Databricks

Configuring Amperity to send data that is accessible to Tableau from Databricks requires completing a series of short workflows, some of which must be done outside of Amperity.

Step 1.

Use a query to return the data you want to make available to Tableau for use with data visualizations.

Step 2.

Send an Apache Parquet, Apache Avro, CSV, or JSON file from Amperity to Databricks as a Delta table.

Step 3.

Validate the workflow within Amperity and the data within Databricks.
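
For example, a quick validation from a Databricks notebook or SQL editor might check the row count and review recent writes to the Delta table. The table name is a placeholder:

  -- Confirm that rows arrived from Amperity
  SELECT COUNT(*) FROM main.amperity.tableau_audience;

  -- Review recent operations (appends, overwrites) on the Delta table
  DESCRIBE HISTORY main.amperity.tableau_audience;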

Step 4.

Connect Tableau to Databricks, and then access the data sent from Amperity.

Step 5.

Configure Amperity to automate this workflow for a regular (daily) refresh of data.

Connect to Google BigQuery

Google BigQuery is a fully managed, serverless data warehouse that performs fast, scalable, cost-effective analysis over petabytes of data using ANSI SQL.

Amperity can be configured to send data to Google Cloud Storage, after which the data can be transferred to Google BigQuery. Tableau can be configured to connect to Google BigQuery and use the Amperity output as a data source.

You must configure Amperity to send data to a Google Cloud Storage bucket that your organization manages directly.

To connect Tableau to Google BigQuery

Configuring Amperity to send data that is accessible to Tableau from Google BigQuery requires completing a series of short workflows, some of which must be done outside of Amperity.

Step 1.

Use a query to return the data you want to make available to Tableau for use with data visualizations.

Step 2.

Send an Apache Parquet, Apache Avro, CSV, or JSON file to Google Cloud Storage from Amperity.

Step 3.

Transfer data from Cloud Storage to Google BigQuery.
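
For example, a CSV file in Cloud Storage can be loaded with a BigQuery LOAD DATA statement. This is a minimal sketch; the dataset, table, and bucket path are placeholders. The BigQuery Data Transfer Service can also be used to schedule recurring transfers from Cloud Storage:

  -- Load a CSV file from Google Cloud Storage into a BigQuery table
  LOAD DATA INTO your_dataset.tableau_audience
  FROM FILES (
      format = 'CSV',
      skip_leading_rows = 1,  -- skip the header row in the CSV file
      uris = ['gs://your-bucket/path/to/file.csv']
  );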

Step 4.

Connect Tableau to Google BigQuery, and then access the data sent from Amperity.

Step 5.

Validate the workflow within Amperity and the data within Tableau.

Step 6.

Configure Amperity to automate this workflow for a regular (daily) refresh of data.

Connect to Snowflake

Amperity can be configured to send data directly to Snowflake. Tableau can be configured to connect to Snowflake, and use Amperity as a source for data visualizations.

To connect Tableau to Snowflake

Configuring Amperity to send data that is accessible to Tableau from a Snowflake data warehouse requires completing a series of short workflows, some of which must be done outside of Amperity.

  1. Configure Snowflake objects for the correct database, tables, roles, and users. (A sketch of this configuration appears after this list.)

  2. Send data to Snowflake from Amperity.

  3. Connect Tableau to Snowflake, and then access the data sent from Amperity.

    Note

    The URL for the Snowflake data warehouse, the Snowflake username, the password, and the name of the Snowflake data warehouse are sent to the Tableau user within a SnapPass link. Request this information from your Amperity representative prior to attempting to connect Tableau to Snowflake.

  4. Validate the workflow within Amperity and the data within Tableau.

  5. Configure Amperity to automate this workflow for a regular (daily) refresh of data.
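
The object configuration in step 1 might resemble the following sketch. The database, schema, role, user, and password are placeholders; the actual objects for your tenant are typically configured with your Amperity representative:

  -- Database and schema that hold the tables sent from Amperity
  CREATE DATABASE IF NOT EXISTS amperity_db;
  CREATE SCHEMA IF NOT EXISTS amperity_db.public;

  -- A read-only role for Tableau users
  CREATE ROLE IF NOT EXISTS tableau_reader;
  GRANT USAGE ON DATABASE amperity_db TO ROLE tableau_reader;
  GRANT USAGE ON SCHEMA amperity_db.public TO ROLE tableau_reader;
  GRANT SELECT ON ALL TABLES IN SCHEMA amperity_db.public TO ROLE tableau_reader;

  -- A user for the Tableau connection
  CREATE USER IF NOT EXISTS tableau_user
      PASSWORD = '<password>'
      DEFAULT_ROLE = tableau_reader;
  GRANT ROLE tableau_reader TO USER tableau_user;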

Note

Snowflake can be configured to run on Amazon AWS or Azure. When using the Snowflake data warehouse that is included with your Amperity tenant, Snowflake runs on the same cloud platform as your tenant. When using your own instance of Snowflake, you should use the same Amazon S3 bucket or Azure Blob Storage container that is included with your tenant when configuring Snowflake for data sharing, but then connect Tableau directly to your own instance of Snowflake.