Connect Tableau to Databricks

Some organizations choose to store their visualization source data in Databricks, and then connect to Databricks from Tableau.

You may send an Apache Parquet, Apache Avro, CSV, or JSON file from Amperity to Databricks, and then connect to that data from Tableau.

What is Databricks?

Databricks provides a unified platform for data and AI that supports large-scale processing for batch and streaming workloads, standardized machine learning lifecycles, and accelerated data science workflows for large datasets.

A Delta table is a table in a Delta Lake, which is an optimized storage layer that provides the foundation for storing data and tables in the Databricks Lakehouse Platform. Delta Lake is the default storage format for all operations on Databricks. Unless otherwise specified, all tables on Databricks are Delta tables.
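For example, once data lands in Databricks as a Delta table, it can be queried with standard SQL from a notebook or the SQL editor. The catalog, schema, and table names below are placeholders; replace them with your own.

    -- Placeholder three-part name; replace with your own catalog, schema, and table.
    DESCRIBE DETAIL main.amperity.customer_profiles;  -- the "format" column reports "delta"

    SELECT * FROM main.amperity.customer_profiles LIMIT 10;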

Add workflow

Amperity can be configured to send data to Databricks. Tableau can be configured to connect to Databricks, and then use Amperity as a source for data visualizations.

To connect Tableau to Databricks

Configuring Amperity to send data that is accessible to Tableau from Databricks requires completing a series of short workflows, some of which must be done outside of Amperity.

Step 1.

Use a query to return the data you want to make available to Tableau for use with data visualizations.
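A minimal sketch of such a query, assuming a Customer 360 table named Customer360 and hypothetical column names; adjust both to match your tenant's schema:

    SELECT
        amperity_id,
        given_name,
        surname,
        email,
        city,
        state
    FROM Customer360
    WHERE email IS NOT NULL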

Step 2.

Send an Apache Parquet, Apache Avro, CSV, or JSON file from Amperity to Databricks, where it is stored as a Delta table.

Step 3.

Validate the workflow within Amperity and the data within Databricks.
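One way to spot-check the data within Databricks is to run a quick query against the Delta table and compare the row count to the results returned by the query in Amperity. The table name below is a placeholder; replace it with the table your destination writes to:

    -- Placeholder name; replace with the Delta table sent from Amperity.
    SELECT COUNT(*) AS row_count FROM main.amperity.customer_profiles;

    SELECT * FROM main.amperity.customer_profiles LIMIT 10;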

Step 4.

Connect Tableau to Databricks, and then access the data sent from Amperity.
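After the connection is configured with your Databricks server hostname, HTTP path, and credentials, you can select the table directly or, if you only need a subset of columns, use Tableau's Custom SQL option. A minimal sketch, again with placeholder names:

    -- Placeholder names; replace with the Delta table sent from Amperity.
    SELECT amperity_id, email, city, state
    FROM main.amperity.customer_profiles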

Step 5.

Configure Amperity to automate this workflow for a regular (daily) refresh of data.