Pull from Databricks

Databricks provides a unified platform for data and AI that supports large-scale processing for batch and streaming workloads, standardized machine learning lifecycles, and accelerated data science workflows for large datasets.

Use this connector to pull database tables from Databricks to Amperity.

This topic describes the steps that are required to pull data tables to Amperity from Databricks:

  1. Cluster requirements

  2. Get details

  3. Add courier

  4. Get sample files

  5. Add feeds

  6. Add load operations

  7. Run courier

  8. Add to courier group

Cluster requirements

The following cluster configuration is recommended for the Databricks data source:

  1. Databricks Runtime Version: 11.3 LTS (Spark 3.3.0)

  2. The ability to run Python and SQL commands

Important

The cluster must be able to run Python commands such as spark.conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider").
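
For reference, the following is a minimal sketch of the kind of Python and SQL commands the cluster must be able to run. It assumes execution in a Databricks notebook cell, where the spark session is predefined.

# Sketch only: run in a Databricks notebook, where `spark` is predefined.
# Switch the S3A filesystem to temporary (session-based) AWS credentials.
spark.conf.set(
    "fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider"
)

# Confirm that SQL commands also run on this cluster.
spark.sql("SELECT 1 AS ok").show()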

Get details

The Databricks data source requires the following configuration details:

  1. The Server Hostname for your Databricks data warehouse. For example: “acme.cloud.databricks.com”.

  2. The HTTP Path to your Databricks data warehouse. For example: “sql/protocolv1/o/1abc2d3456e7f890a/abcd-1234-wxyz-6789”.

  3. A Personal Access Token to allow access to Databricks. A personal access token is a generated string. For example: “dapi1234567890b2cd34ef5a67bc8de90fa12b”.

    Important

    Databricks recommends using a personal access token that belongs to a service principal instead of a workspace user.

  4. Optional. Storage Credentials for your cloud storage container/bucket.

    Tip

    Using the databricks credential type allows you to use an Amperity-owned storage bucket.

    Caution

    The databricks credential type is recommended for ingests that take less than 1 hour. If you anticipate that your ingest will take longer than 1 hour, contact Amperity support.

You can find your Databricks connection details in the Databricks workspace.
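
To confirm the server hostname, HTTP path, and personal access token before configuring the courier, you can run a quick check outside of Amperity. The following is a minimal sketch that assumes the databricks-sql-connector Python package; the values shown are the example placeholders from this topic.

# Sketch only: verify Databricks connection details with databricks-sql-connector.
# Install the package first: pip install databricks-sql-connector
from databricks import sql

with sql.connect(
    server_hostname="acme.cloud.databricks.com",                         # Server Hostname
    http_path="sql/protocolv1/o/1abc2d3456e7f890a/abcd-1234-wxyz-6789",  # HTTP Path
    access_token="dapi1234567890b2cd34ef5a67bc8de90fa12b",               # Personal Access Token
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())  # a returned row confirms the connection works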

Tip

Use SnapPass to securely share configuration details for Databricks between your company and your Amperity representative.

Add courier

A courier brings data from an external system to Amperity.

Tip

You can run a courier with an empty load operation by using {} as the value for the load operation. Use this approach to get sample files for feed creation, because a feed requires the schema of a file before you can apply semantic tagging and other feed configuration settings.

To add a courier

  1. From the Sources page, click Add Courier. The Add Source page opens.

  2. Find, and then click the icon for Databricks. The Add Courier page opens.

    This automatically selects databricks as the Credential Type.

  3. Enter the name of the courier. For example: “Databricks”.

  4. From the Credential drop-down, select Create a new credential. This opens the Create New Credential page.

  5. Enter a name for the credential, the server hostname, HTTP path, and personal access token. Click Save.

  6. Under Databricks Settings, optionally add a folder prefix for output files in your storage bucket, and then add a square-bracketed list of fully qualified table names. For example:

    [
      "catalog_1.schema_1.table_1",
      "catalog_2.schema_2.table_2",
      "catalog_3.schema_3.table_3"
    ]
    
  7. To pull a table using a query, click “Add Databricks query”. In the expanded box, provide a unique query name. A query name may contain alphanumeric characters (A-Z, a-z, 0-9), underscores, hyphens, and periods. For example: “Query_name.12-345a”.

    Use Databricks SQL to build a query to run against a table that is to be pulled to Amperity.

    Important

    The name of the query must be added to the file parameter within the load operations. For example:

    "FEED_ID": [
      {
        "type": "load",
        "file": "Query_name.12-345a"
      }
    ]
    
  8. Configure the load operations to have the correct feed ID, operation, and file name. (For a table pull, the file name is the fully qualified name of the table in Databricks; for a query pull, it is the query name. A combined example is shown after these steps.)

  9. Click Save.
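
For reference, a hypothetical set of load operations that pulls both a table and a named query might look similar to the following example. The feed IDs are placeholders (actual feed IDs are assigned when feeds are added), and the table name and query name reuse the examples shown above.

{
  "TABLE-FEED-ID": [
    {
      "type": "load",
      "file": "catalog_1.schema_1.table_1"
    }
  ],
  "QUERY-FEED-ID": [
    {
      "type": "load",
      "file": "Query_name.12-345a"
    }
  ]
}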

Get sample files

Every Databricks file that is pulled to Amperity must be configured as a feed. Before you can configure each feed you need to know the schema of that file. Run the courier without load operations to bring sample files from Databricks to Amperity, and then use each of those files to configure a feed.

To get sample files

  1. From the Sources tab, open the menu for a courier configured for Databricks with empty load operations, and then select Run. The Run Courier dialog box opens.

  2. Select Load data from a specific day, and then select today’s date.

  3. Click Run.

    Important

    The courier run will fail, but this process will successfully return a list of files from Databricks.

    These files will be available for selection as an existing source from the Add Feed dialog box.

  4. Wait for the notification for this courier run to return an error similar to:

    Error running load-operations task
    Cannot find required feeds: "df-xxxxxx"
    

Add feeds

A feed defines how data should be loaded into a domain table, including which columns are required and which columns should be associated with a semantic tag that indicates the column contains customer profile (PII) or transaction data.

Note

A feed must be added for each table that is pulled from Databricks.

To add a feed

  1. From the Sources tab, click Add Feed. This opens the Add Feed dialog box.

  2. Under Data Source, select Create new source, and then enter “Databricks”.

  3. Enter the name of the feed in Feed Name. For example: “DatabricksTable”.

    Tip

    The name of the domain table will be “<data-source-name>:<feed-name>”. For example: “Databricks:DatabricksTable”.

  4. Under Sample File, select Select existing file, and then choose from the list of files. For example: “filename_YYYY-MM-DD.csv”.

    Tip

    The list of files that is available from this drop-down menu is sorted from newest to oldest.

  5. Select Load sample file on feed activation.

  6. Click Continue. This opens the Feed Editor page.

  7. Select the primary key.

  8. Apply semantic tags to customer records and interaction records, as appropriate.

  9. Under Last updated field, specify which field best describes when records in the table were last updated.

    Tip

    Choose Generate an “updated” field to have Amperity generate this field. This is the recommended option unless there is a field already in the table that reliably provides this data.

  10. For feeds with customer records (PII data), select Make available to Stitch.

  11. Click Activate. Wait for the feed to finish loading data to the domain table, and then review the sample data for that domain table from the Data Explorer.

Add load operations

After the feeds are activated and domain tables are available, add the load operations to the courier used for Databricks.

Example load operations

Load operations must specify each file that will be pulled to Amperity from Databricks.

For example:

{
  "DATABRICKS-TABLE-FEED-ID": [
    {
      "type": "truncate"
    },
    {
      "type": "load",
      "file": "catalog_name.schema_name.table_name"
    }
  ]
}

To add load operations

  1. From the Sources tab, open the menu for the courier that was configured for Databricks, and then select Edit. The Edit Courier dialog box opens.

  2. Edit the load operations for each of the feeds that were configured for Databricks so they have the correct feed ID.

  3. Click Save.

Run courier manually

Run the courier again. This time, because the load operations are present and the feeds are configured, the courier will pull data from Databricks.

To run the courier manually

  1. From the Sources tab, open the menu for the courier with updated load operations that is configured for Databricks, and then select Run. The Run Courier dialog box opens.

  2. Select the load option, either for a specific time period or all available data. Actual data will be loaded to a domain table because the feed is configured.

  3. Click Run.

    This time the notification will return a message similar to:

    Completed in 5 minutes 12 seconds
    

Add to courier group

A courier group is a list of one (or more) couriers that are run as a group, either ad hoc or as part of an automated schedule. A courier group can be configured to act as a constraint on downstream workflows.

To add the courier to a courier group

  1. From the Sources tab, click Add Courier Group. This opens the Create Courier Group dialog box.

  2. Enter the name of the courier group. For example: “Databricks”.

  3. Add a cron string to the Schedule field to define a schedule for the courier group.

    A schedule defines the frequency at which a courier group runs. All couriers in the same courier group run as a unit and all tasks must complete before a downstream process can be started. The schedule is defined using cron.

    Cron syntax specifies the fixed time, date, or interval at which cron will run. Each line represents a job, and is defined like this:

    ┌───────── minute (0 - 59)
    │ ┌─────────── hour (0 - 23)
    │ │ ┌───────────── day of the month (1 - 31)
    │ │ │ ┌────────────── month (1 - 12)
    │ │ │ │ ┌─────────────── day of the week (0 - 6) (Sunday to Saturday)
    │ │ │ │ │
    │ │ │ │ │
    │ │ │ │ │
    * * * * * command to execute
    

    For example, 30 8 * * * represents “run at 8:30 AM every day” and 30 8 * * 0 represents “run at 8:30 AM every Sunday”. Amperity validates your cron syntax and shows you the results. You may also use crontab guru to validate cron syntax.

  4. Set Status to Enabled.

  5. Specify a time zone.

    A courier group schedule is associated with a time zone. The time zone determines the point at which a courier group’s scheduled start time begins. The time zone should be aligned with the time zone of the system from which the data is being pulled.

    Note

    The time zone that is chosen for a courier group schedule should consider every downstream business process that requires the data, as well as the time zone(s) in which the consumers of that data will operate.

  6. Add at least one courier to the courier group. Select the name of the courier from the Courier drop-down. Click + Add Courier to add more couriers.

  7. Click Add a courier group constraint, and then select a courier group from the drop-down list.

    A wait time is a constraint placed on a courier group that defines an extended time window for data to be made available at the source location.

    A courier group typically runs on an automated schedule that expects customer data to be available at the source location within a defined time window. However, in some cases, the customer data may be delayed and isn’t made available within that time window.

  8. For each courier group constraint, apply any offsets.

    An offset is a constraint placed on a courier group that defines a range of time that is older than the scheduled time, within which a courier group will accept customer data as valid for the current job. Offset times are in UTC.

    A courier group offset is typically set to be 24 hours. For example, it’s possible for customer data to be generated with a correct file name and datestamp appended to it, but for that datestamp to represent the previous day because of the customer’s own workflow. An offset ensures that the data at the source location is recognized by the courier as the correct data source.

    Warning

    An offset affects couriers in a courier group whether or not they run on a schedule. Manually run courier groups do not take their schedule into consideration when determining the date range; only the input day(s) provided for loading data are used.

  9. Click Save.