File format: Streaming JSON

Streaming JSON is a way to send data in increments, with each increment formatted as newline-delimited JSON (NDJSON). Each line in an NDJSON file is a valid JSON value.
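
For example, a streaming JSON file might contain lines similar to the following, where each line is a complete JSON object. The field names and values shown here are hypothetical placeholders:

{"customer_id": "123", "email": "justine@example.com", "signup_date": "2023-04-01"}
{"customer_id": "456", "email": "devi@example.com", "signup_date": "2023-04-02"}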

Pull streaming JSON files

To pull streaming JSON files to Amperity:

  1. Select a filedrop data source.

  2. Use an ingest query to select fields from the streaming JSON file to pull to Amperity.

  3. Configure a courier to specify the location and name of the streaming JSON file, along with the name of an ingest query.

  4. Define a feed to associate the fields that were selected from the streaming JSON file with semantic tags for customer profiles and interactions, as necessary.

Data sources

Pull streaming JSON files to Amperity using any filedrop data source.

Ingest queries

An ingest query is a SQL statement that may be applied to data prior to loading it into a domain table. An ingest query is defined using Spark SQL syntax.

Use Spark SQL to define an ingest query for the streaming JSON file. Use a SELECT statement to specify which fields should be pulled to Amperity. Apply transforms to those fields as necessary.
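
A minimal sketch of an ingest query might look similar to the following. The field names, the cast applied to the date field, and the table reference are hypothetical placeholders for values that would come from your streaming JSON file:

SELECT
  customer_id
  ,email
  ,CAST(signup_date AS DATE) AS signup_date
FROM STREAMING_JSON_FILE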

Couriers

A courier brings data from an external system to Amperity.

A courier must specify the location of the streaming JSON file, and then define how that file is to be pulled to Amperity. This is done using a combination of configuration blocks:

  1. Load settings

  2. Load operations

Load settings

Use courier load settings to specify the path to the streaming JSON file, a file tag (which can be the same as the name of the streaming JSON file), and the "application/x-json-stream" content type.

{
  "object/type": "file",
  "object/file-pattern": "'path/to/file'-YYYY-MM-dd'.ndjson'",
  "object/land-as": {
     "file/header-rows": 1,
     "file/tag": "FILE_NAME",
     "file/content-type": "application/x-json-stream"
  }
},

Load operations

Use courier load operations to associate a feed ID with the courier, apply the same file tag that was used for the load settings, and specify the name of the ingest query.

{
  "FEED_ID": [
    {
      "type": "spark-sql",
      "spark-sql-files": [
        {
          "file": "FILE_NAME"
        }
      ],
      "spark-sql-query": "INGEST_QUERY_NAME"
    }
  ]
}

Feeds

A feed defines how data should be loaded into a domain table, including which columns are required and which columns should be associated with a semantic tag indicating that the column contains customer profile (PII) or transaction data.

Apply profile (PII) semantics to customer records, and transaction and product catalog semantics to interaction records. Use blocking key (bk), foreign key (fk), and separation key (sk) semantic tags to define how Amperity should understand field relationships when those values are present across your data sources.

Send streaming JSON files

Important

Amperity does not send streaming JSON files to downstream workflows.