Start here

Welcome to the Amperity reference documentation! This collection of topics covers Amperity from A to Z.

Topics

This set of topics covers the following components of Amperity and is organized alphabetically:

Campaigns

Use the Campaigns page to configure one-time or recurring campaigns. Use a combination of audiences, exclusion lists, control and treatment groups, and sub-audiences to build any type of campaign.

Courier groups

A courier group is a list of one (or more) couriers that are run as a group. Use courier groups to define ad hoc and automatic schedules that bring data to Amperity.

Couriers

A courier brings data from an external system to Amperity. A courier relies on a feed to know which fileset to bring to Amperity for processing.

Credentials

The Credentials page provides a consolidated view for all credentials used to provide access to sources and destinations in your tenant.

Data exports

A database may be configured to export one (or more) tables, or the entire database, from Amperity.

Data tables

The data tables in your customer 360 database are built using the standard outputs of the Stitch process.

Data templates

A data template defines how columns in Amperity data structures are sent to downstream workflows.

Databases

The Customer 360 page lets you use Spark SQL to build any database or table from your raw data combined with the standard outputs of the Stitch process. Those outputs provide the stable linking key known as the Amperity ID.

Domain tables

Domain tables that contain customer records are made available to the Stitch process to identify unique individuals and assign them Amperity IDs.

Domain tables that contain interaction records are used to create attributes that are associated with the unique individuals who have been assigned Amperity IDs.

Feeds

A feed defines how data should be loaded to Amperity. Use the Feed Editor to standardize field types and apply semantic tags to all of the incoming fields that contain customer and/or interaction records.

File formats

Amperity supports Apache Avro, Apache Parquet, CBOR, CSV, JSON, NDJSON, PSV, streaming JSON, TSV, and XML file formats.

Ingest queries

An ingest query is a SQL statement that may be applied to data prior to loading it to Amperity.

Orchestration groups

An orchestration group defines the schedule that is used to send data from Amperity.

Orchestrations

An orchestration defines the relationship between query results and a destination.

Policies

A policy represents a set of actions that are available to a user when that policy is assigned to them. All actions within Amperity are controlled by a policy.

Queries

Build queries against your databases using Presto SQL, and then use orchestrations to send those results to any downstream system or workflow.
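For example, a query that feeds an orchestration might look like the following Presto SQL sketch. The table and column names (Customer360, amperity_id, email, total_order_value) are illustrative only; the tables in your customer 360 database will differ.

   -- Return one row per customer profile for a downstream destination.
   -- Table and column names are placeholders, not your tenant's schema.
   SELECT
     amperity_id
     , email
     , total_order_value
   FROM Customer360
   WHERE email IS NOT NULL
   ORDER BY total_order_value DESC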

Recent activity

Use recent activity to learn more about the state of workflows that are currently running in your tenant.

Sandboxes

Use sandboxes to safely make configuration changes to your production tenant.

Segments

Use segments and segment insights to build audiences, and then assign those audiences to campaigns.

Semantic tags

A semantic tag standardizes profile (PII), transaction, and other important customer details across all columns in all data tables.

Single sign-on (SSO)

Amperity supports the use of single sign-on (SSO) to manage the users who can access your tenant. Learn more about how it works with Amperity, and then request to enable SSO for your tenant.

SQL – Presto SQL

Presto is a distributed SQL query engine that is designed to efficiently query vast amounts of data. The Amperity segment editors use Presto SQL to define segments, which are SQL queries that return data from stitched data tables.

Use the Amperity Presto SQL reference to learn more about how you can use Presto SQL to build queries and segments that return data from your customer 360 database.
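As a sketch, a segment that builds an audience is a Presto SQL query that returns the Amperity IDs matching a condition. The Transaction_Attributes table and latest_order_datetime column below are assumptions; your stitched data tables will use different names.

   -- Build an audience of customers with a purchase in the last 90 days.
   -- Table and column names are assumptions, not standard output names.
   SELECT amperity_id
   FROM Transaction_Attributes
   WHERE latest_order_datetime >= current_date - INTERVAL '90' DAY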

SQL – Spark SQL

Spark SQL is a high-performance SQL query engine that Amperity uses to ingest data, create domain tables, and extend the outcome of the Stitch process in your customer 360 database.

Use the Amperity Spark SQL reference to learn how to build ingest queries, custom domain tables, and database tables with Spark SQL. You may also refer to the official Spark SQL (version 3.1.2) documentation for information about functions that are not covered by the Amperity Spark SQL reference.
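As an illustration, an ingest query written in Spark SQL might reshape an incoming file before it is loaded to a feed. The incoming_orders table and its columns in the following sketch are hypothetical.

   -- Standardize field types and clean values before loading to a feed.
   -- "incoming_orders" and its columns are placeholder names.
   SELECT
     order_id
     , CAST(order_total AS DECIMAL(10,2)) AS order_total
     , TRIM(LOWER(email)) AS email
     , TO_DATE(order_date, 'yyyy-MM-dd') AS order_date
   FROM incoming_orders
   WHERE email IS NOT NULL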

Stitch

Stitch uses patented algorithms to evaluate massive volumes of data and discover the hidden connections in your customer records that identify unique individuals. Stitch outputs a unified collection of data that assigns a unique identifier, the Amperity ID, to each individual discovered within your customer records.

Workflows

The Workflows page provides a view of all of the workflows in your tenant.