Configure destination for Throtle

Throtle provides brands and marketers with a complete view of their customers and accurate targeting across all devices and channels. Target customers using connected TVs, cookieless identities, and mobile advertising IDs (MAIDs), such as Apple's Identifier for Advertisers (IDFA) and Google's Advertising ID (GAID).

Get details

Amperity can be configured to send data to a customer-managed Amazon S3 bucket using cross-account roles; Throtle can then be connected to that Amazon S3 bucket.

Configure cross-account roles

Amperity prefers to pull data from and send data to customer-managed cloud storage.

Amperity requires using cross-account role assumption to manage access to Amazon S3 to ensure that customer-managed security policies control access to data.

This approach ensures that customers can:

  • Directly manage the IAM policies that control access to data

  • Directly manage the files that are available within the Amazon S3 bucket

  • Modify access without requiring involvement by Amperity; access may be revoked at any time by either Amazon AWS account, after which data sharing ends immediately

  • Directly troubleshoot incomplete or missing files

Note

After setting up cross-account role assumption, a list of files (by filename and file type), along with any sample files, must be made available to allow for feed creation. These files may be placed directly into the shared location after cross-account role assumption is configured.

Can I use an Amazon AWS Access Point?

Yes, but with the following limitations:

  1. The direction of access is Amperity accessing files that are located in a customer-managed Amazon S3 bucket

  2. A credential-free role-to-role access pattern is used

  3. Traffic is not restricted to VPC-only

To configure an S3 bucket for cross-account role assumption

The following steps describe how to configure Amperity to use cross-account role assumption to pull data from (or push data to) a customer-managed Amazon S3 bucket.

Important

These steps require configuration changes to customer-managed Amazon AWS accounts and must be done by users with administrative access.

Step 1.

Open the Destinations tab to configure credentials for Throtle.

Click the Add destination button to open the Add destination dialog box.


Select Throtle from the Plugin dropdown.

Step 2.

From the Credentials dialog box, enter a name for the credential, select the iam-role-to-role credential type, and then select “Create new credential”.

Step 3.

Next configure the settings that are specific to cross-account role assumption.


The Amperity Role ARN and External ID fields are filled in automatically with the Amazon Resource Name (ARN) for your Amperity tenant and its external ID.

You must provide the values for the Target Role ARN and S3 Bucket Name fields. Enter the target role ARN for the IAM role that Amperity will use to access the customer-managed Amazon S3 bucket, and then enter the name of the Amazon S3 bucket.

Step 4.

Review the following sample policy, and then add a similar policy as a trust policy to the IAM role that is used to manage access to the customer-managed Amazon S3 bucket. This trust policy allows Amperity to assume the role and access the bucket.

The policy for the customer-managed Amazon S3 bucket is unique, but will be similar to:

{
  "Statement": [
    {
      "Sid": "AllowAmperityAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account:role/resource"
       },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
           "sts:ExternalId": "01234567890123456789"
        }
      }
    }
  ]
}

The value for the role ARN is similar to:

arn:aws:iam::123456789012:role/prod/amperity-plugin

An external ID is an alphanumeric string between 2-1224 characters (without spaces) and may include the following symbols: plus (+), equal (=), comma (,), period (.), at (@), colon (:), forward slash (/), and hyphen (-).
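The trust policy and external ID rules above can be sanity-checked with a short script. The sketch below is illustrative only (it is not an Amperity or AWS tool, and the policy values are the placeholders from this topic): it confirms that a policy grants sts:AssumeRole gated on a well-formed external ID.

```python
import json
import re

# Sample trust policy from the step above; the role ARN and external ID
# are the placeholder values shown in this topic.
TRUST_POLICY = """
{
  "Statement": [
    {
      "Sid": "AllowAmperityAccess",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/prod/amperity-plugin"},
      "Action": "sts:AssumeRole",
      "Condition": {"StringEquals": {"sts:ExternalId": "01234567890123456789"}}
    }
  ]
}
"""

# External IDs: 2-1224 characters, no spaces; letters, digits, and + = , . @ : / -
EXTERNAL_ID_RE = re.compile(r"^[A-Za-z0-9+=,.@:/-]{2,1224}$")

def check_trust_policy(policy_json: str) -> bool:
    """Return True when the policy grants sts:AssumeRole gated by a valid external ID."""
    policy = json.loads(policy_json)
    for stmt in policy["Statement"]:
        if stmt.get("Effect") != "Allow" or stmt.get("Action") != "sts:AssumeRole":
            continue
        external_id = (
            stmt.get("Condition", {}).get("StringEquals", {}).get("sts:ExternalId", "")
        )
        if EXTERNAL_ID_RE.match(external_id):
            return True
    return False

print(check_trust_policy(TRUST_POLICY))  # True
```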

Step 5.

Click Continue to test the configuration (and validate the connection) to the customer-managed Amazon S3 bucket, after which you will be able to continue the steps for adding a courier.

Add destination

Use a sandbox to configure a destination for Throtle. Before promoting your changes, send a test audience, and then verify the results in Throtle. After verifying the end-to-end workflow, push the destination from the sandbox to production.

To add a destination for Throtle

Step 1.

Open the Destinations page, and then click the Add destination button.

Add

To configure a destination for Throtle, do one of the following:

  1. Click the row in which Throtle is located. Destinations are listed alphabetically, and you can scroll up and down the list.

  2. Search for Throtle. Start typing “thro”. The list filters to show only matching destinations. Select “Throtle”.

Step 2.

Select the credential for Throtle from the Credential dropdown, and then click Continue.

Tip

Click the “Test connection” link on the “Configure destination” page to verify that Amperity can connect to Throtle.

Step 3.

In the “Destination settings” dialog box, assign the destination a name and description that ensure other users of Amperity can recognize when to use this destination.

Configure business user access

By default a destination is available to all users who have permission to view personally identifiable information (PII).

Enable the Admin only checkbox to restrict access to only users assigned to the Datagrid Operator and Datagrid Administrator policies.

Enable the PII setting checkbox to allow limited access to PII for this destination.

Use the Restrict PII access policy option to prevent users from viewing data marked as PII anywhere in Amperity and from sending data to downstream workflows.

Step 4.

Configure the following settings, and then click “Save”.

Compression

The compression format to apply to the file. May be one of “GZIP”, “None”, “TAR”, “TGZ”, or “ZIP”.

Escape character

The escape character to use in the file output. Applies to CSV, TSV, PSV, and custom delimiter file types.

When an escape character is not specified and the quote mode is “None”, files are sent with unescaped and unquoted data. If you do not specify an escape character, select a non-“None” option as the quote mode.
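Python's csv module illustrates the interplay between escape character and quote mode described above; this is a general sketch of the behavior, not Amperity's output code.

```python
import csv
import io

# A row whose second field contains the delimiter (a comma).
row = ["Ana", "100 Main St, Apt 2", "Seattle"]

# Quote mode "None" plus an escape character: the delimiter is escaped, not quoted.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_NONE, escapechar="\\").writerow(row)
print(buf.getvalue())  # Ana,100 Main St\, Apt 2,Seattle

# Quote mode "fields with special characters only": the field is quoted instead.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
print(buf.getvalue())  # Ana,"100 Main St, Apt 2",Seattle

# Quote mode "None" with no escape character fails outright.
buf = io.StringIO()
try:
    csv.writer(buf, quoting=csv.QUOTE_NONE).writerow(row)
except csv.Error as exc:
    print("error:", exc)
```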

File format

Required

Configure Amperity to send CSV, TSV, or PSV files to an Amazon S3 bucket.

Some file formats allow a custom delimiter. Choose the “Custom delimiter” file format, and then add a single character to represent the custom delimiter.

Apache Parquet files only

The extension for Apache Parquet files may be excluded from the directory name.

Filename template

A filename template defines the naming pattern for files that are sent from Amperity. Specify the name of the file, and then use Jinja-style string formatting to append a date or timestamp to the filename.
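To illustrate the idea, the sketch below expands a Jinja-style template like the ones shown in this topic. It is a simplified stand-in, not Amperity's template engine, and the format tokens (YYYY, MM, dd) are assumptions based on the examples shown here.

```python
import re
from datetime import datetime, timezone

# Map illustrative date tokens to strftime directives.
FORMAT_MAP = {"YYYY": "%Y", "MM": "%m", "dd": "%d"}

def render_filename(template: str, now: datetime) -> str:
    """Expand {{now|format:'...'}} expressions in a filename template."""
    def expand(match: re.Match) -> str:
        expr = match.group(1).strip()
        if expr.startswith("now|format:"):
            pattern = expr.split(":", 1)[1].strip("'\"")
            for token, directive in FORMAT_MAP.items():
                pattern = pattern.replace(token, directive)
            return now.strftime(pattern)
        raise ValueError(f"unsupported expression: {expr}")
    return re.sub(r"\{\{(.+?)\}\}", expand, template)

ts = datetime(2025, 3, 14, tzinfo=timezone.utc)
print(render_filename("customers_{{now|format:'YYYY-MM-dd'}}.csv", ts))
# customers_2025-03-14.csv
```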

Header

Enable to include header rows in output files.

PGP public key

The PGP public key that Amperity uses to encrypt files.

Quote mode

The quote mode to use within the file. May be one of “all fields”, “all non-NULL fields”, “fields with special characters only”, “all non-numeric fields”, or “None”.

Unescaped, unquoted files may occur when quote mode is “None” and an escape character is not specified.

S3 prefix

Required. The S3 prefix is a string used to filter results to include only objects whose names begin with this prefix. When set, this value returns a list of object names relative to the root of the bucket.
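Prefix filtering behaves like a startswith match on object keys. The sketch below simulates that behavior with hypothetical key names; the commented boto3 call shows the shape of the equivalent real request.

```python
# Hypothetical object keys in a customer-managed bucket.
object_keys = [
    "throtle/outbound/2025-03-14/customers.csv",
    "throtle/outbound/2025-03-14/customers.csv.DONE",
    "other-team/reports/summary.csv",
]

def list_with_prefix(keys, prefix):
    """Return keys beginning with the prefix, as S3 ListObjectsV2 filtering would."""
    return [k for k in keys if k.startswith(prefix)]

# boto3 equivalent (requires AWS credentials):
#   s3.list_objects_v2(Bucket="customer-bucket", Prefix="throtle/outbound/")
print(list_with_prefix(object_keys, "throtle/outbound/"))
```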

Success file

Enable to send a “.DONE” file when Amperity has finished sending data.

If a downstream sensor is listening for files sent from Amperity, configure that sensor to listen for the presence of the “.DONE” file.
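A downstream sensor can be as simple as a loop that polls the landing location for the marker file. The sketch below is a hypothetical sensor; the directory, timeout, and poll interval are illustrative.

```python
import time
from pathlib import Path

def wait_for_done_file(directory: Path, timeout_s: float = 60.0, poll_s: float = 1.0) -> Path:
    """Poll a landing directory until a .DONE marker appears, then return its path."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        done_files = sorted(directory.glob("*.DONE"))
        if done_files:
            return done_files[0]
        time.sleep(poll_s)
    raise TimeoutError(f"no .DONE file appeared in {directory} within {timeout_s}s")
```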

Split outputs

Split delimiter-separated output (CSV, PSV, TSV, or files with custom delimiters) into multiple files to ensure downstream file limits are not exceeded.

Choose “Rows” and set “Rows limit” to a value between “50000” and “10000000”. This is the maximum number of rows for split output files.

Choose “Megabytes” and set “Megabytes limit” to a value between “1 MB” and “2000 MB”. This is the maximum file size.

Additional configuration is required for filename templates.

Set the value of “Split filename template” to “{{file_number}}.csv” to apply a unique seven-digit, left-padded integer to the filename. For example: “0000001.csv”, “0000002.csv”, and “0000003.csv”.

Use the “Split file directory template” to name the directory into which split files are added.

For example: if the value of “Split file directory template” is “{{now|format:'YYYY'}}.tgz” and the value of “Split filename template” is “{{file_number}}.csv”, Amperity will output a gzipped tarball named “2025.tgz” with subfiles named “0000001.csv”, “0000002.csv”, and “0000003.csv”.
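The row-based split described above can be sketched as follows. The tiny row count and limit are for illustration only (the real “Rows limit” range is 50,000 to 10,000,000), and this is not Amperity's implementation.

```python
import csv
import io

def split_rows(rows, rows_limit):
    """Split rows into CSV chunks named with a seven-digit, left-padded counter."""
    files = {}
    for start in range(0, len(rows), rows_limit):
        buf = io.StringIO()
        csv.writer(buf).writerows(rows[start:start + rows_limit])
        file_number = start // rows_limit + 1
        files[f"{file_number:07d}.csv"] = buf.getvalue()
    return files

rows = [[f"customer_{i}"] for i in range(5)]
print(sorted(split_rows(rows, rows_limit=2)))
# ['0000001.csv', '0000002.csv', '0000003.csv']
```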

Use Zip64?

Enable to apply Zip64 data compression to large files.

Row Number

Select to include a row number column in the output file. Applies to CSV, TSV, PSV, and custom delimiter file types.

Use the Column name setting to specify the name of the row number column in the output file. The name of this column must have fewer than 1028 characters and may only contain numbers, letters, underscores, and hyphens. Default value: “row_number”.
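The column name constraints stated above can be expressed as a regular expression; a minimal sketch, assuming “fewer than 1028 characters” means at most 1027.

```python
import re

# Letters, numbers, underscores, and hyphens only; fewer than 1028 characters.
COLUMN_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,1027}$")

def is_valid_column_name(name: str) -> bool:
    """Return True when the name satisfies the stated constraints."""
    return bool(COLUMN_NAME_RE.match(name))

print(is_valid_column_name("row_number"))  # True
print(is_valid_column_name("row number"))  # False
```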

Step 5.

After configuring this destination, users may use:

  • Orchestrations to send query results

  • Orchestrations and campaigns to send audiences

  • Orchestrations and campaigns to send offline events