Evaluation and Record Context

Tranquil Data applies a consistent evaluation model to API and export queries alike. Evaluation is done at the record level, where the Context for each record is used to choose the applicable policy from configuration. Each decision is audited, and the result may lead to redacting fields from the record. This section explains, at a high level, how this process works and what it means from an integration perspective.

Core Concepts

Evaluation is triggered via the redaction interfaces and via queries on exported interfaces. In all cases, evaluation runs the same way. For each record that is read or written, the associated context is retrieved or created (see below) and provided as input to the policy engine. The engine evaluates the request against the context of the record and any associated users, walking the tree of policies to find the rule (if any) that applies. If a rule applies to the request, and the result is to permit the action, then the new context about this record is tracked in the Context Graph. An action may be permitted only after the set of Fields is first redacted down to those from the allowed Categories.
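The per-record flow above can be sketched as follows. This is a minimal illustration, not Tranquil Data's actual API: the function name, the dict shapes, and the rule structure are all assumptions.

```python
def evaluate_record(record, field_categories, rule):
    """Sketch of one evaluation: if a rule applies and permits the action,
    redact the record down to fields in the rule's allowed categories.

    record:           dict of field name -> value
    field_categories: dict of field name -> Category (from Configuration)
    rule:             dict with "permits" (bool) and "allowed_categories" (set),
                      or None if no rule in the policy tree applied
    """
    if rule is None or not rule["permits"]:
        return None  # no applicable permitting rule: the action is denied
    # Redaction happens before the action is permitted
    return {field: value for field, value in record.items()
            if field_categories.get(field) in rule["allowed_categories"]}
```

With a rule allowing only the `identity` and `contact` categories, a record carrying a `government_id` field would be returned with that field redacted away.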

For each decision, an entry is written to the decision trace stream. The entry includes details like the allowed categories, the specific context used to resolve the decision, the specific rule that applied, and a one-line message explaining the rationale to an end-user. It also carries a unique identifier that appears in the change data capture stream, so that as context forms it can be connected to the decisions that allowed it to form.
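A trace entry carrying the fields described above might look like the following sketch; the key names and helper function are assumptions for illustration, not the stream's real schema.

```python
import uuid

def make_trace_entry(allowed_categories, context_id, rule_id, message):
    """Build an illustrative decision-trace entry. The decision_id stands in
    for the unique identifier that also appears in the change data capture
    stream, linking new context back to the decision that allowed it."""
    return {
        "decision_id": str(uuid.uuid4()),
        "allowed_categories": sorted(allowed_categories),
        "context": context_id,   # the specific context used to resolve the decision
        "rule": rule_id,         # the specific rule that applied
        "message": message,      # one-line, end-user-readable rationale
    }
```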

Exported Interfaces

An export is a network endpoint, opened on a Tranquil Data process, that is defined with configuration properties to allow specific users access to policy-controlled data. Tranquil Data provides an API to export data paths. An export may provide direct access to data services, like a database or object store, or may act as a component in a pipeline or data transform.

In the Tranquil Data™ Trusted Flow Edition, all exports are associated with the policy and model definitions driven by configuration. The enterprise engine supports the creation of additional "domains" that have their own policies, and each export may be associated with a specific domain.

Datastore Export

When access to a datastore is exported through Tranquil Data, an endpoint is opened that supports the appropriate wire-protocol and/or REST API. In this role, a client that expects to connect to a given store, e.g. an application connecting to a Postgres server, will instead seamlessly connect to Tranquil Data. In turn, Tranquil Data establishes communication on the back-end with the actual service to read and write data in response to client operations.

As a transparent intermediary, Tranquil Data never changes content exchanged with the backing store unless explicitly asked to run in redaction mode. Similarly, Tranquil Data does not change the structure or schema of any datastore, nor does it store any metadata in those backing data services. From a security perspective, datastore export supports clients that should not have direct access to a data service but need access to a specific subset of data for a specific purpose. For instance, a client could be given an OAuth token that only allows a specific Purpose from Configuration to be asserted, and therefore ensures that only fields from the allowed Categories for that Purpose are returned.
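The OAuth example above can be sketched as a purpose-to-categories lookup applied to query results. The purpose names, categories, and the `purpose` token claim are all invented for illustration; real purposes and categories come from Configuration.

```python
# Hypothetical mapping from an assertable Purpose to its allowed Categories.
PURPOSE_CATEGORIES = {
    "billing": {"contact", "payment"},
    "support": {"contact"},
}

def rows_for_token(rows, field_categories, token):
    """Filter query results down to fields allowed by the Purpose the
    token is permitted to assert (a "purpose" claim is assumed here)."""
    allowed = PURPOSE_CATEGORIES.get(token.get("purpose"), set())
    return [{f: v for f, v in row.items() if field_categories.get(f) in allowed}
            for row in rows]
```

A token that can only assert `support` would see contact fields but never payment or government-id fields, even though the client is issuing ordinary queries against what looks like the real datastore.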

Each datastore type has its own definition of a record. In SQL databases, a record is a row at a given version. In document databases, it is a document. In object stores, it is the bucketing structure, possibly combined with content if the content is known to be of a supported form that itself has some usable structure.

Data-flow Exports

Tranquil Data also supports exporting functionality that is associated with specific flows or transforms, instead of specific databases. These exports are used to validate policies or redact data based on those policies, return caller requirements like data retention or expected notification, audit access and history, or surface tags, properties, and risk-quantifiers. These exported interfaces are designed to work in parallel with any schema or function-based transforms, lineage trackers, or master-data processes. They make it simple to track incoming data against compliance and business processes, enforce those rules as data flows to warehouses or AI staging services, and ensure that data is shared correctly with third parties.

As with datastore exports, data-flow exports use common security and audit definitions. Because they do not support a specific database wire protocol, they take open-ended queries that may represent data to act on, or an operation to perform on another component of the pipeline.

Field Mapping

The mapping interfaces tell Tranquil Data how to interpret record structure. They map from a specific structured format into the Configuration, so that different streams, flows, and services can all use the same defined rules. When an API query is made, or when a datastore is exported, a specific mapping group is named. That group contains a collection of definitions, each of which names a recordField and the associated Category and Field in Configuration that it maps to. The value of recordField may be a flat string, as in a CSV field-name, or it may be hierarchical, as in a Postgres record, which uses the format [schema].tablename.columnname. Each datastore type defines its own field structure.
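A mapping group with both flat and hierarchical recordField values might look like this sketch. The key names mirror the prose (recordField, Category, Field), but the exact configuration syntax and the example names are assumptions.

```python
# Illustrative mapping group: each definition ties a recordField in the
# datastore to a Category and Field in Configuration.
mapping_group = {
    "name": "patient-records",
    "definitions": [
        # Flat field name, as in a CSV column
        {"recordField": "email",
         "category": "Contact", "field": "EmailAddress"},
        # Hierarchical Postgres-style name: [schema].tablename.columnname
        {"recordField": "public.patients.date_of_birth",
         "category": "Identity", "field": "DateOfBirth"},
    ],
}

# The hierarchical form splits into schema, table, and column parts
schema, table, column = mapping_group["definitions"][1]["recordField"].split(".")
```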

Mapping also defines which fields provide resourceContext. If a field is tagged with this attribute, the value of that field is treated as the name of a user, and that user should be connected to this record in-context. As a result, the user's attributes and relationships are available as the Configuration is evaluated. While a given mapping group may specify more than one of these fields, it is illegal to resolve more than one value for any given record.

Record Context

When a record is first written to an exported datastore, record context is created that represents key metadata about that record. This includes when it was written, where the request came from, under which policies it was written, etc. The record's context may also include user-directed values like a tag, a score, or the value of a specific record field. The record is connected to a session that represents any grouping of operations (like a transaction), so that if multiple reads or writes happen together, that is also captured in-context. As that record is updated, the old and new versions are connected in-context.

That connection can be made for records that are said to have identity. A record has an identity when it's stored in a datastore that uniquely identifies data, and when the datastore is exported with the directive to track record context. In this case, the record context is written to the durable context store, and any subsequent reads ignore the content of the record and make policy decisions based solely on the record's context. This provides a view over time of the evolution of each record, and how that record was connected to other records and users.
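How identity lets versions chain together in the durable context store can be sketched minimally. The class and method names below are assumptions; the point is only that each write links to the previous version and reads consult context, not record content.

```python
class ContextStore:
    """Minimal sketch of identity-tracked record context: each write for the
    same record id is connected to the previous version, and reads use the
    stored context alone, never the record's content."""

    def __init__(self):
        self._versions = {}  # record id -> ordered list of context entries

    def write(self, record_id, context):
        chain = self._versions.setdefault(record_id, [])
        if chain:
            context["previous_version"] = len(chain) - 1  # connect old and new
        chain.append(context)

    def current_context(self, record_id):
        """Policy decisions for reads are based on this, not record content."""
        return self._versions[record_id][-1]

    def history(self, record_id):
        """The record's evolution over time."""
        return list(self._versions[record_id])
```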

A record may also be anonymous, meaning that it has no identity and does not maintain a connection to any other record in context. In the current version of the product, any record context created via the decision or redaction APIs is always anonymous. The context is created on-the-fly for each read, and dropped once the operation is complete (you may configure the product to include this record in the change data capture stream). A datastore may also be exported as anonymous, so that there's no context kept over time.