Synthetic Data Vault (SDV)

The Synthetic Data Vault (SDV) is a Synthetic Data Generation ecosystem of libraries that allows users to easily learn single-table, multi-table and timeseries datasets and then generate new Synthetic Data that has the same format and statistical properties as the original dataset.

Synthetic data can then be used to supplement, augment and in some cases replace real data when training Machine Learning models. Additionally, it enables the testing of Machine Learning or other data-dependent software systems without the risk of exposure that comes with data disclosure.

Under the hood, it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, it employs unique hierarchical generative modeling and recursive sampling techniques.

Install

Requirements

SDV has been developed and tested on Python 3.6, 3.7 and 3.8.

Also, although it is not strictly required, using a
virtualenv is highly recommended in order to avoid
interfering with other software installed on the system where SDV is run.
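
For example, a virtualenv can be created with the built-in venv module and activated
before installing SDV (the environment name below is only an example):

python -m venv sdv-env        # create the environment
source sdv-env/bin/activate   # activate it (Linux / macOS)
pip install sdv               # install SDV inside the environment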

Install with pip

The easiest and recommended way to install SDV is using pip:

pip install sdv

This will pull and install the latest stable release from PyPI.

If you want to install from source or contribute to the project please read the
Contributing Guide.

Quickstart

In this short tutorial we will guide you through a series of steps that will help you
get started with SDV.

1. Model the dataset using SDV

To model a multi-table, relational dataset, we follow two steps. In the first step, we will load
the data and configure the metadata. In the second step, we will use the sdv API to fit and
save a hierarchical model. We will cover these two steps in this section using an example dataset.

Step 1: Load example data

SDV comes with a toy dataset to play with, which can be loaded using the sdv.load_demo
function:

from sdv import load_demo

metadata, tables = load_demo(metadata=True)

This will return two objects:

  1. A Metadata object with all the information that SDV needs to know about the dataset.

For more details about how to build the Metadata for your own dataset, please refer to the
Working with Metadata
tutorial.

  2. A dictionary containing three pandas.DataFrames with the tables described in the
    metadata object.

The returned objects contain the following information:

{
    'users':
            user_id country gender  age
          0        0     USA      M   34
          1        1      UK      F   23
          2        2      ES   None   44
          3        3      UK      M   22
          4        4     USA      F   54
          5        5      DE      M   57
          6        6      BG      F   45
          7        7      ES   None   41
          8        8      FR      F   23
          9        9      UK   None   30,
    'sessions':
          session_id  user_id  device       os
          0           0        0  mobile  android
          1           1        1  tablet      ios
          2           2        1  tablet  android
          3           3        2  mobile  android
          4           4        4  mobile      ios
          5           5        5  mobile  android
          6           6        6  mobile      ios
          7           7        6  tablet      ios
          8           8        6  mobile      ios
          9           9        8  tablet      ios,
    'transactions':
          transaction_id  session_id           timestamp  amount  approved
          0               0           0 2019-01-01 12:34:32   100.0      True
          1               1           0 2019-01-01 12:42:21    55.3      True
          2               2           1 2019-01-07 17:23:11    79.5      True
          3               3           3 2019-01-10 11:08:57   112.1     False
          4               4           5 2019-01-10 21:54:08   110.0     False
          5               5           5 2019-01-11 11:21:20    76.3      True
          6               6           7 2019-01-22 14:44:10    89.5      True
          7               7           8 2019-01-23 10:14:09   132.1     False
          8               8           9 2019-01-27 16:09:17    68.0      True
          9               9           9 2019-01-29 12:10:48    99.9      True
}
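
If you want to explore these objects interactively before modeling, a quick sanity check
could look like the following sketch. It only relies on the standard dict and pandas
interfaces described above, plus Metadata.to_dict, which is assumed to be available in
your SDV version:

# Preview the loaded tables: the dict keys are the table names
print(list(tables.keys()))
print(tables['users'].head())

# Export the Metadata object as a plain dictionary for inspection
# (to_dict is an assumption; check your installed SDV version)
print(metadata.to_dict())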

Step 2: Fit a model using the SDV API

First, we build a hierarchical statistical model of the data using SDV. For this, we will
create an instance of the sdv.SDV class and use its fit method.

During this process, SDV will traverse across all the tables in your dataset following the
primary key-foreign key relationships and learn the probability distributions of the values in
the columns.

from sdv import SDV

sdv = SDV()
sdv.fit(metadata, tables)

Once the modeling has finished, you can save your fitted SDV instance for later use
by calling its save method.

sdv.save('sdv.pkl')

The generated pkl file will not include any of the original data in it, so it can be
safely sent to where the synthetic data will be generated without any privacy concerns.

2. Sample data from the fitted model

In order to sample data from the fitted model, we will first need to load it from its
pkl file. Note that you can skip this step if you are running all the steps sequentially
within the same Python session.

sdv = SDV.load('sdv.pkl')

After loading the instance, we can sample synthetic data by calling its sample method.

samples = sdv.sample()

The output will be a dictionary with the same structure as the original tables dict,
but filled with synthetic data instead of real data.
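
For example, you could quickly compare the shape and columns of each synthetic table
against its real counterpart, reusing the tables and samples objects from above
(a minimal sketch):

for name, synthetic in samples.items():
    # Each value is a pandas.DataFrame with the same columns as the real table
    real = tables[name]
    print(name, 'real:', real.shape, 'synthetic:', synthetic.shape)
    print(list(synthetic.columns))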

Finally, if you want to evaluate how similar the sampled tables are to the real data,
please have a look at our evaluation framework or visit the SDMetrics library.
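
As a starting point, a score can be computed with the sdv.evaluation.evaluate helper;
the call below is a sketch assuming the evaluate(synthetic_data, real_data) signature
available in the 0.x releases, so verify it against your installed version:

from sdv.evaluation import evaluate

# Compare one synthetic table with its real counterpart
# (assumed signature: evaluate(synthetic_data, real_data) -> aggregate score)
score = evaluate(samples['users'], tables['users'])
print(score)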
