The purpose of exploratory analysis is to understand your data and any idiosyncrasies which may be relevant to the task of data linking.
Splink includes functionality to visualise and summarise your data, to identify characteristics most salient to data linking.
In this notebook we perform some basic exploratory analysis, and interpret the results.
Read in the data¶
For the purpose of this tutorial we will use a 1,000 row synthetic dataset that contains duplicates.
The first five rows of this dataset are printed below.
Note that the `cluster` column represents the 'ground truth': it tells us which rows refer to the same person. In most real linkage scenarios we wouldn't have this column (it is what Splink is trying to estimate).
```python
import pandas as pd
import altair as alt

alt.renderers.enable("mimetype")

df = pd.read_csv("./data/fake_1000.csv")
df.head(5)
```
Instantiate the linker¶
Most of Splink's core functionality can be accessed as methods on a linker object. For example, to make predictions, you would call `linker.predict()`.
We therefore begin by instantiating the linker, passing in the data we wish to deduplicate.
```python
# Initialise the linker, passing in the input dataset(s)
from splink.duckdb.duckdb_linker import DuckDBLinker

linker = DuckDBLinker(df)
```
Analyse missingness¶
It's important to understand the level of missingness in your data, because columns with higher levels of missingness are less useful for data linking.
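Splink can render this summary as a chart from the linker object (in Splink 3's API, `linker.missingness_chart()`). As a minimal stand-in, the same percentages can be computed directly with pandas; the tiny DataFrame below is hypothetical toy data, not the tutorial's `fake_1000.csv`:

```python
import pandas as pd

# Hypothetical toy data standing in for the tutorial's dataset
df_toy = pd.DataFrame({
    "first_name": ["Julia", None, "Noah", "Julia"],
    "surname": ["Taylor", "Taylor", None, "Taylor"],
    "dob": ["2015-10-29", "2015-10-29", "2002-03-08", None],
})

# Percentage of missing values per column
missingness = df_toy.isna().mean().mul(100).round(1)
print(missingness)
```

Columns with a high percentage here would contribute little to the linkage model, because a null can never agree with another value.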
The above summary chart shows that in this dataset, the forename columns contain nulls, but the level of missingness is relatively low (less than 22%).
Analyse the distribution of values¶
The distribution of values in your data is important for two main reasons:
- Columns with higher cardinality (number of distinct values) are usually more useful for data linking. For instance, date of birth is a much stronger linkage variable than gender.
- The skew of values is important. If you have a `city` column that has 1,000 distinct values, but 75% of them are `London`, this is much less useful for linkage than if the 1,000 values were equally distributed.
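Both properties are easy to measure directly. The sketch below uses two hypothetical columns: a low-cardinality, heavily skewed `city` column and a high-cardinality `dob` column:

```python
import pandas as pd

# Hypothetical columns: skewed, low-cardinality city vs high-cardinality dob
city = pd.Series(["London"] * 6 + ["York", "Bath"])
dob = pd.Series(["1990-01-01", "1985-06-12", "2001-11-30", "1972-03-08",
                 "1999-07-04", "1988-02-17", "1995-09-23", "1979-12-05"])

print("city cardinality:", city.nunique())  # number of distinct values
print("dob cardinality:", dob.nunique())
# Skew: what share of rows does the single most common value account for?
print("share of most common city:", city.value_counts(normalize=True).iloc[0])
```

A match on `dob` here is far stronger evidence of a link than a match on `city`, since most records share the value `London` by chance.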
The `linker.profile_columns()` method creates summary charts to help you understand these aspects of your data. You may input column names (e.g. `first_name`), or arbitrary SQL expressions such as `substr(dob, 1, 4)`.
```python
linker.profile_columns(
    ["first_name", "city", "surname", "email", "substr(dob, 1,4)"],
    top_n=10,
    bottom_n=5,
)
```
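The `top_n` and `bottom_n` parts of the chart correspond to the most and least frequent values in each column. A minimal pandas sketch of the same idea, on a hypothetical `city` column containing a deliberate typo:

```python
import pandas as pd

# Hypothetical column with a typo ("Lndon") to illustrate bottom-n values
city = pd.Series(["London"] * 4 + ["York"] * 2 + ["Lndon"])

counts = city.value_counts()
top_n = counts.head(2)     # most common values; candidates for term-frequency handling
bottom_n = counts.tail(2)  # rarest values; often typos such as "Lndon"
print(top_n)
print(bottom_n)
```

The rarest values are often the most revealing: one-off strings close to a common value usually indicate data entry errors rather than genuinely distinct entities.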
This chart is very information-dense, but here are some key takeaways relevant to our linkage:
- There is strong skew in the `city` field, with around 20% of the values being `London`. We will therefore probably want to use `term_frequency_adjustments` in our linkage model, so that it can weight a match on London differently to a match on a rarer city.
Looking at the "Bottom 5 values by value count", we can see typos in the data in most fields. This tells us this information was possibly entered by hand, or using Optical Character Recognition, giving us an insight into the type of data entry errors we may see.
- Email is a much more uniquely-identifying field than the others, with a maximum value count of 6. It's likely to be a strong linking variable.
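The intuition behind term frequency adjustments can be sketched numerically. In this hypothetical example, agreeing on a rare city is treated as stronger evidence than agreeing on a common one; the inverse of the term frequency is used purely as an illustration, not as Splink's actual weighting formula:

```python
import pandas as pd

# Hypothetical skewed city column: London dominates, Truro is rare
city = pd.Series(["London"] * 20 + ["York"] * 4 + ["Truro"])

# Term frequency: the share of records holding each value
term_freq = city.value_counts(normalize=True)

# Illustration only: the rarer the shared value, the stronger the evidence
print("relative weight of a London match:", 1 / term_freq["London"])
print("relative weight of a Truro match:", 1 / term_freq["Truro"])
```

Without such an adjustment, a model would treat every `city` agreement identically, overstating the evidence from matches on `London`.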
At this point, we have begun to develop a strong understanding of our data. It's time to move on to estimating a linkage model.