Specifying and estimating a linkage model
In the last tutorial we looked at how we can use blocking rules to generate pairwise record comparisons.
Now it's time to estimate a probabilistic linkage model to score each of these comparisons. The resultant match score is a prediction of whether the two records represent the same entity (e.g. are the same person).
The purpose of estimating the model is to learn the relative importance of different parts of your data for the purpose of data linking.
For example, a match on date of birth is a much stronger indicator that two records refer to the same entity than a match on gender. A mismatch on gender may be a stronger indicator against two records referring to the same entity than a mismatch on name, since names are more likely to be entered differently.
The relative importance of different information is captured in the (partial) 'match weights', which can be learned from your data. These match weights are then added up to compute the overall match score.
The match weights are derived from the `m` and `u` parameters of the underlying Fellegi-Sunter model. Splink uses various statistical routines to estimate these parameters. Further details of the underlying theory can be found here, which will help you understand this part of the tutorial.
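As a rough illustration of how a partial match weight is derived from these parameters, here is the standard Fellegi-Sunter arithmetic in plain Python (the `m` and `u` values below are made up for the example):

import math

# Made-up parameters for an 'exact match on surname' comparison level:
m = 0.9   # proportion of truly matching pairs with an exact surname match
u = 0.01  # proportion of truly non-matching pairs with an exact surname match

bayes_factor = m / u                            # 90.0: evidence in favour of a match
partial_match_weight = math.log2(bayes_factor)  # ~6.49
print(partial_match_weight)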
# Begin by reading in the tutorial data again
from splink.duckdb.duckdb_linker import DuckDBLinker
import pandas as pd
import altair as alt
alt.renderers.enable("mimetype")
df = pd.read_csv("./data/fake_1000.csv")
Specifying a linkage model
To build a linkage model, the user defines the partial match weights that Splink needs to estimate. This is done by defining how the information in the input records should be compared.
To be concrete, here is an example comparison:
| first_name_l | first_name_r | surname_l | surname_r | dob_l | dob_r | city_l | city_r | email_l | email_r |
|---|---|---|---|---|---|---|---|---|---|
| Robert | Rob | Allen | Allen | 19710524 | 19710624 | nan | London | roberta25@smith.net | roberta25@smith.net |
What functions should we use to assess the similarity of `Rob` vs. `Robert` in the `first_name` field?
Should similarity in the `dob` field be computed in the same way, or a different way?
Your job as the developer of a linkage model is to decide what comparisons are most appropriate for the types of data you have.
Splink can then estimate how much weight to place on a fuzzy match of `Rob` vs. `Robert`, relative to an exact match on `Robert`, or a non-match. Defining these scenarios is done using `Comparison`s.
Comparisons
The concept of a `Comparison` has a specific definition within Splink: it defines how data from one or more input columns is compared, using SQL expressions to assess similarity.
For example, one `Comparison` may represent how similarity is assessed for a person's date of birth. Another `Comparison` may represent the comparison of a person's name or location. A model is composed of many `Comparison`s, which between them assess the similarity of all of the columns being used for data linking.
Each `Comparison` contains two or more `ComparisonLevel`s, which define discrete gradations of similarity between the input columns within the `Comparison`. As such, `ComparisonLevel`s are nested within `Comparison`s as follows:
Data Linking Model
├─ Comparison: Date of birth
│ ├─ ComparisonLevel: Exact match
│ ├─ ComparisonLevel: One character difference
│ ├─ ComparisonLevel: All other
├─ Comparison: Surname
│ ├─ ComparisonLevel: Exact match on surname
│ ├─ ComparisonLevel: All other
│ etc.
Our example data would therefore result in the following comparisons, for `dob` and `surname`:
| dob_l | dob_r | comparison_level | interpretation |
|---|---|---|---|
| 19710524 | 19710524 | Exact match | great match |
| 19710524 | 19710624 | One character difference | ok match |
| 19710524 | 20000102 | All other | bad match |
| surname_l | surname_r | comparison_level | interpretation |
|---|---|---|---|
| Rob | Rob | Exact match | great match |
| Rob | Jane | All other | bad match |
| Rob | Robert | All other | bad match, this comparison has no notion of nicknames |
More information about comparisons can be found here.
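To make this structure concrete, here is a sketch of how the date-of-birth `Comparison` above could be written out by hand in Splink's settings format. The comparison library functions introduced below generate dictionaries like this for you, and the exact levels shown here are illustrative:

# A hand-specified Comparison for date of birth. Each ComparisonLevel is a
# SQL expression; a record pair falls into the first level whose condition
# it satisfies, so levels run from most to least similar.
dob_comparison = {
    "output_column_name": "dob",
    "comparison_levels": [
        {
            "sql_condition": "dob_l IS NULL OR dob_r IS NULL",
            "label_for_charts": "Null",
            "is_null_level": True,
        },
        {"sql_condition": "dob_l = dob_r", "label_for_charts": "Exact match"},
        {
            "sql_condition": "levenshtein(dob_l, dob_r) <= 1",
            "label_for_charts": "One character difference",
        },
        {"sql_condition": "ELSE", "label_for_charts": "All other"},
    ],
}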
We will now use these concepts to build a data linking model.
Specifying the model using comparisons
Splink includes libraries of comparison functions to make it simple to get started. Let's start by looking at a `Comparison` for `first_name`:
import splink.duckdb.duckdb_comparison_library as cl
first_name_comparison = cl.levenshtein_at_thresholds("first_name", 2)
print(first_name_comparison.human_readable_description)
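This creates a comparison with a null level, an exact match level, a level for a levenshtein distance of 2 or less, and a catch-all 'all other' level. If you want finer gradations, the comparison library functions also accept a list of distance thresholds; a sketch (check the exact signature against your Splink version):

# Assumption: passing a list of thresholds creates one ComparisonLevel per
# threshold (levenshtein <= 1, then levenshtein <= 2), then 'all other'.
first_name_comparison = cl.levenshtein_at_thresholds("first_name", [1, 2])
print(first_name_comparison.human_readable_description)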
Specifying the full settings dictionary
`Comparison`s are specified as part of the Splink `settings`, a Python dictionary which controls all of the configuration of a Splink model:
settings = {
    "link_type": "dedupe_only",
    "comparisons": [
        cl.exact_match("first_name"),
        cl.levenshtein_at_thresholds("surname"),
        cl.levenshtein_at_thresholds("dob", 1),
        cl.exact_match("city", term_frequency_adjustments=True),
        cl.levenshtein_at_thresholds("email"),
    ],
    "blocking_rules_to_generate_predictions": [
        "l.first_name = r.first_name",
        "l.surname = r.surname",
    ],
    "retain_matching_columns": True,
    "retain_intermediate_calculation_columns": True,
}
linker = DuckDBLinker(df, settings)
In words, this settings dictionary says:

- We are performing a `dedupe_only` (the other options are `link_only` or `link_and_dedupe`, which may be used if there are multiple input datasets).
- When comparing records, we will use information from the `first_name`, `surname`, `dob`, `city` and `email` columns to compute a match score.
- The `blocking_rules_to_generate_predictions` state that we will only check for duplicates amongst records where either the `first_name` or `surname` is identical.
- We have enabled term frequency adjustments for the `city` column, because some values (e.g. `London`) appear much more frequently than others.
- We have set `retain_intermediate_calculation_columns` and `retain_matching_columns` to `True` so that Splink outputs additional information that helps the user understand the calculations. If they were `False`, the computations would run faster.
Estimate the parameters of the model
Now that we have specified our linkage model, we need to estimate the `probability_two_random_records_match`, `u`, and `m` parameters.

- The `probability_two_random_records_match` parameter is the probability that two records taken at random from your input data represent a match (typically a very small number).
- The `u` values are the proportion of records falling into each `ComparisonLevel` amongst truly non-matching records.
- The `m` values are the proportion of records falling into each `ComparisonLevel` amongst truly matching records.
You can read more about the theory of what these mean.
We can estimate these parameters using unlabeled data. If we have labels, then we can estimate them even more accurately.
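To see how these parameters combine into a final match score, here is a small illustrative calculation using the standard Fellegi-Sunter arithmetic that Splink applies for you (all numbers are made up):

import math

prior = 1 / 10_000  # illustrative probability_two_random_records_match
prior_weight = math.log2(prior / (1 - prior))

# Illustrative partial match weights, log2(m/u), one per Comparison:
partial_weights = [6.5, 4.2, -1.8]

total_match_weight = prior_weight + sum(partial_weights)
match_probability = 2**total_match_weight / (1 + 2**total_match_weight)
print(total_match_weight, match_probability)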
Estimation of `probability_two_random_records_match`
In some cases, the `probability_two_random_records_match` will be known. For example, if you are linking two tables of 10,000 records and expect a one-to-one match, then you should set this value to `1/10_000` in your settings instead of estimating it.
More generally, this parameter is unknown and needs to be estimated.
It can be estimated accurately enough for most purposes by combining a series of deterministic matching rules and a guess of the recall corresponding to those rules. For further details of the rationale behind this approach see here.
In this example, I guess that the following deterministic matching rules have a recall of about 70%:
deterministic_rules = [
    "l.first_name = r.first_name and levenshtein(r.dob, l.dob) <= 1",
    "l.surname = r.surname and levenshtein(r.dob, l.dob) <= 1",
    "l.first_name = r.first_name and levenshtein(r.surname, l.surname) <= 2",
    "l.email = r.email",
]
linker.estimate_probability_two_random_records_match(deterministic_rules, recall=0.7)
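Roughly speaking, this method counts the pairs matched by the deterministic rules and scales up by the guessed recall. A sketch of the arithmetic, with entirely hypothetical numbers:

# All numbers below are hypothetical, purely to illustrate the arithmetic.
num_input_records = 1_000
num_pairs_from_rules = 2_100  # pairs the deterministic rules matched
recall = 0.7                  # our guess: the rules find ~70% of true matches

estimated_true_matches = num_pairs_from_rules / recall  # ~3,000
total_comparisons = num_input_records * (num_input_records - 1) / 2
probability_two_random_records_match = estimated_true_matches / total_comparisons
print(probability_two_random_records_match)  # ~0.006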
Estimation of `u` probabilities
Once we have the `probability_two_random_records_match` parameter, we can estimate the `u` probabilities.
We estimate `u` using the `estimate_u_using_random_sampling` method, which doesn't require any labels. It works by sampling random pairs of records, since most of these pairs are going to be non-matches. Over these non-matches we compute the distribution of `ComparisonLevel`s for each `Comparison`.
For instance, for `gender`, we would find that the gender matches 50% of the time and mismatches 50% of the time. For `dob`, on the other hand, we would find that the `dob` matches 1% of the time, has a "one character difference" 3% of the time, and everything else happens 96% of the time.
The larger the random sample, the more accurate the estimates. You control this using the `target_rows` parameter. For large datasets, we recommend using at least 10 million rows; the higher the better, and 1 billion is often appropriate for larger datasets.
linker.estimate_u_using_random_sampling(target_rows=1e6)
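Conceptually, the `u` probability for, say, an exact-match level on `city` is just the share of randomly drawn record pairs whose cities happen to agree by chance. A toy illustration of the idea in plain pandas (not Splink's actual implementation, and it ignores the rare sampled pairs that are true matches):

# Draw two independent random samples and compare them element-wise,
# approximating random pairs of records.
left = df.sample(5_000, replace=True, random_state=1).reset_index(drop=True)
right = df.sample(5_000, replace=True, random_state=2).reset_index(drop=True)

u_exact_city = (left["city"] == right["city"]).mean()
print(u_exact_city)  # approximate u for an 'exact match on city' level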
Estimation of `m` probabilities
`m` is the trickiest of the parameters to estimate, because we need some idea of what the true matches are. If we have labels, we can estimate it directly. If we do not have labelled data, the `m` parameters can be estimated using an iterative maximum likelihood approach called Expectation Maximisation.
Estimating directly
If we have labels, we can estimate `m` directly using the `estimate_m_from_label_column` method of the linker.
For example, if the entity being matched is persons, and your input dataset(s) contain social security number, this could be used to estimate the m values for the model.
Note that this column does not need to be fully populated. A common case is where a unique identifier such as social security number is only partially populated.
For example (in this tutorial we don't have labels, so we're not actually going to use this):
linker.estimate_m_from_label_column("social_security_number")
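The idea is that pairs of records sharing the same non-null label value can be treated as true matches, and the `m` values read off from how those pairs distribute across the comparison levels. A toy sketch of that idea with a hypothetical `social_security_number` column (remember, our tutorial data has no such column, so this is illustrative only):

# Hypothetical: pairs sharing a social_security_number are treated as true matches.
labelled = df.dropna(subset=["social_security_number"])
pairs = labelled.merge(labelled, on="social_security_number", suffixes=("_l", "_r"))
pairs = pairs[pairs["unique_id_l"] < pairs["unique_id_r"]]  # count each pair once

# m for an 'exact match on city' level = share of true matches where city agrees
m_exact_city = (pairs["city_l"] == pairs["city_r"]).mean()
print(m_exact_city)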
Estimating with Expectation Maximisation
This algorithm estimates the `m` values by generating pairwise record comparisons and using them to maximise a likelihood function.
Each estimation pass requires the user to configure an estimation blocking rule to reduce the number of record comparisons generated to a manageable level.
In our first estimation pass, we block on `first_name` and `surname`, meaning we will generate all record comparisons that have `first_name` and `surname` exactly equal.
Recall we are trying to estimate the `m` values of the model, i.e. the proportion of records falling into each `ComparisonLevel` amongst truly matching records.
This means that, in this training session, we cannot estimate the parameters for the `first_name` or `surname` comparisons, since they will be equal for all the record comparisons we generate. We can, however, estimate the parameters for all of the other columns. The output messages produced by Splink confirm this.
training_blocking_rule = "l.first_name = r.first_name and l.surname = r.surname"
training_session_fname_sname = linker.estimate_parameters_using_expectation_maximisation(training_blocking_rule)
In a second estimation pass, we block on `dob`. This allows us to estimate parameters for the `first_name` and `surname` comparisons.
Between the two estimation passes, we now have parameter estimates for all comparisons.
training_blocking_rule = "l.dob = r.dob"
training_session_dob = linker.estimate_parameters_using_expectation_maximisation(training_blocking_rule)
Note that Splink includes other algorithms for estimating m and u values, which are documented here.
The final estimated parameters can be inspected visually using the linker's built-in charts:

linker.match_weights_chart()

linker.m_u_parameters_chart()
Saving the model
We can save the model, including our estimated parameters, to a `.json` file, so we can use it in the next tutorial.
settings = linker.save_settings_to_json("./demo_settings/saved_model_from_demo.json", overwrite=True)
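To reuse the saved model in a fresh session, one straightforward sketch (your Splink version may also provide a dedicated load method) is to read the JSON back into a dictionary and pass it to the linker constructor as before:

import json

# Read the saved settings (which include the estimated parameters) back in
# and construct a new linker from them; `df` is the same input dataframe.
with open("./demo_settings/saved_model_from_demo.json") as f:
    saved_settings = json.load(f)

linker = DuckDBLinker(df, saved_settings)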
Detecting unlinkable records
An interesting application of our trained model that is useful to explore before making any predictions is to detect 'unlinkable' records.
Unlinkable records are those which do not contain enough information to be linked. A simple example would be a record containing only 'John Smith', and null in all other fields. This record may link to other records, but we'll never know, because there's not enough information to disambiguate any potential links. Unlinkable records can be found by linking records to themselves: if, even when matched to themselves, they don't meet the match threshold score, we can be sure they will never link to anything.
linker.unlinkables_chart()
In the above chart, we can see that about 1.3% of records in the input dataset are unlinkable at a threshold match weight of 6.11 (corresponding to a match probability of around 98.6%).
Next steps
Now that we have trained a model, we can move on to using it to predict matching records.