Deduplicate 50k rows of historical persons
Linking a dataset of real historical persons
In this example, we deduplicate a more realistic dataset. The data is based on historical persons scraped from Wikidata. Duplicate records have been introduced, containing a variety of deliberate errors.
First, create a boto3 session to be used within the linker:
import boto3
boto3_session = boto3.Session(region_name="eu-west-1")
AthenaLinker Setup
To work nicely with Athena, you need to specify various filepaths, buckets and the database(s) you wish to interact with.
The AthenaLinker has three required inputs:

- input_table_or_tables - the input table(s) to use for linking. This can be either a table in a database or a pandas dataframe
- output_database - the database to output all of your splink tables to
- output_bucket - the s3 bucket to which any parquet files produced by splink will be written

and two optional inputs:

- output_filepath - the s3 filepath to output files to. This is an extension of output_bucket and dictates the full filepath your files will be written to
- input_table_aliases - the name to give your table within your database, should you choose to use a pandas dataframe as an input
# Set the output bucket and the additional filepath to write outputs to
############################################
# EDIT THESE BEFORE ATTEMPTING TO RUN THIS #
############################################
from splink.backends.athena import AthenaAPI
bucket = "MYTESTBUCKET"
database = "MYTESTDATABASE"
filepath = "MYTESTFILEPATH" # file path inside of your bucket
aws_filepath = f"s3://{bucket}/{filepath}"
db_api = AthenaAPI(
boto3_session,
output_bucket=bucket,
output_database=database,
output_filepath=filepath,
)
import splink.comparison_library as cl
from splink import block_on
from splink import Linker, SettingsCreator, splink_datasets
df = splink_datasets.historical_50k
settings = SettingsCreator(
link_type="dedupe_only",
blocking_rules_to_generate_predictions=[
block_on("first_name", "surname"),
block_on("surname", "dob"),
],
comparisons=[
cl.ExactMatch("first_name").configure(term_frequency_adjustments=True),
cl.LevenshteinAtThresholds("surname", [1, 3]),
cl.LevenshteinAtThresholds("dob", [1, 2]),
cl.LevenshteinAtThresholds("postcode_fake", [1, 2]),
cl.ExactMatch("birth_place").configure(term_frequency_adjustments=True),
cl.ExactMatch("occupation").configure(term_frequency_adjustments=True),
],
retain_intermediate_calculation_columns=True,
)
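The LevenshteinAtThresholds comparisons above partition record pairs into levels by edit distance: an exact-match level, one level per threshold, and a catch-all "else" level. The sketch below (plain Python, not Splink's internals, which run as SQL in the backend) illustrates how a pair of "dob" values would be bucketed:

```python
# Sketch (not Splink's internals): how LevenshteinAtThresholds("dob", [1, 2])
# partitions record pairs into comparison levels by edit distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def comparison_level(val_l: str, val_r: str, thresholds=(1, 2)) -> str:
    if val_l == val_r:
        return "exact match"
    d = levenshtein(val_l, val_r)
    for t in thresholds:
        if d <= t:
            return f"levenshtein <= {t}"
    return "all other comparisons"

print(comparison_level("1992-01-01", "1992-01-01"))  # exact match
print(comparison_level("1992-01-01", "1992-01-02"))  # levenshtein <= 1
print(comparison_level("1992-01-01", "1892-11-02"))  # all other comparisons
```

Each level gets its own m and u probabilities during training, so finer thresholds let the model distinguish near-misses (likely typos) from genuinely different values.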
from splink.exploratory import profile_columns
profile_columns(df, db_api, column_expressions=["first_name", "substr(surname,1,2)"])
from splink.blocking_analysis import (
cumulative_comparisons_to_be_scored_from_blocking_rules_chart,
)
from splink import block_on
cumulative_comparisons_to_be_scored_from_blocking_rules_chart(
table_or_tables=df,
db_api=db_api,
blocking_rules=[block_on("first_name", "surname"), block_on("surname", "dob")],
link_type="dedupe_only",
)
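For an equality-based blocking rule in a dedupe job, the comparison count the chart reports corresponds to summing n * (n - 1) / 2 over each group of records sharing the blocking key (Splink additionally deduplicates pairs already produced by earlier rules when counting cumulatively). A minimal sketch of that per-rule count, using a hypothetical toy dataframe:

```python
# Sketch of what the cumulative-comparisons chart counts per rule: for an
# equality-based blocking rule in a dedupe job, the number of within-dataset
# comparisons is the sum over each blocking-key group of n * (n - 1) / 2.
import pandas as pd

toy = pd.DataFrame(
    {
        "first_name": ["ann", "ann", "ann", "bob", "bob"],
        "surname": ["lee", "lee", "ray", "kim", "kim"],
    }
)

def comparisons_for_rule(df: pd.DataFrame, cols: list[str]) -> int:
    counts = df.groupby(cols).size()
    return int((counts * (counts - 1) // 2).sum())

print(comparisons_for_rule(toy, ["first_name", "surname"]))  # 2
print(comparisons_for_rule(toy, ["first_name"]))             # 4
```

Tighter rules (more columns in block_on) shrink the comparison count, which is the trade-off the chart is there to help you inspect.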
linker = Linker(df, settings, db_api=db_api)
linker.training.estimate_probability_two_random_records_match(
[
block_on("first_name", "surname", "dob"),
block_on("substr(first_name,1,2)", "surname", "substr(postcode_fake, 1,2)"),
block_on("dob", "postcode_fake"),
],
recall=0.6,
)
Probability two random records match is estimated to be 0.000136.
This means that amongst all possible pairwise record comparisons, one in 7,362.31 are expected to match. With 1,279,041,753 total possible comparisons, we expect a total of around 173,728.33 matching pairs
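The figures in the log line above are consistent with each other (up to rounding of the displayed lambda), which is quick to verify:

```python
# Checking the arithmetic in the log above: one in 1/lambda comparisons is
# expected to match, so dividing the total number of pairwise comparisons by
# that ratio gives the expected number of matching pairs.
total_comparisons = 1_279_041_753
one_in = 7_362.31  # i.e. 1 / lambda, as reported in the log
expected_matches = total_comparisons / one_in
print(expected_matches)  # ~173,728, matching the log up to rounding
```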
linker.training.estimate_u_using_random_sampling(max_pairs=5e6)
----- Estimating u probabilities using random sampling -----
Estimated u probabilities using random sampling
Your model is not yet fully trained. Missing estimates for:
- first_name (no m values are trained).
- surname (no m values are trained).
- dob (no m values are trained).
- postcode_fake (no m values are trained).
- birth_place (no m values are trained).
- occupation (no m values are trained).
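The u probabilities estimated above are the chances that two non-matching records agree on a comparison level by coincidence. Random sampling works because nearly all randomly drawn pairs are non-matches. For an exact-match level on a categorical column, the analytic value is the sum of squared value frequencies, as this sketch (with hypothetical frequencies, not Splink's implementation) shows:

```python
# Sketch of the idea behind estimate_u_using_random_sampling: the u probability
# of an "exact match" level is the chance two *non-matching* records agree by
# coincidence. Sampling random pairs approximates this because almost all
# random pairs are non-matches.
import random

random.seed(0)
# Hypothetical occupation values with skewed frequencies
values = ["politician"] * 50 + ["writer"] * 30 + ["painter"] * 20

# Analytic value: probability two independent draws agree = sum of squared freqs
analytic_u = sum((values.count(v) / len(values)) ** 2 for v in set(values))

# Monte Carlo estimate from random pairs
n_pairs = 100_000
agree = sum(random.choice(values) == random.choice(values) for _ in range(n_pairs))
print(analytic_u)         # 0.38
print(agree / n_pairs)    # close to 0.38
```

This is also why a larger max_pairs gives more stable u estimates: the Monte Carlo error shrinks with the number of sampled pairs.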
blocking_rule = block_on("first_name", "surname")
training_session_names = (
linker.training.estimate_parameters_using_expectation_maximisation(blocking_rule)
)
----- Starting EM training session -----
Estimating the m probabilities of the model by blocking on:
(l."first_name" = r."first_name") AND (l."surname" = r."surname")
Parameter estimates will be made for the following comparison(s):
- dob
- postcode_fake
- birth_place
- occupation
Parameter estimates cannot be made for the following comparison(s) since they are used in the blocking rules:
- first_name
- surname
Iteration 1: Largest change in params was -0.526 in probability_two_random_records_match
Iteration 2: Largest change in params was -0.0321 in probability_two_random_records_match
Iteration 3: Largest change in params was 0.0109 in the m_probability of birth_place, level `Exact match on birth_place`
Iteration 4: Largest change in params was -0.00341 in the m_probability of birth_place, level `All other comparisons`
Iteration 5: Largest change in params was -0.00116 in the m_probability of dob, level `All other comparisons`
Iteration 6: Largest change in params was -0.000547 in the m_probability of dob, level `All other comparisons`
Iteration 7: Largest change in params was -0.00029 in the m_probability of dob, level `All other comparisons`
Iteration 8: Largest change in params was -0.000169 in the m_probability of dob, level `All other comparisons`
Iteration 9: Largest change in params was -0.000105 in the m_probability of dob, level `All other comparisons`
Iteration 10: Largest change in params was -6.87e-05 in the m_probability of dob, level `All other comparisons`
EM converged after 10 iterations
Your model is not yet fully trained. Missing estimates for:
- first_name (no m values are trained).
- surname (no m values are trained).
blocking_rule = block_on("dob")
training_session_dob = (
linker.training.estimate_parameters_using_expectation_maximisation(blocking_rule)
)
----- Starting EM training session -----
Estimating the m probabilities of the model by blocking on:
l."dob" = r."dob"
Parameter estimates will be made for the following comparison(s):
- first_name
- surname
- postcode_fake
- birth_place
- occupation
Parameter estimates cannot be made for the following comparison(s) since they are used in the blocking rules:
- dob
Iteration 1: Largest change in params was -0.355 in the m_probability of first_name, level `Exact match on first_name`
Iteration 2: Largest change in params was -0.0383 in the m_probability of first_name, level `Exact match on first_name`
Iteration 3: Largest change in params was 0.00531 in the m_probability of postcode_fake, level `All other comparisons`
Iteration 4: Largest change in params was 0.00129 in the m_probability of postcode_fake, level `All other comparisons`
Iteration 5: Largest change in params was 0.00034 in the m_probability of surname, level `All other comparisons`
Iteration 6: Largest change in params was 8.9e-05 in the m_probability of surname, level `All other comparisons`
EM converged after 6 iterations
Your model is fully trained. All comparisons have at least one estimate for their m and u values
linker.visualisations.match_weights_chart()
linker.evaluation.unlinkables_chart()
df_predict = linker.inference.predict()
df_e = df_predict.as_pandas_dataframe(limit=5)
df_e
| | match_weight | match_probability | unique_id_l | unique_id_r | first_name_l | first_name_r | gamma_first_name | tf_first_name_l | tf_first_name_r | bf_first_name | ... | bf_birth_place | bf_tf_adj_birth_place | occupation_l | occupation_r | gamma_occupation | tf_occupation_l | tf_occupation_r | bf_occupation | bf_tf_adj_occupation | match_key |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 27.149493 | 1.000000 | Q2296770-1 | Q2296770-12 | thomas | rhomas | 0 | 0.028667 | 0.000059 | 0.455194 | ... | 160.713933 | 4.179108 | politician | politician | 1 | 0.088932 | 0.088932 | 22.916859 | 0.441273 | 1 |
1 | 1.627242 | 0.755454 | Q2296770-1 | Q2296770-15 | thomas | clifford, | 0 | 0.028667 | 0.000020 | 0.455194 | ... | 0.154550 | 1.000000 | politician | <NA> | -1 | 0.088932 | NaN | 1.000000 | 1.000000 | 1 |
2 | 29.206505 | 1.000000 | Q2296770-1 | Q2296770-3 | thomas | tom | 0 | 0.028667 | 0.012948 | 0.455194 | ... | 160.713933 | 4.179108 | politician | politician | 1 | 0.088932 | 0.088932 | 22.916859 | 0.441273 | 1 |
3 | 13.783027 | 0.999929 | Q2296770-1 | Q2296770-7 | thomas | tom | 0 | 0.028667 | 0.012948 | 0.455194 | ... | 0.154550 | 1.000000 | politician | <NA> | -1 | 0.088932 | NaN | 1.000000 | 1.000000 | 1 |
4 | 29.206505 | 1.000000 | Q2296770-2 | Q2296770-3 | thomas | tom | 0 | 0.028667 | 0.012948 | 0.455194 | ... | 160.713933 | 4.179108 | politician | politician | 1 | 0.088932 | 0.088932 | 22.916859 | 0.441273 | 1 |
5 rows × 38 columns
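The match_weight and match_probability columns in the table above are two views of the same quantity: the match weight is the log2 Bayes factor, so probability = 2**weight / (1 + 2**weight). A quick check against the rows above:

```python
# match_weight is log2 of the overall Bayes factor, so converting back to a
# probability is: p = 2**w / (1 + 2**w).
def weight_to_probability(w: float) -> float:
    return 2**w / (1 + 2**w)

print(weight_to_probability(1.627242))   # ~0.7555, as in row 1
print(weight_to_probability(27.149493))  # ~1.0, as in row 0
```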
You can also view rows in this dataset as a waterfall chart as follows:
records_to_plot = df_e.to_dict(orient="records")
linker.visualisations.waterfall_chart(records_to_plot, filter_nulls=False)
clusters = linker.clustering.cluster_pairwise_predictions_at_threshold(
df_predict, threshold_match_probability=0.95
)
Completed iteration 1, root rows count 641
Completed iteration 2, root rows count 187
Completed iteration 3, root rows count 251
Completed iteration 4, root rows count 75
Completed iteration 5, root rows count 23
Completed iteration 6, root rows count 30
Completed iteration 7, root rows count 34
Completed iteration 8, root rows count 30
Completed iteration 9, root rows count 9
Completed iteration 10, root rows count 5
Completed iteration 11, root rows count 0
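Conceptually, the clustering step keeps only the pairwise predictions whose match probability clears the threshold, then takes connected components of the resulting graph. Splink does this with an iterative SQL algorithm (hence the per-iteration log lines above); the sketch below shows the same idea with plain union-find on hypothetical edges:

```python
# Sketch of what cluster_pairwise_predictions_at_threshold does conceptually:
# keep edges whose match probability clears the threshold, then take connected
# components (Splink uses an iterative SQL algorithm; this is plain union-find).
def cluster(edges, threshold):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for id_l, id_r, prob in edges:
        if prob >= threshold:
            parent[find(id_l)] = find(id_r)

    clusters = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return sorted(map(sorted, clusters.values()))

edges = [("a", "b", 0.99), ("b", "c", 0.97), ("d", "e", 0.40)]
print(cluster(edges, 0.95))  # [['a', 'b', 'c']]
```

Note that this sketch only sees records that appear in a surviving edge; Splink's output additionally keeps unlinked records as singleton clusters.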
linker.visualisations.cluster_studio_dashboard(
df_predict,
clusters,
"dashboards/50k_cluster.html",
sampling_method="by_cluster_size",
overwrite=True,
)
from IPython.display import IFrame
IFrame(src="./dashboards/50k_cluster.html", width="100%", height=1200)