Deduplicating the febrl3 dataset
See A.2 here and here for the source of this data.
import pandas as pd
import altair as alt
from splink.datasets import splink_datasets
df = splink_datasets.febrl3
df = df.rename(columns=lambda x: x.strip())
df["cluster"] = df["rec_id"].apply(lambda x: "-".join(x.split('-')[:2]))
# date_of_birth, soc_sec_id and postcode need to be strings so that fuzzy comparisons such as levenshtein can be applied
df["date_of_birth"] = df["date_of_birth"].astype(str).str.strip()
df["date_of_birth"] = df["date_of_birth"].replace("", None)
df["soc_sec_id"] = df["soc_sec_id"].astype(str).str.strip()
df["soc_sec_id"] = df["soc_sec_id"].replace("", None)
df["postcode"] = df["postcode"].astype(str).str.strip()
df["postcode"] = df["postcode"].replace("", None)
df.head(2)
| | rec_id | given_name | surname | street_number | address_1 | address_2 | suburb | postcode | state | date_of_birth | soc_sec_id | cluster |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | rec-1496-org | mitchell | green | 7 | wallaby place | delmar | cleveland | 2119 | sa | 19560409 | 1804974 | rec-1496 |
| 1 | rec-552-dup-3 | harley | mccarthy | 177 | pridhamstreet | milton | marsden | 3165 | nsw | 19080419 | 6089216 | rec-552 |
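The `cluster` column recovers the ground-truth entity for each row by dropping the `-org` / `-dup-N` suffix from `rec_id`, so for example `rec-552-dup-3` maps to `rec-552`. A quick illustrative check (the `rec-552-org` id is assumed from the standard febrl naming scheme rather than taken from the output above):
example_ids = ["rec-552-org", "rec-552-dup-3"]  # hypothetical ids following the febrl naming scheme
print(["-".join(x.split("-")[:2]) for x in example_ids])  # ['rec-552', 'rec-552'] - both map to the same cluster
print(df["cluster"].nunique(), "ground-truth entities among", len(df), "records")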
from splink.duckdb.linker import DuckDBLinker
settings = {
"unique_id_column_name": "rec_id",
"link_type": "dedupe_only",
}
linker = DuckDBLinker(df, settings)
linker.missingness_chart()
linker.profile_columns(list(df.columns))
from splink.duckdb.blocking_rule_library import block_on
blocking_rules = [
block_on("soc_sec_id"),
block_on("given_name"),
block_on("surname"),
block_on("date_of_birth"),
block_on("postcode"),
]
linker.cumulative_num_comparisons_from_blocking_rules_chart(blocking_rules)
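It can also be useful to see how many comparisons each rule generates in isolation, rather than cumulatively. A minimal sketch using `count_num_comparisons_from_blocking_rule`; the rules are written here as raw SQL strings, and the call pattern should be treated as an assumption about the Splink 3 API:
# Illustrative: number of comparisons generated by individual blocking rules
for rule in ["l.soc_sec_id = r.soc_sec_id", "l.date_of_birth = r.date_of_birth"]:
    print(rule, "->", linker.count_num_comparisons_from_blocking_rule(rule))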
from splink.duckdb.linker import DuckDBLinker
import splink.duckdb.comparison_library as cl
import splink.duckdb.comparison_template_library as ctl
settings = {
"unique_id_column_name": "rec_id",
"link_type": "dedupe_only",
"blocking_rules_to_generate_predictions": blocking_rules,
"comparisons": [
ctl.name_comparison("given_name", term_frequency_adjustments=True),
ctl.name_comparison("surname", term_frequency_adjustments=True),
ctl.date_comparison("date_of_birth",
damerau_levenshtein_thresholds=[],
cast_strings_to_date=True,
invalid_dates_as_null=True,
date_format="%Y%m%d"),
cl.levenshtein_at_thresholds("soc_sec_id", [2]),
cl.exact_match("street_number", term_frequency_adjustments=True),
cl.exact_match("postcode", term_frequency_adjustments=True),
],
"retain_intermediate_calculation_columns": True,
}
linker = DuckDBLinker(df, settings)
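Before estimating parameters, it can help to confirm which levels each comparison template has generated. A small illustrative check, assuming the `human_readable_description` property that Splink 3 comparison objects expose:
# Illustrative: print the levels produced by the name comparison template
print(ctl.name_comparison("given_name", term_frequency_adjustments=True).human_readable_description)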
deterministic_rules = [
"l.soc_sec_id = r.soc_sec_id",
"l.given_name = r.given_name and l.surname = r.surname and l.date_of_birth = r.date_of_birth",
"l.given_name = r.surname and l.surname = r.given_name and l.date_of_birth = r.date_of_birth"
]
linker.estimate_probability_two_random_records_match(deterministic_rules, recall=0.9)
Probability two random records match is estimated to be 0.000528.
This means that amongst all possible pairwise record comparisons, one in 1,893.56 are expected to match. With 12,497,500 total possible comparisons, we expect a total of around 6,600.00 matching pairs
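These figures follow directly from the estimated probability and the size of the input data (12,497,500 pairwise comparisons implies 5,000 records); a quick arithmetic check, with small differences due to rounding of the displayed probability:
n = 5000
total_comparisons = n * (n - 1) / 2      # 12,497,500 pairwise comparisons
p = 0.000528                             # estimated probability two random records match
print(1 / p)                             # roughly one in 1,894 comparisons is expected to match
print(total_comparisons * p)             # roughly 6,600 expected matching pairs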
linker.estimate_u_using_random_sampling(max_pairs=1e6)
----- Estimating u probabilities using random sampling -----
Estimated u probabilities using random sampling
Your model is not yet fully trained. Missing estimates for:
- given_name (no m values are trained).
- surname (no m values are trained).
- date_of_birth (no m values are trained).
- soc_sec_id (no m values are trained).
- street_number (no m values are trained).
- postcode (no m values are trained).
em_blocking_rule_1 = block_on("substr(date_of_birth,1,3)")
em_blocking_rule_2 = block_on("substr(postcode,1,2)")
session_dob = linker.estimate_parameters_using_expectation_maximisation(em_blocking_rule_1)
session_postcode = linker.estimate_parameters_using_expectation_maximisation(em_blocking_rule_2)
----- Starting EM training session -----
Estimating the m probabilities of the model by blocking on:
SUBSTR(l."date_of_birth", 1, 3) = SUBSTR(r."date_of_birth", 1, 3)
Parameter estimates will be made for the following comparison(s):
- given_name
- surname
- soc_sec_id
- street_number
- postcode
Parameter estimates cannot be made for the following comparison(s) since they are used in the blocking rules:
- date_of_birth
Iteration 1: Largest change in params was -0.508 in probability_two_random_records_match
Iteration 2: Largest change in params was -0.0388 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 3: Largest change in params was -0.00602 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 4: Largest change in params was -0.000955 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 5: Largest change in params was -0.000155 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 6: Largest change in params was -2.55e-05 in the m_probability of soc_sec_id, level `All other comparisons`
EM converged after 6 iterations
Your model is not yet fully trained. Missing estimates for:
- date_of_birth (no m values are trained).
----- Starting EM training session -----
Estimating the m probabilities of the model by blocking on:
SUBSTR(l."postcode", 1, 2) = SUBSTR(r."postcode", 1, 2)
Parameter estimates will be made for the following comparison(s):
- given_name
- surname
- date_of_birth
- soc_sec_id
- street_number
Parameter estimates cannot be made for the following comparison(s) since they are used in the blocking rules:
- postcode
Iteration 1: Largest change in params was -0.227 in probability_two_random_records_match
Iteration 2: Largest change in params was -0.0159 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 3: Largest change in params was -0.001 in the m_probability of soc_sec_id, level `All other comparisons`
Iteration 4: Largest change in params was -7.04e-05 in the m_probability of soc_sec_id, level `All other comparisons`
EM converged after 4 iterations
Your model is fully trained. All comparisons have at least one estimate for their m and u values
linker.match_weights_chart()
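At this point the trained model can be inspected parameter by parameter and, if desired, saved for reuse in a later session. A minimal sketch (the filename is illustrative):
linker.m_u_parameters_chart()  # inspect the estimated m and u probabilities directly
linker.save_model_to_json("febrl3_model.json", overwrite=True)  # illustrative filename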
results = linker.predict(threshold_match_probability=0.2)
linker.roc_chart_from_labels_column("cluster")
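The ROC chart is computed by sweeping the match threshold against the ground-truth labels in `cluster`. If the underlying precision and recall figures are needed as data, the truth-space table can be pulled out directly; a sketch, assuming Splink 3's `truth_space_table_from_labels_column` method:
truth_space = linker.truth_space_table_from_labels_column("cluster").as_pandas_dataframe()
truth_space.head()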
pred_errors_df = linker.prediction_errors_from_labels_column("cluster").as_pandas_dataframe()
len(pred_errors_df)
pred_errors_df.head()
| | clerical_match_score | found_by_blocking_rules | match_weight | match_probability | rec_id_l | rec_id_r | given_name_l | given_name_r | gamma_given_name | tf_given_name_l | ... | postcode_l | postcode_r | gamma_postcode | tf_postcode_l | tf_postcode_r | bf_postcode | bf_tf_adj_postcode | cluster_l | cluster_r | match_key |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.0 | True | -8.735600 | 0.002340 | rec-1320-dup-1 | rec-1320-dup-4 | amber | kexel | 0 | 0.0044 | ... | 461 | 4061 | 0 | 0.0002 | 0.0006 | 0.216174 | 1.0 | rec-1320 | rec-1320 | 0 |
| 1 | 1.0 | True | -3.475139 | 0.082505 | rec-941-dup-0 | rec-941-dup-3 | coby | cobuy | 3 | 0.0010 | ... | 3078 | 3088 | 0 | 0.0010 | 0.0008 | 0.216174 | 1.0 | rec-941 | rec-941 | 0 |
| 2 | 1.0 | True | -0.199954 | 0.465406 | rec-1899-dup-0 | rec-1899-org | thomas | matthew | 0 | 0.0094 | ... | 6117 | 6171 | 0 | 0.0002 | 0.0004 | 0.216174 | 1.0 | rec-1899 | rec-1899 | 0 |
| 3 | 1.0 | True | -5.459610 | 0.022220 | rec-1727-dup-1 | rec-1727-org | campblel | joshua | 0 | 0.0002 | ... | 3189 | 3198 | 0 | 0.0008 | 0.0008 | 0.216174 | 1.0 | rec-1727 | rec-1727 | 0 |
| 4 | 1.0 | True | -6.614888 | 0.010100 | rec-75-dup-0 | rec-75-dup-4 | samara | willing | 0 | 0.0014 | ... | 3765 | 3756 | 0 | 0.0012 | 0.0004 | 0.216174 | 1.0 | rec-75 | rec-75 | 0 |
5 rows × 45 columns
records = linker.prediction_errors_from_labels_column("cluster").as_record_dict(limit=10)
linker.waterfall_chart(records)
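Finally, the pairwise predictions can be resolved into connected clusters of records, which is usually the end product of a deduplication job. A minimal sketch, assuming a match probability threshold of 0.95 (the threshold is illustrative, not tuned):
clusters = linker.cluster_pairwise_predictions_at_threshold(results, threshold_match_probability=0.95)
clusters.as_pandas_dataframe().head()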