
Evaluation of prediction results

In the previous tutorial, we looked at various ways to visualise the results of our model.

These are useful for evaluating a linkage pipeline because they allow us to understand how our model works and verify that it is doing something sensible. They can also be useful to identify examples where the model is not performing as expected.

In addition to these spot checks, Splink also has functions to perform more formal accuracy analysis. These functions allow you to understand the likely prevalence of false positives and false negatives in your linkage models.

They rely on the existence of a sample of labelled (ground truth) matches, which may have been produced, for example, by human labellers. For the accuracy analysis to be unbiased, the sample should be representative of the overall dataset.

# Rerun our predictions so we're ready to view the charts
from splink.duckdb.linker import DuckDBLinker
from splink.datasets import splink_datasets

import altair as alt

df = splink_datasets.fake_1000
linker = DuckDBLinker(df)
linker.load_model("../demo_settings/saved_model_from_demo.json")
df_predictions = linker.predict(threshold_match_probability=0.2)
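Before evaluating, it can be worth a quick sanity check on the predictions. A minimal, optional sketch: convert the Splink results to pandas (using the same as_pandas_dataframe method used later in this tutorial) and inspect the standard match_weight and match_probability output columns.

# Optional sanity check on the predictions before evaluation
predictions_pd = df_predictions.as_pandas_dataframe()
print(f"{len(predictions_pd)} pairwise comparisons above the 0.2 threshold")
predictions_pd[["match_weight", "match_probability"]].describe()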

Load in labels

The labels file contains a list of pairwise comparisons which represent matches and non-matches.

The required format of the labels file is described here.

from splink.datasets import splink_dataset_labels

df_labels = splink_dataset_labels.fake_1000_labels
df_labels.head(5)
labels_table = linker.register_labels_table(df_labels)
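If you have produced your own clerical labels, they can be registered in the same way. Below is a minimal sketch, assuming a pandas DataFrame with the same pairwise-comparison columns as fake_1000_labels (unique_id_l, source_dataset_l, unique_id_r, source_dataset_r, clerical_match_score); the specific ids shown are hypothetical.

import pandas as pd

# Hypothetical labelled pairs: clerical_match_score of 1.0 = match, 0.0 = non-match
my_labels = pd.DataFrame([
    {"unique_id_l": 0, "source_dataset_l": "fake_1000",
     "unique_id_r": 1, "source_dataset_r": "fake_1000", "clerical_match_score": 1.0},
    {"unique_id_l": 0, "source_dataset_l": "fake_1000",
     "unique_id_r": 12, "source_dataset_r": "fake_1000", "clerical_match_score": 0.0},
])
my_labels_table = linker.register_labels_table(my_labels)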

Receiver operating characteristic curve

A ROC chart shows how the number of false positives and false negatives varies depending on the match threshold chosen. The match threshold is the match weight used as a cutoff: pairwise comparisons scoring above it are accepted as matches.

linker.roc_chart_from_labels_table(labels_table)

Precision-recall chart

An alternative representation of truth space is the precision-recall curve.

This can be plotted as follows:

linker.precision_recall_chart_from_labels_table(labels_table)
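As a reminder of what the two axes mean: precision is the proportion of predicted matches that are true matches, and recall is the proportion of true matches that are found. A small plain-Python illustration (not part of the Splink API), using the tp, fp and fn counts that appear in the truth table below:

def precision_recall(tp, fp, fn):
    # Precision: of the pairs predicted to be matches, how many really are?
    precision = tp / (tp + fp)
    # Recall: of the true matches, how many did we find?
    recall = tp / (tp + fn)
    return precision, recall

# Matches the first row of the truth table below: (0.639..., 1.0)
precision_recall(tp=2031, fp=1145, fn=0)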

Truth table

Finally, Splink can also report the underlying table used to construct the ROC and precision-recall curves.

roc_table = linker.truth_space_table_from_labels_table(labels_table)
roc_table.as_pandas_dataframe(limit=5)
truth_threshold match_probability row_count p n tp tn fp fn P_rate ... precision recall specificity npv accuracy f1 f2 f0_5 p4 phi
0 -26.442571 1.096460e-08 3176.0 2031.0 1145.0 2031.0 0.0 1145.0 0.0 0.639484 ... 0.639484 1.0 0.000000 1.0 0.639484 0.780104 0.898673 0.689175 0.000000 0.000000
1 -25.337736 2.358204e-08 3176.0 2031.0 1145.0 2031.0 47.0 1098.0 0.0 0.639484 ... 0.649089 1.0 0.041048 1.0 0.654282 0.787209 0.902426 0.698082 0.143357 0.163229
2 -24.371460 4.607438e-08 3176.0 2031.0 1145.0 2031.0 154.0 991.0 0.0 0.639484 ... 0.672071 1.0 0.134498 1.0 0.687972 0.803879 0.911089 0.719244 0.366200 0.300653
3 -24.370218 4.611406e-08 3176.0 2031.0 1145.0 2031.0 199.0 946.0 0.0 0.639484 ... 0.682230 1.0 0.173799 1.0 0.702141 0.811102 0.914782 0.728531 0.433861 0.344341
4 -23.939989 6.213625e-08 3176.0 2031.0 1145.0 2031.0 230.0 915.0 0.0 0.639484 ... 0.689409 1.0 0.200873 1.0 0.711902 0.816154 0.917344 0.735071 0.474565 0.372134

5 rows × 25 columns
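The truth space table is a SplinkDataFrame like the predictions, so it can be post-processed with ordinary pandas to help choose a match threshold. For example, a minimal sketch (assuming you want to optimise the F1 score) that picks out the best-scoring threshold:

truth_df = roc_table.as_pandas_dataframe()

# Row with the highest F1 score
best_row = truth_df.loc[truth_df["f1"].idxmax()]
print(f"Best match weight threshold: {best_row['truth_threshold']:.2f}")
print(f"Precision: {best_row['precision']:.3f}, recall: {best_row['recall']:.3f}")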

Further Reading

For more on the quality assurance tools in Splink, please refer to the Evaluation API documentation.

📊 For more on the charts used in this tutorial, please refer to the Charts Gallery.

For more on the Evaluation Metrics used in this tutorial, please refer to the Edge Metrics guide.

That's it!

That wraps up the Splink tutorial! Don't worry, there are still plenty of resources to help you on the next steps of your Splink journey:

For some end-to-end notebooks of Splink pipelines, check out our Examples

For more deep dives into the different aspects of Splink, and record linkage more generally, check out our Topic Guides

For a reference on all the functionality available in Splink, see our Documentation