Fig. 1
From: “Won’t get fooled again”: statistical fault detection in COVID-19 Latin American data

Comparing continuous probability distributions with different levels of divergence, as measured by the Kullback-Leibler divergence (KLD). A shows two probability distributions with low divergence (KLD = 0.02), meaning that little additional information is required to encode p(x1) using a code optimized for p(x2). B displays two distributions with a higher divergence (KLD = 0.21).
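
As an illustration only (not from the paper), here is a minimal Python sketch of how KLD values like those in the caption can be computed by discretizing two densities on a common grid. The Gaussian locations and scales below are hypothetical choices; they are picked so that the closed-form Gaussian result, KLD = (mu1 - mu2)^2 / (2*sigma^2), lands near the caption's 0.02 and 0.21. The actual distributions behind the figure are not specified in the caption.

```python
import numpy as np
from scipy.stats import norm

# Common evaluation grid for discretizing the continuous densities.
x = np.linspace(-10, 10, 2001)

def kld(p, q):
    """KL divergence KLD(p || q) = sum p * log(p / q), in nats,
    between two densities discretized on the same grid."""
    p = p / p.sum()  # normalize to probability mass functions
    q = q / q.sum()
    mask = p > 0     # terms with p == 0 contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Panel-A-like case: nearly overlapping distributions -> small KLD (~0.02)
p1 = norm.pdf(x, loc=0.0, scale=1.0)
p2 = norm.pdf(x, loc=0.2, scale=1.0)
print(f"low divergence:  KLD = {kld(p1, p2):.2f}")

# Panel-B-like case: more separated distributions -> larger KLD (~0.21)
q1 = norm.pdf(x, loc=0.0, scale=1.0)
q2 = norm.pdf(x, loc=0.65, scale=1.0)
print(f"high divergence: KLD = {kld(q1, q2):.2f}")
```

Note that KLD is asymmetric: kld(p1, p2) and kld(p2, p1) generally differ, since the divergence measures the extra information needed to encode samples from the first distribution using a code optimized for the second.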