Early in my career I specialised in Computer Security, and more specifically Data Security. On one particular engagement I was confronted with a mainframe-based system that had virtually no audit log capability and very limited access control, and the suspicion was that staff were being paid off to alter transactional data.

The tools I had at my disposal were Microsoft Access, a basic CSV transaction log and a copy of Borland Delphi. I focussed on analysing and detecting changes in the processing volumes of data operators as an indication of suspicious activity, with some good success. Looking back, I wish I had known about Benford's Law, as it would certainly have made my life much easier. Now, 20 years later, I work extensively in global payroll within the Microsoft Dynamics 365 ERP market, and while the threat of fraud remains, the tools and processing capability have advanced and improved dramatically.

From Wikipedia: “Benford’s law, also called Newcomb-Benford’s law, law of anomalous numbers, and first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data. The law states that in many naturally occurring collections of numbers, the leading significant digit is likely to be small. For example, in sets that obey the law, the number 1 appears as the most significant digit about 30% of the time, while 9 appears as the most significant digit less than 5% of the time. If the digits were distributed uniformly, they would each occur about 11.1% of the time. Benford’s law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.”
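The expected first-digit frequencies quoted above follow directly from the formula P(d) = log10(1 + 1/d), which can be computed in a few lines of Python:

```python
import math

# Expected first-digit probabilities under Benford's Law: P(d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(f"{d}: {p:.1%}")
```

Running this reproduces the figures from the definition: digit 1 at about 30.1%, tapering down to digit 9 at about 4.6%.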

Payroll data, as with any ERP financial data, can consist of thousands or hundreds of thousands of transactions per pay run. Consider a typical worker with 10 to 15 different payments (or allowances) across a workforce of 5,000 workers: that generates 75,000 or more transactions per pay run. Pay runs can be weekly, fortnightly or monthly, so a weekly cycle produces 75,000 x 4 transactions per month, and auditing that volume presents a significant workload problem. Spot-checking becomes unfeasible unless you can narrow your focus to transactions that may require further scrutiny.

Consider a policy requiring approval of expenses that exceed $300. As long as you submit expenses totalling no more than $290 or so, you might be able to sneak them through every so often, and while this is no heist, the amounts can still add up over time. Anti-Money Laundering systems often use hundreds of such rules; a typical one flags money transfers exceeding a cut-off of $10,000 for bank approval. If you travel internationally often enough, you'll see that $10,000 amount on arrival and departure cards all the time.

Let's take a few thousand rows of allowance data, which include salary and miscellaneous allowances, and sanitize it by removing the identifying columns, leaving only the amount column.

Our test data is shown below.

I'll be using a Python library available here that implements Benford's Law by testing the null hypothesis that our data follows the expected digit distribution, and displaying a graph of the result. A screenshot of the script is shown below, running in Visual Studio Code on Ubuntu Linux.

I've modified the script and run it against our clean, non-modified data; the resulting digit distribution is shown below.

We can see a fairly good match to the expected distribution curve, with digit '6' slightly elevated and '5' a bit low, but still within a fairly normal range. You need to understand the data fairly well to explain deviations such as this: here it could be that all employees receive a single allowance fixed at $60, producing the elevation. We are experimenting here, after all; don't assume you can load a bunch of numbers from a spreadsheet and this script will become your magic fraud-detection silver bullet.
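Under the hood, a test like this boils down to counting leading digits and comparing the observed counts against Benford's expectation, typically with a chi-square statistic. This is not the library's actual code, just a stdlib-only sketch of the idea:

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a number, assuming a plain
    decimal representation (not scientific notation)."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s and s[0].isdigit() else None

def benford_test(amounts):
    """Chi-square comparison of observed first-digit counts against
    Benford's Law (8 degrees of freedom for digits 1-9)."""
    digits = [d for d in (first_digit(a) for a in amounts) if d]
    counts = Counter(digits)
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    # Critical value for alpha = 0.05 with 8 df is ~15.51; a chi2 above
    # this suggests the data deviates from Benford's Law.
    return chi2, chi2 > 15.507
```

A small chi-square means we cannot reject the null hypothesis, which is what we expect from clean data like ours.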

Let's manually modify our test data by replacing some allowances with random numbers. An extract is shown below; notice the numerous 4xx amounts now occurring (my manually modified amounts).

Running our script again produces the plot below, clearly indicating an elevation of digit '4' well beyond its naturally expected frequency. Other digits are also off as a consequence, especially '7'.
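If you want to reproduce this experiment without editing a spreadsheet by hand, the tampering can be simulated in code. This helper (my own illustration, not part of the library) replaces a fraction of the amounts with random values in the 4xx range:

```python
import random

def tamper(amounts, fraction=0.05, low=400, high=499, seed=1):
    """Replace a fraction of amounts with random values in the
    [low, high] range, mimicking the manual modification above."""
    rng = random.Random(seed)
    tampered = list(amounts)
    for i in rng.sample(range(len(tampered)), int(len(tampered) * fraction)):
        tampered[i] = round(rng.uniform(low, high), 2)
    return tampered
```

Even a small fraction of injected 4xx values shows up as a visible spike in the digit-4 bar of the plot.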

With this in hand, we can now isolate these occurrences in our data and perform a deeper inspection and validation of the transactions, the associated workers and the workflow approver, if required. Spot-checking, but across a narrower area of focus.
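Isolating the suspect transactions is then a simple filter on the leading digit. The sketch below assumes hypothetical row dictionaries with `Amount` and `Worker` keys, standing in for the un-sanitized payroll rows:

```python
def isolate(rows, digit, amount_key="Amount"):
    """Return only the transactions whose amount starts with the
    flagged digit, narrowing the spot-check to a manageable subset."""
    suspects = []
    for row in rows:
        s = str(abs(float(row[amount_key]))).lstrip("0.")
        if s and s[0] == str(digit):
            suspects.append(row)
    return suspects
```

From tens of thousands of rows, this leaves only those matching the anomalous digit, a list small enough to inspect worker by worker.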

For further reading I recommend the work done by Mark Nigrini on the subject.