A number of AI algorithms operate in a “black box” manner, meaning that it is difficult to understand how the system has arrived at a decision. A black-box model does not explain itself or reveal the logic used to produce its results. The increasing use of “black box” models has sparked a debate about algorithmic accountability and led to calls for greater transparency in algorithmic decision-making, both transparency in the form of explanations provided to individuals and transparency in the form of audits that enable expert third-party oversight.

More and more tax administrations are using algorithms and automated decision-making systems for taxpayer profiling and risk assessment. While these tools offer benefits in terms of better resource allocation and cost-efficiency, their use is not free from legal concerns.

The General Data Protection Regulation (GDPR) contains four articles that explicitly address algorithmic decision-making. Article 22 of the GDPR addresses “automated individual decision-making, including profiling.” It gives individuals the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. Such decisions are permitted only in limited circumstances, for example where the individual has given explicit consent, where the decision is necessary for a contract, or where it is authorized by Union or Member State law. A controller applying automated decision-making tools must in any event implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests”.

Articles 13, 14, and 15 of the GDPR underpin what is often called the “right to explanation” by requiring organizations that handle the personal data of EU citizens to provide them with an explanation of how an automated decision has been arrived at. The Guidelines on Automated Decision-Making note that “complexity is no excuse for failing to provide information.” The organization must mention the “factors taken into account for the decision-making process” and “their respective ‘weight’ on an aggregate level.” It does not need to provide a complex mathematical explanation of how the algorithms work or disclose the algorithm itself, but the information provided must be comprehensive enough for the individual to act upon it: to contest the decision, correct inaccuracies, or request erasure. Thus, the GDPR creates a barrier to using “black box” models to make decisions about individuals if a suitable explanatory mechanism does not exist.
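To make the contrast with a “black box” concrete, an inherently interpretable model can report exactly the kind of information the Guidelines describe: the factors taken into account and their respective weight. The sketch below is purely illustrative; the factor names and weights are invented for this example and are not drawn from any real tax administration’s system.

```python
# Hypothetical, simplified risk-scoring model used only to illustrate what a
# "transparent" explanation can look like; factor names and weights are
# invented for this example.

FACTOR_WEIGHTS = {
    "late_filings": 0.5,            # relative weight of each factor
    "cash_intensive_business": 0.3,
    "deduction_ratio": 0.2,
}

def risk_score(taxpayer):
    """Weighted sum of factor values, each normalized to [0, 1]."""
    return sum(FACTOR_WEIGHTS[f] * taxpayer.get(f, 0.0) for f in FACTOR_WEIGHTS)

def explain(taxpayer):
    """Per-factor contributions, largest first: the 'factors taken into
    account' and their 'respective weight' that an individual could contest."""
    contributions = [(f, FACTOR_WEIGHTS[f] * taxpayer.get(f, 0.0))
                     for f in FACTOR_WEIGHTS]
    return sorted(contributions, key=lambda kv: kv[1], reverse=True)

taxpayer = {"late_filings": 1.0, "cash_intensive_business": 0.0,
            "deduction_ratio": 0.5}
print(risk_score(taxpayer))   # → 0.6
print(explain(taxpayer)[0])   # → ('late_filings', 0.5)
```

With such a model, the taxpayer can be told which factor drove the decision and by how much, which is precisely the kind of actionable information a “black box” model cannot readily supply.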

One of the fundamental guarantees established by the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR) is the right to a fair trial, which includes the following minimum guarantees: the equality of arms, the rights of defence, and the presumption of innocence. The principle of equality of arms may be breached if the accused has limited access to his case file or to other documents concerning him. The rights of defence include the right to be promptly informed of the nature and cause of the accusation. The accused must be provided with sufficient information to understand fully the extent of the charges against him. He must be given an opportunity to challenge the authenticity of the evidence and to oppose its use.

A decision that is made solely on the basis of a “black box” model will likely conflict with the rights of defence and the equality of arms. If the taxpayer does not know how the decision was reached, there is no fair balance between the parties: the taxpayer is hindered in his ability to provide evidence because he does not understand which objective factors the algorithm took into account. The use of “black box” models may therefore be questioned from the perspective of the right to a fair trial.

This article has provided a brief overview of the legal obstacles to the use of black-box models in the area of taxation. If you are interested in this topic, please take a look at my article Ruled by Algorithms: The Use of ‘Black Box’ Models in Tax Law (Tax Notes International, vol. 95, no. 12, 16 Sep. 2020).