
[ARTICLE] Artificial Intelligence, the Brazilian judiciary and some conundrums

By Eduardo Villa Coimbra Campos

Although it is used across diverse technological fields, Artificial Intelligence (AI) has also swept over the judiciary. Courts around the world have been turning to AI to address difficulties such as structural and staffing insufficiency in the face of ever-growing litigation, but also in response to the demands created by the new tools used in the legal sector (automatic petition-drafting programs and predictive systems that analyse judicial decisions and statistics, among others). Consequently, the Judiciary is looking to AI to develop systems and programs for statistical assistance and case selection, but also for the preparation of drafts and judgments using databases and Machine Learning.

These are the challenges I have chosen to call “endogenous”: problems and difficulties that arise within the Judicial System itself, from the questions raised by the AI systems and programs the Judiciary uses, and that will need to be faced by the system itself, in an (almost) autopoietic way.

In the United States, for example, among the various ongoing initiatives, those that have drawn the strongest criticism are related to criminal judicial proceedings, where there is often a need to predict the likelihood that the person being prosecuted will reoffend (or commit dangerous acts). In the context of bail, for example, judges must assess whether individuals are likely to return to court for trial, and whether they are likely to engage in criminal acts if they are not held in pre-trial detention. When sentencing a defendant, the judge partly considers the probability that the person will reoffend if released after a certain period. American criminal justice has seen the widespread and growing use of predictive algorithms in bail, sentencing, and parole contexts (DEEKS, 2019). However, it has faced criticism and questioning of different orders (ZLIOBAITE et al, 2016), in particular regarding the possibility of algorithmic evaluations influenced by biases and prejudices; a very serious example came to light when an investigation revealed that software widely used to assess the risk of recidivism was twice as likely to incorrectly flag black defendants as being at higher risk of committing future crimes, while being twice as likely to incorrectly flag white defendants as low-risk (NYT-CRAWFORD, 2016).
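
To make the kind of disparity described above concrete, the sketch below shows, in Python and on purely hypothetical data, how one might audit a risk-scoring tool by comparing false-positive rates across groups; the group labels, records, and field names are illustrative assumptions, not data from any real system.

```python
# Minimal sketch (hypothetical data): comparing false-positive rates of a
# risk-score tool across two groups, the metric at the heart of the criticism above.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- illustrative only.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high, reoffended in records:
    if not reoffended:                 # only people who did NOT reoffend
        counts[group]["negatives"] += 1
        if predicted_high:             # ...but were still flagged as high risk
            counts[group]["fp"] += 1

for group, c in counts.items():
    rate = c["fp"] / c["negatives"] if c["negatives"] else 0.0
    print(f"group {group}: false-positive rate = {rate:.2f}")
```

A large gap between the printed rates, on real data, would indicate exactly the sort of asymmetric error the investigation cited above reported.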

China, on the other hand, is often seen as one of the most conservative and restrained societies in legal terms, yet its judicial system has adopted the aggressive approach of a technology company, promoting the application of electronic technologies in judicial processes; developments have happened so fast that even members of the Chinese judicial system are failing to keep up with the magnitude of the changes taking place (WANG, 2021).

The Brazilian regulation for the use of AI by the Judiciary

In Brazil, AI has already been adopted by at least half of the Brazilian courts, with at least 47 of them having AI programs and systems in use or under development, including the Supreme Court and the Superior Court of Justice (FGV, 2022).

Although there is still no specific Brazilian legislation for AI in general (some proposals are still under debate), the National Council of Justice, the regulatory and supervisory body of the Brazilian Judiciary, issued Resolution 332 of August 21, 2020, which “deals with ethics, transparency and governance in the production and use of Artificial Intelligence in the Judiciary and other provisions” (CNJ, 2020). Along the same lines as the European Charter (CEPEJ, 2019), it establishes the following principles: respect for fundamental rights, non-discrimination, equality, security, publicity, transparency, user control, governance and quality.

In the chapter on the principle of user control, Article 18 establishes the need for information, in clear and precise language, about the use of intelligent systems in the services provided, and Article 19 states that systems using AI models as an auxiliary tool for the elaboration of judicial decisions must observe, as a preponderant criterion, the explanation of the “steps that led to the result.” Although somewhat abstract, these are commendable measures aimed at minimizing the effects of the so-called opacity, or unfathomability, of such systems.
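
As an illustration of what an “explanation of the steps that led to the result” could look like in practice, the following sketch uses a simple, interpretable decision-tree classifier and prints the sequence of tests it applied to a given case; the features, data, and labels are hypothetical and are not drawn from any system actually used by the Brazilian courts.

```python
# Minimal sketch: printing the decision path of an interpretable model,
# one possible reading of "steps that led to the result". Hypothetical data.
from sklearn.tree import DecisionTreeClassifier

feature_names = ["appeal_length", "has_precedent_citation"]   # illustrative features
X = [[10, 0], [200, 1], [150, 1], [30, 0]]
y = [0, 1, 1, 0]                                              # 1 = flagged for further analysis

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

sample = [[180, 1]]
node_indicator = clf.decision_path(sample)   # nodes visited for this sample
leaf_id = clf.apply(sample)[0]

for node_id in node_indicator.indices:
    if node_id == leaf_id:
        print(f"node {node_id}: leaf -> predicted class {clf.predict(sample)[0]}")
    else:
        feat = clf.tree_.feature[node_id]
        thr = clf.tree_.threshold[node_id]
        val = sample[0][feat]
        op = "<=" if val <= thr else ">"
        print(f"node {node_id}: {feature_names[feat]} = {val} {op} {thr:.1f}")
```

Models actually deployed by courts may be far less transparent than a shallow tree, which is precisely why the resolution's explanation requirement matters.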

It is also worth mentioning that the Brazilian resolution contains a relevant preventive measure that can help minimize biases, by providing (Article 20) for the need to seek diversity “in its broadest spectrum, including gender, race, ethnicity, color, sexual orientation, people with disabilities, generation and other individual characteristics” in the formation and composition of teams for the research and development of AI solutions.

However, alongside such provisions that can be seen as positive, the resolution also contains some that call for care and reflection. One of them is Article 23, which states that “the use of Artificial Intelligence models in criminal matters should not be encouraged, especially with regard to the suggestion of models for predictive decisions”. Although understandable, given that the matter touches one of the values most important to human beings (freedom), it runs against a certain global trend which, if well regulated and used, could assist the work of judges, providing faster and more effective results for the benefit of society and of the defendants themselves. Furthermore, its §2 may prove practically unworkable, as it will be, at the very least, complex to create a device or instrument capable of predicting which conclusion “the judge” would reach and whether that conclusion would be more beneficial or harmful than the one reached by the system itself.

Finally, the most worrisome provision is the one contained in Article 19, which states that computer systems used to aid in the preparation of decisions “should allow the supervision of the competent judge.” Even though human supervision is essential, this article, by not expressly imposing an obligation of supervision but merely requiring that supervision by a human judge be allowed, leaves room for the interpretation that systems operating without the necessary human supervision are admissible, which is very disturbing.

As a consequence, the resolution under analysis is a pioneering effort of undeniable importance, but it falls short owing to its generality and vagueness, failing to provide more effective mechanisms to inspect its observance and application; it is therefore still insufficient for the magnitude of the transformation already underway. This becomes even more evident when analyzing the systems already in use in the Brazilian Judicial System, some of which are presented below.

Athos System, Victor Project, Synapses Platform and some other dilemmas

In 2018 (before the National Council of Justice resolution was issued), the higher courts of Brazil began developing their own AI systems: Project VICTOR at the Brazilian Supreme Court (“The Guardian of the Brazilian Constitution”) and the ATHOS System at the Superior Court of Justice (“The Guardian of Federal Law”).

Although their mechanisms differ somewhat, both systems work in a relatively similar way and have, as their main functionality, the identification of precedents that can be replicated in other cases at the national level. VICTOR identifies what Brazilian legislation calls “Topics of General Repercussion” for the Supreme Court, while ATHOS works on what are called “Repetitive Appeals” for the Superior Court of Justice.
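
As a rough illustration of the general technique of matching a new appeal against previously identified themes, the sketch below uses TF-IDF vectors and cosine similarity; the theme descriptions, threshold, and the second theme label are hypothetical, and nothing here is meant to describe the actual internals of VICTOR or ATHOS.

```python
# Minimal sketch: flagging a candidate "repetitive theme" for an incoming appeal
# by text similarity. Hypothetical texts, labels, and threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

themes = {
    "Theme 444": "statute of limitations for the redirection of tax execution",
    "Theme 123": "interest rates applicable to consumer credit contracts",
}
new_appeal = "appeal discussing the limitation period to redirect a tax execution"

vectorizer = TfidfVectorizer()
theme_vecs = vectorizer.fit_transform(list(themes.values()))   # one vector per known theme
appeal_vec = vectorizer.transform([new_appeal])                # vector for the incoming appeal

scores = cosine_similarity(appeal_vec, theme_vecs).ravel()
best_label, best_score = max(zip(themes.keys(), scores), key=lambda pair: pair[1])

if best_score > 0.3:   # arbitrary illustrative threshold
    print(f"candidate match: {best_label} (similarity {best_score:.2f})")
else:
    print("no candidate theme found; route to human triage")
```

Whatever the real pipeline looks like, the crucial point for the discussion that follows is that such a classification step decides, at scale, which appeals proceed and which are filtered out.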

Such systems should work as appeal filters and aim to decrease the number of cases these courts must judge. However, despite the increased speed and efficiency of processing appeals, their implementation may give rise to valid and legitimate questions.

In fact, starting from the previously mentioned opaqueness of AI mechanisms, or the unfathomability of their operation, it is first relevant to highlight that these systems have been in use for almost five years, during which thousands of appeals were admitted and others rejected through algorithmic selection or with its assistance, without all users having effective knowledge and a clear understanding of how they operate. To illustrate the significance of these two systems and the concerns they can generate, it is worth recalling the judgment of Repetitive Appeal Theme n. 444 of the Superior Court of Justice (STJ, 2020), which defined the statute of limitations for the redirection of Tax Execution. According to data from the National Council of Justice, at the time the theme was taken up, about 12,000 cases were suspended nationwide until the controversy was resolved, but the impact of the issue is thought to have reached approximately six million cases (SANSEVERINO and MARCHIORI, 2020).

From this perspective, despite offering greater accuracy and speed than human analysis would be capable of, algorithmic selection is not entirely error-free. In this sense, any rejection or non-classification of an appeal as having general repercussion may end up harming the right of defense, and the Rule of Law itself, if the machine, “improved” through Machine Learning fostered by repeated analysis and fed by an ever-growing database, makes a mistake. As is known, the development of Machine Learning can generate unexpected, unpredictable and often surprising results.

The potential damage in such a situation, if not corrected in time, can generate “injustice” in its broadest sense for hundreds, perhaps thousands of appeals and, ultimately, for the citizens who are the main recipients of the judicial system.

For example, this could happen where information from the database eventually generates an interpretative distortion on a certain “topic” that, if examined by a “human judge” (or “human intelligence”), could be the object of the necessary distinguishing. The machine, however, may not be capable of carrying out this kind of judgment, as it may not yet hold sufficient information on specific peculiarities in its database. This could also generate a “cascade effect” and, with Machine Learning, be amplified in a mistaken and biased way, applying peculiar understandings or adjudications to cases that demanded a distinctive analysis, even though they are founded on the same “root” precedent or set of precedents. This does not mean, of course, that such a scenario cannot also occur, to a greater or lesser extent, with human analysis, which is itself subject to failure. In any case, given the magnitude of the analyses carried out by VICTOR and ATHOS, precisely what they propose to do best (reducing analysis time and increasing throughput), it is a hypothesis that cannot be neglected and deserves debate and further development.

In the specific case of the ATHOS system, some difficulties were reported in the monitoring of its results (STJ Process n. 028532/2020), such as conflicts between the system and the security mechanisms; internally, it was also concluded that “the model currently in training did not converge sufficiently”, requiring adjustments in the data sample used. Furthermore, a study conducted to validate the gains and benefits arising from the use of the system found that the model does not work well with short documents, which forced the implementation of filters for this condition to avoid inappropriate responses (FGV, 2022).

In this sense, the dimensions that any problem in their operation could assume are obvious, considering the magnitude of the consequences of the repetitive-appeal mechanism shown in the example above and the possibility of an error or malfunction of the system, whether due to lack of convergence, poor or insufficient training, some inconsistency in the database, or any other difficulty.

In addition, while Machine Learning strategies may bring significant gains in efficiency and speed, as already mentioned, they also have the potential to generate undesirable and originally unforeseen results, which is, if not commonplace, at least to be expected in the future.

Although the gains achieved are clear, these systems, now in full operation, will not improve adequately without complete and easy access to information for users. Consequently, there is a need for greater transparency and public availability of information about their use, operation, benefits, and possible problems.

As a matter of fact, the public and facilitated availability of such information is directly related to the Brazilian constitutional principle of (general) publicity of the Public Administration (Article 37 of the Brazilian Constitution of 1988), but also to publicity stricto sensu, also provided for in the Brazilian Constitution (Article 5, item LV, and Article 93, IX) and in ordinary legislation (Article 189 of the Brazilian Civil Procedural Code), among others, as well as in international treaties to which Brazil is a signatory (such as Article 8, paragraph 5 of the American Convention on Human Rights).

Finally, the search for increased productivity and efficiency of these systems should not overshadow the need for human and political analysis, supervision and revision, which are essential for their rational functioning and for respect of the Rule of Law.

Such systems were implemented before the National Council of Justice resolution, and there is no indication that they have been adapted to its parameters, perhaps because it is understood that they already comply with them. Incidentally, it is important to note that the Brazilian Supreme Court is not subject to the supervision of the National Council of Justice or its regulations, so its AI system (VICTOR) is not strictly bound to follow such regulation. Moreover, the National Council of Justice itself, from which the aforementioned resolution emanated, has a unifying and accelerating system for the use of AI in the Judiciary, the SINAPSE Platform, a platform for the development and large-scale availability of AI prototypes. Known as the “AI Model Factory,” it aims to facilitate the development of AI systems, making prototypes available at a scale that is not achievable when they are developed in the traditional way (CNJ, 2019).

As a result, AI has been widely used in the Brazilian judicial system. In addition to the official, proprietary systems, there is another possibility that deserves attention and causes me particular concern: the indiscriminate use of the much-talked-about ChatGPT. The use of such a system by a Colombian judge has recently been reported (The Guardian, 2023), and in Brazil there has been great enthusiasm about its use. In fact, in addition to “informal” use by judges and law clerks, the Court of Justice of Minas Gerais State disclosed the development of its own system based on ChatGPT, named SAVIA (Artificial Intelligence Virtual Assistant System).

Notwithstanding the wide range of possibilities for increasing judicial productivity and expanding access to justice, and beyond the imprecision and failures of the system in question, its use has caused debates and questions in diverse sectors, and the judiciary is no exception. Firstly, it is important to note that the impossibility of identifying and tracking the data and sources that give rise to the answers given by the bot in question can generate insecurity in its use, given the impossibility of ensuring that the sources are not contaminated with bias, inaccuracies, or even bad faith. It is also possible that its use will infringe the intellectual property rights of the authors of these same sources, without due reference or recognition, as has already been questioned in other sectors, such as the lawsuits dealing with AI art systems. Likewise, although it is desirable that the product of its use be reviewed and corrected by judges, in principle there is no way to ensure that this occurs in practice. In addition, as mentioned, despite providing for the possibility of human review, the current resolution does not make it a mandatory rule. Judges themselves may also not be aware that their law clerks and assistants use the system to prepare the drafts submitted for the judges' review. Beyond the trust that must exist between judges and their law clerks, there is not much that can be done about this, as control will be practically impossible.
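
One partial mitigation, sketched below under purely hypothetical assumptions, is to wrap any generative-model call in an audit log so that each AI-assisted draft is recorded and can later be reviewed and disclosed; `generate_draft` is a placeholder for whatever model a court might actually call, and the record fields are illustrative only.

```python
# Minimal sketch: an audit-logged wrapper around a hypothetical generative-model
# call, so that AI-assisted drafting leaves a reviewable trace. Illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def generate_draft(prompt: str) -> str:
    # Hypothetical stand-in for an external generative model.
    return f"[draft generated for prompt: {prompt}]"

def audited_draft(prompt: str, case_id: str, author: str,
                  log_path: str = "ai_usage_log.jsonl") -> str:
    draft = generate_draft(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "author": author,
        # hashes allow tracing without storing possibly confidential content
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "draft_sha256": hashlib.sha256(draft.encode()).hexdigest(),
        "reviewed_by_judge": False,   # must be flipped explicitly after human review
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return draft

draft = audited_draft("summarize the arguments in appeal X",
                      case_id="0001234-56.2023", author="law_clerk_1")
```

Such a log does not solve the untraceability of the model's sources, but it would at least make the fact and extent of AI assistance visible to the reviewing judge and, potentially, to the parties.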

Further, the use by judges of AI systems in general, and of ChatGPT in particular, raises concerns directly related to the possibility, or right, of being judged by another human being, or at least of learning that such a system was used in making the decision that will resolve one's case. The current Brazilian resolution demands information on the judicial use of AI systems and an explanation of the steps taken by the system to reach a result, but it deals, in principle, with institutional systems developed by the Brazilian courts themselves. It did not foresee the use of widespread AI mechanisms such as ChatGPT, nor the apparent inherent impossibility of explaining their response process. From another perspective, the draft of the main proposal for a Brazilian law regulating the general use of AI guarantees the right to prior information about interactions with artificial intelligence systems and to an explanation of the decisions, recommendations, or predictions they make. This is a broader solution than the resolution of the National Council of Justice and may perhaps encompass the judicial use of ChatGPT. However, given the volatility of the Brazilian legislative process, many changes can still be made, and one cannot be sure of the final text.

In any case, whether at the legal or regulatory level, it should be considered whether the non-institutional use of ChatGPT should be freely permitted in judicial work, and with what scope and detail litigants should be informed. Given the inexorable use of AI in general, and of ChatGPT in particular, by the Judicial System, it is necessary to reflect on its limits and parameters. For a minimally responsible and ethical use, decisions generated with its assistance should make express reference to that use, providing the transparency needed, among other things, for a party to challenge possible inaccuracies of the system, or, conversely, to rely on the same resource in its own arguments.

In this context, although unwelcome to a large part of the legal community, the deeper question of the partial replacement of judges by machines is perhaps inevitable, and in a not-too-distant future, given the current Brazilian legal and regulatory context; but it is essential that this be faced on ethical foundations, with full knowledge by the users of the judicial system, especially its main addressee: the citizen.


Eduardo Villa Coimbra Campos is a Judge at the Court of Justice of the State of Paraná, Brazil, and a former Public Attorney of Mato Grosso do Sul State. He is also a Professor at the Judicial School of Paraná and holds a Master's in Law from Vale do Rio dos Sinos University (Unisinos). Eduardo is currently collaborating as a guest researcher in the International Survey on Judges and Technology of the University of Newcastle, Australia.

Note: This article was written based on an adapted excerpt, with changes and additions, from Eduardo‘s yet to be published Master’s thesis “Challenges of the Implementation of Artificial Intelligence In the Brazilian Judicial System: How Academy and the Judiciary can work together to rationalize the transformations resulting from the adoption of Artificial Intelligence in the Brazilian Judicial System”.