
Protecting Judicial Decisions in the Age of Artificial Intelligence: A New Challenge to the Rule of Law

Murtada Abdalla Kheiri1
1Associate Professor of Civil Law at A’Sharqiyah University, Oman.

Abstract

In a world where many children learn to interact expertly with electronic devices and to navigate mobile operating systems before they can even speak, and have grown up never knowing the idea of “disconnecting” from the constant flow of content on the screen, it is no longer possible to reject development; we must adapt to the requirements of the new stage as we stand on the brink of the fifth industrial revolution [1], in which artificial intelligence technologies merge with the work of the human element and robots coexist with humans. Whatever name is given to the next technological era, and whatever number it is labeled with, it will be the last era whose inventions and discoveries humans accomplish exclusively on their own. This prompts experts in the field to suggest that “merging with artificially intelligent technology will be like learning how to live with a new species” [2].

In recent years in particular, artificial intelligence technology has achieved remarkable successes that prove its pioneering capabilities. Discussion of it is no longer confined to academic circles: it has moved to the front rows of official decision-making, and competing voices have been raised about it from the highest platforms [3].

The contributions of AI technologies to combating the coronavirus pandemic are still fresh in our minds. Robots were used to reduce human contact: nursing robots, delivery robots, surveillance drones, sterilizing robots, and mobile robots that detect infected people on the street. Big data technologies were also used to examine street surveillance cameras and recognize faces, as happened in China, where an algorithm combines the health record, the criminal file, and the public transport travel map to identify all the people who have been in contact with an infected person, and to quarantine them accordingly [4].

This reality has begun to cast its shadow over judicial systems. With the entry of artificial intelligence systems into the justice sector, imposing their challenges and even throwing their problems in its face, the judicial decision now faces serious challenges that put legitimate questions on the research table. The answers to those questions converge with the strategic direction of Oman Vision 2040 in the fields of legislation, judiciary, and oversight, in terms of strengthening the rule of law in society and achieving community security; more generally, they address the application of the foundations of governance in the organization of legislation and the judiciary. The study therefore revolves around the extent of the benefit of introducing artificial intelligence systems in the stage of preparing the judicial decision and the limits of that benefit (section one), and the extent to which these systems may be used in the stage of building the judicial decision itself (section two).

1. Limits of using Artificial Intelligence in Judicial Work

For the purpose of benefiting from artificial intelligence systems in the field of justice, it is necessary first to delineate the concept of artificial intelligence and its applications in judicial work (paragraph one), and then the scope of its intervention and the boundaries of its participation in the judicial decision (paragraph two).

A. The concept of artificial intelligence and the benefits of its use in judicial work

Given the novelty of the concept of artificial intelligence and the diversity of the tasks and activities it can perform, the definitions given to it by specialists have varied: one group of definitions focuses in its formulation on the purpose of its use, while another describes its characteristics. The definitions also differ in determining the nature of artificial intelligence, owing to a fundamental disagreement over the definition of intelligence on the one hand, and disagreement among specialists over what could constitute artificial intelligence in general on the other [5].

Among AI scholars, Richard Bellman defined it as “the automation of activities we associate with human intelligence, such as decision-making, problem-solving, or learning.” John McCarthy considered it “the science and engineering of making intelligent machines.” Luger, for his part, defined AI as “the branch of computer science concerned with the automation of intelligent behavior” [6]. Focusing on its capabilities, AI has also been defined as the ability of a system to correctly interpret external data, to learn from such data, and to use that knowledge to achieve specific goals and tasks through flexible adaptation [7].

In general, artificial intelligence is a science that enables machines to learn through experience, by interacting intelligently with, and adapting to, the new data they obtain. This enables them to carry out the required tasks even when they encounter new circumstances and new tasks. In other words, this technology enables the machine to think and perform tasks in the manner of humans [8], by training it with algorithms [9] so that it can absorb the data it obtains and learn from it. In short, artificial intelligence rests on computer systems designed to interact with the world through capabilities we think of as human.

As for defining artificial intelligence in international documents, the United Nations Commission on International Trade Law summarized in its memorandum [10] the difficulties facing the definition process and some of the risks posed by artificial intelligence systems, shedding light on several of its aspects: “a number of definitions of artificial intelligence have been developed, but none of them have gained global acceptance. Artificial intelligence in general is the science of devising systems capable of solving problems and performing functions by simulating mental processes. Artificial intelligence can be taught how to solve a problem, but it is also capable of studying the problem and learning how to solve it on its own. Different systems can reach different levels of autonomy and are able to act independently; in this regard, it is not possible to predict the work of those systems or their results, because they act as black boxes.”

In turn, the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) focused on the tasks of artificial intelligence, so that its attempt at a definition came closer to an explanation: it involves machines capable of imitating certain functions of human intelligence, including features such as perception, learning, reasoning, problem solving, linguistic interaction, and even producing creative work. In the context of the draft European regulation on artificial intelligence proposed by the European Parliament and the Council of the European Union, Article 3 defined an artificial intelligence system as “software in which one or more of the techniques listed in Annex I of the same draft are used, and which can generate outputs such as content, suggestions or decisions influencing the environments with which the software interacts, within a set of human-defined objectives” [12].

The Committee of Ministers of the Council of Europe (Conseil des ministres du Conseil de l’Europe) has tasked the Council’s Committee on Artificial Intelligence with developing a binding legal instrument to ensure that the emergence, development and application of AI systems are based on the rules established by the Council of Europe in the field of human rights, democracy and the rule of law, and are grounded in fundamental rights. The Committee has indeed developed the “Zero Draft of the (Framework) Convention on Artificial Intelligence, Human Rights and the Rule of Law”, published in January of this year. Its second article defined artificial intelligence as “algorithmic systems or any combination of such systems that use computational methods derived from statistics or other mathematical methods in order to perform tasks generally assigned to human intelligence or that usually assume and require the use of such intelligence to perform, and which are intended to assist or replace the judgment of human decision-makers in performing the aforementioned tasks. These tasks include, but are not limited to, prediction, planning, speech, sound and image recognition, text and sound generation, translation, communication, learning and problem solving” [14]. From this, it may be noted that the definition of artificial intelligence revolves around two main axes: the axis of action or behavior, which evaluates the work of artificial intelligence and its success in performance relative to the efficiency of the human element (and which we will employ in the service of this section of the research); and the axis of thinking and logic, which relates directly to the process of producing judicial decisions by, or with the help of, artificial intelligence (and on which the second section will be built) [15].

Despite the difficulty of formulating a comprehensive, unified and clear definition of artificial intelligence [16], researchers’ attempts have been matched in reality by abundant practical applications of its technologies: from drones to self-driving vehicles, through customer-service chatbots and facial recognition technology, to medical diagnosis, not to mention the spread of its applications in our daily lives through “recommendation algorithms” [17], which now shape all our experiences as users and consumers and are used by e-commerce sites such as Amazon, social media sites, movie platforms such as Netflix, online game stores such as Steam, and music and video services such as Spotify.

And here is artificial intelligence making rapid strides into palaces of justice around the world. In China, specifically the Pudong district of Shanghai, artificial intelligence has stormed the criminal field through its widest doors, having been employed to stand in for the Public Prosecution, and its experiment has achieved an accuracy rate of 97% according to the announced figures. In the United States of America, artificial intelligence is also present in the criminal field, where the Public Safety Assessment (PSA) tool is used, among other technologies, to assist the judge in deciding whether to keep a person in custody or release him. In Vancouver, Canada, civil tribunals have been established to which citizens can resort by themselves, completing the litigation procedures through an open tool, the Solution Explorer. There is no doubt that artificial intelligence is capable of addressing many of the difficulties facing judicial systems on the road to better justice, and its use can bring great benefits in improving performance and efficiency by automating many administrative and routine tasks. Ultimately, the use of artificial intelligence in the field of justice can accelerate progress towards Goal 16 of the United Nations 2030 Agenda for Sustainable Development (Peace, Justice and Strong Institutions), which makes learning about it useful and beneficial [18].

Accordingly, the great opportunity these tools provide to help advance knowledge, scientific production, and practical practice should be recognized. Judges can use AI to sort and analyze documents in a preliminary manner, schedule court sessions, and record the minutes of those sessions using automatic speech recognition and AI-powered transcription to convert spoken language into digitized text, as well as to provide simultaneous translation during interrogations and witness hearings. AI can also be used to conduct legal research, and even to adjudicate simple cases.

The involvement of many countries in the race to test AI technologies in the field of justice presents us with experiments of varying success. Brazil’s experience with the Victor program, which uses AI and natural language processing (NLP) to analyze cases and determine whether they are appealable, has been met with much criticism.

In contrast, India’s experience in integrating AI into the core of administrative work in courts, where the Supreme Court’s Artificial Intelligence Committee has developed a program that uses NLP technology, has been welcomed. This technology has been used to translate decisions and rulings written in English into local languages, as well as to create a program to review cases brought to the Supreme Court (an average of 70,000 cases per year), sorting them into groups and topics, identifying cases that contain unified legal problems, and rejecting flawed appeals.
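The sorting step described above can be pictured as a text-similarity grouping task. The following is a minimal illustration in Python, not the Supreme Court’s actual system: the tokenizer, the greedy clustering, and the similarity threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch: grouping case summaries by lexical similarity,
# in the spirit of the NLP-based case sorting described above.
# Names, thresholds, and the toy data are illustrative assumptions.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a case summary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_cases(summaries, threshold=0.5):
    """Greedily cluster cases whose summaries exceed a similarity threshold."""
    groups = []
    for text in summaries:
        vec = vectorize(text)
        for group in groups:
            if cosine(vec, group["centroid"]) >= threshold:
                group["cases"].append(text)
                group["centroid"] += vec  # merge counts as a crude centroid
                break
        else:
            groups.append({"centroid": vec, "cases": [text]})
    return [g["cases"] for g in groups]

cases = [
    "appeal against conviction for tax fraud",
    "appeal against conviction for customs fraud",
    "petition challenging land acquisition order",
]
print(group_cases(cases))
```

A production system would use richer features (lemmatization, legal ontologies, trained embeddings) rather than raw word counts, but the grouping logic is of this general kind.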

For its part, the Estonian Ministry of Justice led a pioneering project by asking its head of data, Ott Velsberg, to help design a “robot judge” to handle cases worth no more than 7,000 euros [20]. The parties upload their documents and arguments to a special platform, and the artificial intelligence issues its decision in the dispute, which can be appealed before a human judge.

In France, two practical applications of artificial intelligence have been introduced into the administrative work of the courts. First, an artificial intelligence program was created to find points of intersection and similarity between decisions of the Court of Cassation in non-criminal cases and the appeals submitted to it, through automated filtering of keywords in the solutions provided by the Court’s published decisions, with the aim of initially directing each appeal to the competent cassation chamber according to the distribution of work (civil, commercial, social) [21]. The other artificial intelligence program aims to assist the French Court of Cassation in concealing the identity of litigants. To clarify the benefit of this, it is worth noting that the Court of Cassation is now responsible for the open data of the French judiciary, allowing the public to view approximately 480,000 judgments and decisions issued by the French judiciary through the search engine Judilibre, after the names of litigants have been replaced (travail de pseudonymisation). The aforementioned program aims to identify the elements that allow the re-identification of litigants [22] (beyond their names and personal data), in order to neutralize those elements as well, so that the court’s work of concealing identities is effective.
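The pseudonymisation task just described can be illustrated with a toy sketch. The code below is a hypothetical simplification, not the Court of Cassation’s program: real systems rely on trained named-entity recognition, whereas here the party names are assumed to be known in advance and the re-identification patterns (birth dates, street addresses) are illustrative.

```python
# Hypothetical sketch of pseudonymisation: replace known party names with a
# neutral placeholder, then flag residual elements that could allow
# re-identification. Patterns and labels are illustrative assumptions.
import re

PARTY_PLACEHOLDER = "[X]"

def pseudonymise(text, party_names):
    """Replace each known party name with a neutral placeholder."""
    for name in party_names:
        text = re.sub(re.escape(name), PARTY_PLACEHOLDER, text)
    return text

def flag_reidentifying_elements(text):
    """Flag elements that could re-identify a litigant despite name removal."""
    patterns = {
        "date_of_birth": r"\b\d{1,2}/\d{1,2}/\d{4}\b",
        "street_address": r"\b\d+\s+(?:rue|avenue|boulevard)\s+\w+",
    }
    return {label: re.findall(rx, text, flags=re.IGNORECASE)
            for label, rx in patterns.items()}

decision = ("Mr Jean Dupont, born 12/03/1980, residing at 4 rue Verte, "
            "appeals the ruling against Jean Dupont.")
anonymised = pseudonymise(decision, ["Jean Dupont"])
print(anonymised)
print(flag_reidentifying_elements(anonymised))
```

The second function captures the idea in the text above: removing names alone is not enough, so the program must also locate the surrounding elements that would let a reader reconstruct who the litigant is.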

In addition to these practical examples [23], another initiative is under study and experimentation. In this context, the French Court of Cassation signed a scientific cooperation agreement with the research center at HEC in order to estimate numerically the level of difficulty of the cases brought before it, by processing the natural language used in the appeals submitted to it. The sample included about 66,000 appeals, with the aim of inferring the relationship between the legal problems raised and the legal materials addressed in the appeals, on the one hand, and the outcome of the judicial course of the case, on the other. Through this cooperation, the court is working to train the artificial intelligence program to produce a preliminary classification of the appeals and direct each to the path through which it will be processed, since appeals before the French Court of Cassation have been subject, since October 2021, to a threefold classification for processing according to their degree of difficulty [24].

But the great opportunity presented by AI technology must be framed by controls that address the risks that may be imposed by the growing connection between AI and judicial systems.

B. The control of non-exclusivity of decision-making by AI

Today, humans teach machines what they know, and also train them to learn by themselves from the “environment” with which they interact. As their ability to learn on their own increases, predicting and controlling their behavior becomes more difficult, and the level of challenge rises. Experience has shown that the behavior of these systems is not as “innocent” as we expect. For example, people with dark skin have found that programs distort their skin color, or fail to recognize it at all, causing them many everyday problems, from automated faucets in public restrooms to lighting settings on iPhones [25].

This example is just a simple embodiment of the types of discrimination shown by the use of AI entities. Different forms of discrimination (conscious or unconscious) among programmers are transferred to the technology, which is vulnerable to inheriting the biases of its designers and could thereby reinforce the spread of stereotypes, injustice and social bias [26]. What makes matters worse is that studies have shown that AI entities can develop their own biases [27].

Despite this worrying situation, this algorithmic technology is penetrating justice systems and law enforcement agencies [28]. When we talk about the introduction of AI into the courts of justice, we cannot fail to discuss one of the most prominent services it provides: the use of “predictive” algorithms, which generally fall under the term “predictive justice.” By definition, predictive justice means using a set of advanced tools which, thanks to the processing of a huge amount of legal data, suggest, through the calculation of probabilities, the expected outcome of a dispute [29].

The use of this technology aims primarily to anticipate the dispute, with the litigant as the active element in the process: it gives him an idea of the outcome of the judicial process were it to be launched, so that he either takes the initiative or refrains from it. It may also serve as a tool for reaching conciliatory solutions, since it allows the parties to take note of the extent of their obligations and concessions, and thus to choose alternative means of resolving the dispute. In the context that concerns us, this service does not directly involve the judge himself. However, practical experience has shown attempts to establish a link between predictive justice and achieving justice in the narrow sense, by “introducing” artificial intelligence into the process of producing the judicial decision itself [30]. The manifestations of this use, however, have produced disturbing results.

Judicial decisions produced using artificial intelligence programs have shown bias against certain categories of people [31]. The decisions of judges in the United States of America regarding conditional release and the method of implementing sentences, in which they use the famous COMPAS program [32], an AI tool that measures a person’s level of criminal risk by singling out those most likely to reoffend, have sparked widespread controversy. In 2016, a survey conducted by the non-governmental organization ProPublica [33] concluded that the data used by the COMPAS algorithm was biased, and therefore that the algorithm itself was biased against minorities [34]. It found that 44.9% of people of African descent classified by the program as being at high risk of recidivism did not actually reoffend in the two years following their release, whereas only 23.5% of white people so classified did not reoffend; in other words, 76.5% of them became recidivists [35]. Voices were raised, even within the judiciary itself, to denounce the use of this program, which would exacerbate “the disparities and unfair and unjustified discrimination that are already prevalent in the judicial system and society as a whole” [36]. In another study, conducted at Dartmouth College and published in the journal Science Advances on January 17, 2018, computer scientists Julia Dressel and Hany Farid concluded that the program’s accuracy in predicting recidivism (65.2%) is very close to the rate achieved by people with no experience in the legal or judicial field (67%). Recently, a new study conducted at Harvard University examined the effect of using the COMPAS algorithm on people’s estimation of the risk of recidivism. Among its many conclusions, the study found that when its subjects took into account the recidivism rate predicted by the program, they recorded a higher estimate of the recidivism risk of black people compared to white people [37].
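For readers who wish to see how such disparity figures are derived, the sketch below computes per-group false positive and false negative rates from a confusion matrix. The counts are invented for illustration only; they are not the COMPAS or ProPublica data.

```python
# Illustrative arithmetic only: how per-group disparity figures are derived
# from a confusion matrix. The counts below are made up for the sketch.
def error_rates(tp, fp, tn, fn):
    """False positive rate and false negative rate for one group."""
    fpr = fp / (fp + tn)   # non-recidivists wrongly labeled high risk
    fnr = fn / (fn + tp)   # recidivists wrongly labeled low risk
    return fpr, fnr

groups = {
    # group: (tp, fp, tn, fn) -- hypothetical counts
    "group_a": (300, 250, 310, 120),
    "group_b": (200, 90, 290, 180),
}
for name, counts in groups.items():
    fpr, fnr = error_rates(*counts)
    print(f"{name}: FPR={fpr:.1%}, FNR={fnr:.1%}")
```

The debate around COMPAS turned precisely on which of these rates should be equalized across groups: a program can satisfy one fairness measure while diverging sharply on another.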

Even the jurisprudence of American courts has treated the results of this program with caution. The issue was raised before the Supreme Court of the State of Wisconsin in 2016 in the case State v. Loomis [38]. Although the court rejected the appeal submitted by the accused, who argued that reliance on the algorithm deprived him of an individualized judicial decision, it noted that the algorithm’s results had not been relied on alone in building the decision, and that the court retains the discretionary power not to adopt the program’s results if it considers them inappropriate. The court held that judges should be cautious when considering the results of the algorithm.

In addition, the Supreme Court’s decision specified five points that must be present for the result to be valid [39], thus confirming a skeptical position on the accuracy of the algorithm’s results, as well as questioning the way this program deals with minorities [40].

In the same context, the European General Data Protection Regulation (GDPR) enshrined the right not to be subject to a decision “based solely on automated processing” [41].

Therefore, it is time to act, for smart machines have become capable of learning from the dark side of our human nature. Either we adjust our sails towards the noble shore, or we all drown in a long night.

This study does not aspire to survey the ethical standards that should be applied in this field; today there is a wide range of organizations and initiatives that draft positions, formulate policies, propose guidelines and laws, and continually conduct research on the ethics of these technologies [42]. In this context, the European Charter for the Use of Artificial Intelligence in Judicial Systems and Their Environment set out five basic principles as a roadmap. Under the fourth principle, on transparency, neutrality and intellectual integrity, the methodology by which the data was processed must be made accessible and understandable.

The fifth principle focuses on empowering the user: the Charter requires that the user be an informed actor in control of his choices. Individuals working in the justice system must always be able to review the decisions they have made and the legal data they used for that purpose, and must remain able to take different decisions given the specificity of the facts at hand.

In light of this, judges are called upon not to postpone the task of applying this ethical vision, but to take the initiative to acquire what enables them to supervise responsibly the introduction of artificial intelligence into the justice sector. As the actor most directly concerned by the introduction of artificial intelligence systems into his judicial work, a judge who learns about these systems and is trained in their ethics is likely to establish himself as a fundamental player and indispensable decision-maker in forums on the ethics of artificial intelligence, to delegitimize forums that exclude his participation, and to undermine confidence in public policy decisions taken without him. In addition, such training justifies, and even requires, his opinion in evaluating the use of these systems in the field of justice throughout their life cycle.

The judiciary should be adequately represented in the credible monitoring and evaluation of programs and mechanisms related to the ethics of artificial intelligence, by participating in the development of quantitative and qualitative approaches. Such knowledge would also enable the judge to make informed decisions in several areas: deciding, for example, to use artificial intelligence systems in one area while excluding them from others, and justifying the choice of one method over another on the basis of its suitability for the purpose or legitimate goal to be achieved, after a thorough assessment weighing the necessity of use against the potential risks. Nor should we forget the need to ensure procedures guaranteeing accountability, determining responsibility for harmful acts resulting from the operation of these programs, securing a genuine possibility of challenging their application and the right to appear before a judge for that purpose, and ensuring that transparency requirements are applied in a manner consistent with the specific context in which the programs are used.

In addition, the judge must remain firmly aware of his ultimate decision-making authority: the final decision stays in his hands, and no matter how mature these systems become in the future, in cases involving fateful decisions it is he who determines the outcome.

More realistically, and given the awareness that the intervention of artificial intelligence in the field of justice, through its processing of legal data and judicial precedents, can lead to disturbing results, educating the judge about this technology would enable him to participate effectively in preventing them, by establishing what is called ethics from the outset, or “ethical-by-design.”

Thus, such learning would enable him to supervise and manage, from the stage of designing and training artificial intelligence, the introduction of principles and rights that may not be violated, so as to ensure a use that respects the fundamental rights of individuals, groups, the environment, the rule of law and institutions, especially for groups subject to discrimination and people in vulnerable situations. His education would sharpen his awareness that the use of this technology can reproduce forms of discrimination and biased practices through its reliance on certain sensitive data (données sensibles): race, gender, sexual orientation, political opinions, religious and philosophical beliefs, health and medical information, and so on. Both at the design stage and at the stage of use, the judge’s conscious, corrective intervention can put an end to the risks of using such data, or prevent their consolidation and perpetuation.

2. Cautions of Using Artificial Intelligence in the Judicial Decision-Making Phase

While the headline concerns may involve the dominance of artificial intelligence over the human element, the more immediate and pressing risks lie in how the work in which this technology participates is actually practiced. Hence, while optimism about introducing artificial intelligence systems into the justice system promises hoped-for results (paragraph one), the dangers accompanying those results must always be kept in mind (paragraph two).

A. Expected results from the use of artificial intelligence in judicial decisions

In reality, what concerns litigants is not the abstract legal orders formulated in a general and impersonal manner, but what the judge may decide individually in their personal case. Hence, legal-technology entities (legaltech) [43] have worked hard on tools that can predict the outcome of trials in advance, one of the applications of predictive justice. The subject is not new, but today it is taking on a growing scale. As early as 1963, foundations were laid for the digital processing of case-file data in order to try to predict their acceptance or rejection by the courts [44]. Those who designed that computer program considered that understanding the methods by which judges interpret fact and law would lead to constants (constances), and thus allow the outcome of a case to be predicted. Later in the twentieth century, efforts in this field intensified: some sought to build mathematical models, others relied on probabilities (probabilités) [45] or on correlations [46] through which the outcome of a judicial decision could be predicted.

In addition, British and American researchers conducted a study, published on October 24, 2016, on the design of an artificial intelligence system that would predict the outcome of a judicial decision. The study covered about 600 cases decided by the European Court of Human Rights, and the program’s proposed result was limited to predicting the acceptance or rejection of the case by that court. The result reached by the algorithm matched the outcome of the decisions under study in 79% of cases. The algorithm worked by discovering textual tendencies (tendances textuelles) leading to predictable conclusions as to violation or non-violation of the European Convention on Human Rights: the relevant facts, the parties’ legal arguments about them, and the legal materials applied (as data) are likely, in general, to produce similar decisions [47].
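A heavily simplified sketch of this kind of outcome prediction follows. The toy corpus and the plain perceptron classifier below are illustrative stand-ins chosen for brevity; the 2016 study itself used n-gram features and a support vector machine over a much larger corpus.

```python
# Minimal sketch, under stated assumptions, of text-based outcome prediction:
# a classifier learns word weights from the language of past decisions and
# predicts "violation" vs "no violation". Toy data, not the study's dataset.
from collections import defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples, epochs=10):
    """Learn word weights with a simple perceptron update rule."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:   # label: +1 violation, -1 no violation
            score = sum(weights[w] for w in tokenize(text))
            predicted = 1 if score >= 0 else -1
            if predicted != label:     # mistake-driven update
                for w in tokenize(text):
                    weights[w] += label
    return weights

def predict(weights, text):
    score = sum(weights[w] for w in tokenize(text))
    return "violation" if score >= 0 else "no violation"

corpus = [
    ("applicant detained without judicial review", +1),
    ("prolonged detention no effective remedy", +1),
    ("complaint manifestly ill-founded", -1),
    ("domestic remedies not exhausted complaint rejected", -1),
]
w = train(corpus)
print(predict(w, "detention without remedy"))
```

The point of the sketch is the one made in the text above: the classifier never reasons about the law, it merely detects recurring textual tendencies that correlate with past outcomes.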

In this context, the actual use of artificial intelligence today in various justice settings raises profound questions about the process of judicial decision-making in the age of artificial intelligence, and about the impact of the answer on the end user of this intelligence, namely the litigant.

The question today is what benefit artificial intelligence can provide, and what positive impact can be expected from it on the final result of the judicial system’s work [48]. And is it possible, indeed is it even proper, for information technology to become a real tool that participates with the judge in producing the judicial decision itself?

What some expect from introducing artificial intelligence into the justice system is the achievement of legal security (la sécurité juridique), as part of the aspiration to rationalize the judicial decision by seeking a kind of unification in this framework [49].

This enthusiasm runs up against the denial, by a group of legal experts, that legal security has any specificity that would make it a goal to aspire to in itself: how could the law not be synonymous with achieving security? [50]

However, judicial decisions tend to be variable, for reasons that include objective ones, related to the judge’s need to adapt the legal rule to the actual situation at hand [51], and personal ones, related to variable factors affecting the judge himself [52].

The danger of this judicial randomness (alea judiciaire) remains curbed by internal controls that govern judicial work, especially the existence of governing bodies whose work is based on collective deliberation that prevents the monopoly of opinion and decision, in addition to the formation of jurisprudential trends that unify the interpretation of texts and the setting of standards by the supreme courts. For further guarantees, technological optimists are counting on artificial intelligence to undertake the aforementioned unification function to instill reassurance in the souls of those who believe in the necessity of justice that provides a unified judicial solution, which falls under the scope of credibility based on equality between litigants and certainty.

Proponents of employing these technologies in judicial work praise their ability to “provide access to justice for all and equality before the law, and the stability, harmony and consistency of jurisprudence” [53], in addition to ensuring “legal security, legal expectation and confidence in the judicial system in general” [54], guaranteeing a more logical judicial decision and, ultimately, neutral and fair justice. But before accepting the encouragement of this adoption, this technical fever must be approached with caution and care.

To understand the expected benefits of artificial intelligence systems, it is necessary to relate the nature of the judge's work in issuing a judicial decision to the nature of the tasks that artificial intelligence technologies can be called upon to perform in support of him, based on the criterion of how far the outcome of the relationship between the facts presented and the rules applicable to them can be predicted. The judge is obliged to tailor his decision to the facts of the case, which requires that every judicial ruling be reasoned [55], ensuring that the facts of the case have been examined, scrutinized, and taken into account in all their details, in order to resolve the dispute in accordance with the legal rules applicable to it.

In the stage of preparing the judicial decision, and in cases that require the judge to process a large amount of information, study various legal materials, and analyze numerous commentaries on previous judicial decisions, automated unified nomenclature classification systems can be useful [56].

In addition, techniques based on storing and retrieving information can provide the judge with information structured so as to be applicable to the case at hand, as does the eDiscovery technology used in the United States and Britain. By supplying the judge with organized and useful information, artificial intelligence can also help him build logical reasoning and reach conclusions, and thus provide guidance or advice. Hence its importance in relieving an overburdened judge by automating some of his tasks. The role of artificial intelligence centers on matters that are relatively easy to handle, even if time-consuming (which is itself an aspect of their complexity), and on matters governed by pattern and repetition. The solution proposed by artificial intelligence imposes itself most easily in such cases, where the judge's discretionary power is diminished, since they rest on clear grounds and settled solutions.
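The retrieval task described above can be illustrated with a minimal sketch. This is not the eDiscovery technology or any system cited in this article; the case summaries and function names are invented for illustration, and real systems rely on far richer language models than the crude word-overlap similarity used here:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets: a crude stand-in
    for the semantic matching real retrieval systems perform."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def rank_precedents(query: str, precedents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k stored decisions most similar to the new case."""
    return sorted(precedents, key=lambda p: similarity(query, p), reverse=True)[:top_k]

# Hypothetical, simplified case summaries (illustrative only).
precedents = [
    "tenant failed to pay rent landlord seeks eviction",
    "driver negligence caused collision damages awarded",
    "landlord withheld deposit tenant claims refund",
]
query = "landlord seeks eviction after tenant stopped paying rent"
print(rank_precedents(query, precedents))  # most similar precedents first
```

The point of the sketch is the division of labor the text describes: the machine surfaces structurally similar material quickly, while assessing whether the resemblance is legally meaningful remains the judge's task.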

As for the process of building the judicial decision itself, before examining the benefits that introducing artificial intelligence into judicial work can provide, it is necessary to consider the nature of the work that leads the judge to issue the decision. The judge decides the dispute according to the legal rules that apply to it [57]. In this context, he initially deals with information, processing the events presented to him in the historical sense in order to extract from them the relevant facts and the positions of the parties to the dispute. In this respect, his work resembles the way an algorithm works, which likewise deals with information and data. It is in the outputs that the striking differences between the judge's task and the algorithm's work emerge. The scholar Carbonnier drew attention to this reality, categorically denying that the judge is a machine, because he works on the case with knowledge and logic, but also with sensitivity and intuition [58]; recall, moreover, that the artificial intelligence systems in circulation today are of the weak type (narrow AI), not the strong, aware and conscious type (general AI). The judge sets the legal framework of the case and applies the legal rules in light of ethical and humane considerations [59], deploying legal reasoning until he reaches a solution. The judge does not merely decide the case: there is an underlying, implicit intellectual process that runs its course and cannot be reduced to logical reasoning alone. At the same time, this process constitutes a guarantee for the litigant, since the factual and legal elements of his case are examined and treated in a unique, personal manner.
The judicial ruling involves an interpretation of the law, that is, knowing “how the judge obtains the individual, personal rule that he will apply, based on applying the abstract rule in its general form to a specific material fact” [60]. The robot-judge, by contrast, is a judge of pure logic, not a judge of experience and expertise. Although capable of anticipating the solution as a result, it is unable [61] to interpret it or to reveal and express the cognitive process that led to the solution as a path, far from the “open texture” of legal logic [62], which, though deductive, is not purely deductive, but also inductive and pragmatic.

From here, the dividing line between the absolute welcome of the use of artificial intelligence in the justice system and the cautious approach becomes clear.

B. The impact of artificial intelligence on the process of producing the judicial decision

The use of artificial intelligence in producing the judicial decision gradually changes its logic [63]. The judge no longer truly adjudicates, but goes directly to the most frequent solution, in what is known today as solutionism, and that would lead us towards the standardization of justice [64] and the similarity of judicial rulings, based on mathematical calculations rather than on the individualization of each case according to the legal and factual particulars the judge takes into account. Under the “pressure” of numbers and percentages, there is a danger that the judge will adopt the “most common” decision, which could lead us in practice to mass trials (procès de masse). The judicial function would then be governed by the following duality: either the judge follows the machine's suggestion, out of conviction, imitation or, most dangerously of all, sheer trust in it; or he contradicts the machine's suggestion, which has by then become, in the eyes of the public, the natural solution [65].
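The “most common decision” dynamic described here can be sketched in a few lines, assuming hypothetical outcome labels for past cases an engine has deemed similar (none of this corresponds to any actual system named in the article). The point is that the recommendation is a bare frequency count, blind to the individual circumstances of the new case:

```python
from collections import Counter

def recommend_outcome(similar_case_outcomes: list[str]) -> tuple[str, float]:
    """Return the modal outcome among similar past cases and its share:
    a frequency count, not an individualized assessment of the dispute."""
    counts = Counter(similar_case_outcomes)
    outcome, n = counts.most_common(1)[0]
    return outcome, n / len(similar_case_outcomes)

# Hypothetical outcomes of past cases the engine judged "similar".
past = ["dismiss", "dismiss", "award_damages", "dismiss", "award_damages"]
suggestion, share = recommend_outcome(past)
print(suggestion, share)  # the judge sees "dismiss" backed by 60% of precedents
```

Once such a percentage is on the judge's desk, the duality the text describes follows: ratifying the modal outcome is effortless, while departing from it demands justification against the weight of the numbers.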

Resorting to artificial intelligence in producing judicial decisions risks creating a single, exclusive interpretation, crushing the judge's freedom of judgment under the weight of the automated result, and erasing his creative role; it threatens an “interpretive winter” of stagnation at a fixed point, through a logic of repeated reproduction. The result generated by artificial intelligence systems turns, over time, into a new rule, since the rule resulting from application (la norme d’application) becomes a substitute for the legal rule originally laid down by the text [66]. These results point to another problem, particular to countries of the Roman-Germanic legal tradition, concerning the power and value that interpretation acquires. Is the systematic application of previous solutions, imposed in reality by adopting the result announced by artificial intelligence and reinforced by the weight of numbers, likely to confer on the established interpretation the same value that precedents of interpretation acquire in common law systems?

This shift in the understanding of the nature of a judicial decision brought about by adopting artificial intelligence systems raises profound questions. What becomes of the process of ijtihad (judicial interpretation) in general? What is the value of the “legal rule” (of applied origin) extracted through artificial intelligence from a group of similar decisions on a specific subject? What are, in principle, similar decisions with similar standards? Does this extracted rule supplement the legal text in its general form, thereby becoming one of its sources?

And if the judge decides to apply a legal rule different from that produced by the outputs of artificial intelligence, will he be required to give additional justification for departing from this trend, and to explain his departure from a “norm” established by a digital tool that may be biased, or designed without external oversight by private parties? What if the judge wishes to deviate from a jurisprudential line he himself previously followed during his career, which artificial intelligence systems have likewise extracted as a “normative” rule from his own earlier decisions? [68] Will such a departure become grounds for recusal requests against judges on the basis of legitimate suspicion? These are open questions that we should take the time to ask today: how artificial intelligence systems should be introduced into the justice system, where this choice might lead us, and what pitfalls we should avoid.

3. Conclusion

The famous physicist Stephen Hawking considered that “the success in creating artificial intelligence will be the greatest event in human history, and unfortunately it may also be the last, unless we learn how to avoid the dangers” [69].

In most of the decisions made around the world today, an algorithm is involved somewhere: from choosing the movie we will watch, or the people we will add to our list of friends, to our electronic purchases, the candidate we will vote for in elections, the person who will be searched, who will be released after arrest, and the length of the sentence a convict will serve.

The age of artificial intelligence is undeniably imminent in the justice sector, but the questions far outnumber the facts. Instead of developing tools to save us from the worst in ourselves, we face a complex and frightening problem: most of the humans who created AI do not know what it will learn next. While developers generally understand how to build AI systems, how those systems process information and arrive at their conclusions remains largely opaque.

We had more than forty years to adapt to the information age; we will not have the same time to adapt to the age of intelligent machines. If we have not yet managed to agree on the legislation, rules, and values required to guide and control it, then let us at least take the initiative to build the capabilities that will preserve our role: to review our past responses to historical clashes between technology and ethics, to be a competing voice asking the big questions, and to widen the circle of discussion and decision-making to include us.

All this is with the aim of providing sufficient guarantees for these systems to operate under the umbrella of basic human rights, the rule of law and the independence of the judiciary, and ensuring that they are prepared, developed and used in a way that makes their “trustworthiness” the ultimate result that must be established by putting these principles into practice [70].

References

  1. The fifth industrial revolution is based on the idea of integrating robots with human performance. But the “Fourth Industrial Revolution” is the name given by the World Economic Forum in Davos, Switzerland, in 2016 to what was supposed to be the last link in a series of industrial revolutions, based on creative digitization based on a combination of technical breakthroughs interacting symbiotically through innovative algorithms.
  2. Coleman, F. (2020). A human algorithm: How Artificial Intelligence is redefining who we are. Catapult.
  3. UNESCO, C. (2021). Recommendation on the ethics of artificial intelligence.
  4. For more on these benefits, see R. Darghawi, Artificial Intelligence as an Alternative Solution to Combat Future Epidemic Shocks (Corona Virus as a Model), Algerian Journal of Legal and Political Sciences, Volume 58, Issue 2, 2021, pp. 496 and beyond.
  5. For more information: D. Owana, Fondements logiques de l’intelligence artificielle, Copyright Dieu-donné OWANA Paris, 2015, p. 289

  6. Ertel, W. (2018). Introduction to artificial intelligence. Springer.
  7. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business horizons, 62(1), 15-25.
  8. K. Al-Bazouni, The Impact of Artificial Intelligence on the Theory of Right, Modern Book Foundation, Beirut, 2023, p. 27.
  9. Algorithms mean a set of steps that fully describe how to implement an operation. The name of the algorithm varies from one field to another: in cooking it is the recipe, in music the score, and in computer science the program performs the same task. The French High Council for Audiovisual Communication has defined it as a series of clearly defined operations or instructions allowing a problem to be solved or a specific result to be obtained, available on the Council’s website: www.csa.fr/Informer/Toutes-les-actualites/Actualites/Terminologies-autour-des-algorithmes-de-recommandation-des-plateformes-de-contenus-numeriques

  10. Karuppannan, I. (2018). Malaysia and Lebanon, 1963-2009: Small State Bilateral Relations (Doctoral dissertation, University of Malaya (Malaysia)).
  11. Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: A guidance for policymakers. Unesco Publishing.
  12. L’article 3 de la proposition du règlement européen sur l’intelligence artificielle, publiée le 21 avril 2021 (AI Act) définit le «système d’intelligence artificielle» (système d’IA) comme étant « un logiciel qui est développé au moyen d’une ou plusieurs des techniques et approches énumérées à l’annexe I et qui peut, pour un ensemble donné d’objectifs définis par l’homme, générer des résultats tels que des contenus, des prédictions, des recommandations ou des décisions influençant les environnements avec lesquels il interagit ».
  13. Council of Europe, Committee on Artificial intelligence (CAI), Revised Zero Draft (Framework) Convention on Artificial intelligence, Human Rights, Democracy and the Rule of Law, 6 janv.2023: www.coe.int/cai
  14. EUROPEIA, U. (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103-INL). Estrasburgo: Parlamento Europeu.
  15. Barthe, E. (2019). Les outils de l’intelligence artificielle pour le droit français. La Semaine Juridique, (14), 665-674.

  16. It is noteworthy that all definitions aim to model human intelligence, meaning that they refer to the activity of human intelligence as a reference for creating artificial intelligence, assuming that it is related to incorporating the intelligence that humans have in some machines. However, experience in the field of artificial intelligence has shown that the set of methods that humans follow in using their intelligence to solve problems are certainly not the only methods available, nor are they always the best.
  17. Algorithmes de recommandations
  18. World Health Organization. (2016). World Health Statistics 2016 [OP]: Monitoring Health for the Sustainable Development Goals (SDGs). World Health Organization.
  19. It is worth noting that Estonia is one of the highest-ranked countries in the United Nations’ E-Government and Development Index for 2018 and 2020. UN’s E-Government Development Index.

  20. E. Niler, Can AI be a fair judge in Court? Estonia thinks so, Wired, 25 mars 2019.
  21. Serverin, E., Perez, B. M., & Cottin, M. (2021). La Nomenclature des affaires orientées dans les chambres civiles de la Cour de cassation (NAO): l’élaboration collective d’un outil de connaissance et d’action (Doctoral dissertation, Cour de cassation).
  22. Eléments permettant la réidentification des parties.
  23. Regarding all of these experiences, see Judge Jean-Michel Sommer’s intervention during the symposium held by the French Court of Cassation on April 21, 2022 entitled “Intelligence artificielle et la fonction de juger,” available on the court’s website www.courdecassation.fr
  24. Le nouveau traitement des pourvois à la Cour de cassation, Dalloz Actualité, 26 octobre 2021 : « Pour les dossiers enregistrés après le 1er septembre 2020, les pourvois empruntent l’un des trois circuits suivants : le circuit court qui permet de traiter rapidement les pourvois dont la solution juridique s’impose ; le circuit approfondi, qui concerne les affaires posant une question de droit nouvelle ayant un impact important pour les juridictions du fond ou susceptibles d’entraîner un revirement de jurisprudence et le circuit intermédiaire qui reçoit toutes les affaires ne relevant ni du circuit court ni du circuit approfondi ».
  25. Tell Me More staff, Light and Dark: The Racial Biases that Remain in Photography, NPR, April 16, 2014, www.npr.org.
  26. Buolamwini, J. (2018). When the robot doesn’t see dark skin. The New York Times, 21, 2018.
  27. Could AI robots develop prejudice on their own?, Cardiff University, Sept.6, 2018, www.sciencedaily.com
  28. Police departments in Chicago, New Orleans and the United Kingdom use artificial intelligence to create lists of potential criminals, employing predictive policing to anticipate future crimes on the basis of criteria fed into those machines, such as living in a poor neighborhood or having a distant connection to a criminal. It has also emerged that facial recognition devices fail to correctly identify people with dark skin. A. Breland, How white engineers built racist code and why it’s dangerous for black people, Guardian, Dec. 4, 2017, www.theguardian.com
  29. La justice prédictive est définie comme étant « un ensemble d’instruments développés grâce à l’analyse d’une grande masse de données de justice qui proposent, notamment à partir d’un calcul de probabilités, de prévoir autant qu’il est possible l’issue du litige » : B. Donder, Justice prédictive : la fin de l’aléa judiciaire ?, Dalloz, 2017, p.532.
  30. It has already been used by some appeal courts in France: Rapport du Sénat, Mission d’information sur le redressement de la justice, 4 avr. 2017, cité par Y. Gaudemet, La justice à l’heure des algorithmes, Revue du droit public, n° 3, 2018, p. 655.
  31. J. Angwin, J. Larson, Surya Mattu, and Lauren Kirchner, Machine Bias, ProPublica, May 23, 2016, www.propublica.org
  32. Correctional offender management profiling alternative sanctions.
  33. Larson, J., Angwin, J., & Parris Jr, T. (2016). Breaking the black box: How machines learn to be racist. ProPublica.
  34. In response to this reality, Amnesty International, AccessNow and a number of non-governmental organizations drafted the Toronto Declaration calling for the protection of the right to equality and non-discrimination in systems based on machine learning: Déclaration de Toronto: Protecting the right to equality and non-discrimination in systems based on automatic learning, May 16, 2018
  35. Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, May 23.
  36. See the statement of Attorney General Eric Holder, cité par A. Fradin, États-Unis : un algorithme qui prédit les récidives lèse les Noirs, Rue 89, 24 mai 2016, www.rue89.nouvelobs.com
  37. Vaccaro, M. A. (2019). Algorithms in human decision-making: A case study with the COMPAS risk assessment software (Doctoral dissertation).
  38. State v. Loomis – 2016 WI 68, 371 Wis. 2d 235, 881 N.W.2d 749.
  39. First, the “proprietary nature of COMPAS” prevents the disclosure of how risk scores are calculated; second, COMPAS scores are unable to identify specific high-risk individuals because these scores rely on group data; third, although COMPAS relies on a national data sample, there has been “no cross-validation study for a Wisconsin population”; fourth, studies “have raised questions about whether [COMPAS scores] disproportionately classify minority offenders as having a higher risk of recidivism”; and fifth, COMPAS was developed specifically to assist the Department of Corrections in making post-sentencing determinations.
  40. HLR (Harvard Law Review). (2017). State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing. Harvard Law Review, 130, 1530-1537.
  41. “The right not to be subject to a decision based solely on automated processing”: C. of the E. Union, Regulation of the European Parliament and of the Council on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016.
  42. AI Principles (2017). Future of Life Institute. URL: futureoflife.org/openletter/ai-principles (accessed 11.04.2023).
  43. For the definition: Qu’est-ce qu’une legal tech ?, Dalloz Actualités, 29 sept. 2017 : « Ces legaltechs ont vu le jour aux États-Unis avec l’apparition de Rocketlawyer et Legalzoom qui sont venues bouleverser les pratiques traditionnelles en faisant usage de la technologie et de logiciels performants pour offrir une large palette de services juridiques aux internautes grâce à des algorithmes de génération documentaire ».
  44. Lawlor, R. C. (1963). What computers can do: Analysis and prediction of judicial decisions. American Bar Association Journal, 337-344.
  45. Segal, J. A. (1984). Predicting Supreme Court cases probabilistically: The search and seizure cases, 1962-1981. American Political Science Review, 78(4), 891-900.
  46. Nagel, S. S. (1963). Applying correlation analysis to case prediction. Tex. L. Rev., 42, 1006.
  47. Barraud, B. (2017). Un algorithme capable de prédire les décisions des juges: vers une robotisation de la justice?. Les Cahiers de la justice, 1(1), 121-139.
  48. It is worth noting that Predictice and Case Law Analytics, which operate with artificial intelligence, are the most widely used systems in the process of producing judicial decisions. Predictice includes a natural language search engine, filters, suggestions for solutions similar to the case at hand, a statistical analysis of the case to calculate the amount of potential compensation, and guidance on the elements of fact and law most influential in previous jurisprudence: see E. Barthe, Les outils de l’intelligence artificielle pour le droit français, JCP G, n° 14, 2019, p. 671. It is worth noting that the use of this system by the French courts of appeal in Douai and Rennes yielded unfavorable results: X. Ronsin, “This logic of predictive justice does not bring us added value”, interview by Soazig Le Nevé, November 27, 2017, cited by D. Owana, Mythes et réalités de l’intelligence artificielle et de la justice prédictive, Village-Justice.com. On Case Law Analytics: P. Allemand, Case Law Analytics : les mathématiques au service du droit, Carrières-juridiques.com, juin 2018.
  49. Phénomène d’uniformisation.
  50. « La formule sonne en effet comme une sorte de redondance, tant il paraît évident qu’un droit qui n’assurerait pas la sécurité des relations qu’il régit cesserait d’en être un. Imagine-t-on un droit qui organiserait l’insécurité, ou même qui la rendrait possible ? » : J. Boulouis, « Quelques observations à propos de la sécurité juridique », in Du droit international au droit de l’intégration. Liber amicorum Pierre Pescatore, Nomos Verlag, 1987, p. 53, cité par J.-G. Huglo, Dossier : Le principe de sécurité juridique, Cah. Cons. const. 2001, n° 11.
  51. This topic falls within the framework of the principle that prohibits the judge from formulating his rulings in the form of regulations (Article 3 of the Lebanese Code of Civil Procedure, corresponding to Article 5 of the French Civil Code), in addition to the relative effect of rulings, as Article 303 of the Lebanese Code of Civil Procedure states: “Final rulings are binding on the rights they decide (…). However, these rulings do not have this binding force except in a dispute that arose between the same parties without changing their characteristics and dealing with the same subject and cause”, corresponding in meaning to Article 1355 of the French Civil Code.
  52. On the variables that govern the judge’s decision: -Doss. Des juges sous influence, Cah. just. 2015. 501 s
  53. J.-M. Sauvé, in Ordre des avocats au Conseil d’Etat et à la Cour de cassation, La justice prédictive, Actes du colloque organisé par l’Ordre des avocats au Conseil d’État et à la Cour de cassation à l’occasion de son bicentenaire, 2018, Dalloz.
  54. La sécurité juridique et la prévisibilité du droit : D. Reiling, Quelle place de l’intelligence artificielle dans le processus de décision du juge, Les cahiers de la justice, éd. Dalloz, 2019 (2), pp.221-228.
  55. Pécaut-Rivolier, L. Regards croisés d’une juriste et d’un mathématicien. Ce dossier, proposé par Laurence Pécaut-Rivolier, conseillère à la Cour de cassation, et Stéphane Robin, directeur de recherche à l’INRA, a été séparé en trois épisodes.
  56. Les systèmes de nomenclature uniforme automatises comme ECLI ( European Case Law Identifier ).
  57. As stated in the literal text of Articles 369 and 12 of the Code of Civil Procedure in Lebanon and France, respectively.
  58. Carbonnier, J. (2004). Droit civil, vol. I : Introduction. Les personnes. La famille, les enfants, le couple. Paris: Quadrige/PUF.
  59. Karl Llewellyn, one of the most prominent proponents of the American realist school, pointed to a range of intangible considerations that enter into judicial decision-making: the law as an ideology and a body of people who talk and think about it, much of it non-verbal and implicit: My Philosophy of Law, Boston Law Co., 1941, p. 183; C. Perelman, Le raisonnement juridique, Les études philosophiques, n° 2, 1965, p. 140.
  60. On the meaning of the interpretation of the law by the judge: H. Kelsen, Théorie pure du droit, trad. H. Thévenaz, Neuchâtel, éd. La Baconnière, 2e éd. revue et mise à jour, 1988, p. 140.
  61. Bourcier, D. (1995). La décision artificielle. FeniXX.
  62. Translation of the phrase “texture ouverte du droit”: H.L.A. Hart, Le concept de droit, trad. M. van de Kerchove, Publications des facultés universitaires Saint-Louis
  63. CEPEJ, Charte européenne de l’utilisation de l’intelligence artificielle dans les systèmes judiciaires et leur environnement, 3-4 dec.2018
  64. Garapon, A. (2017). Les enjeux de la justice prédictive. La semaine juridique, (1), 47-52.
  65. M. Guyomar, Le point de vue du juge, in Ordre des avocats au Conseil d’État …, La justice prédictive, opt.cit., p.99
  66. Garapon, A. (2017). Les enjeux de la justice prédictive. La semaine juridique, (1), 47-52.
  67. For example, in Spain, where the VioGén artificial intelligence program is now used in criminal proceedings in cases of domestic violence, based on the recidivism rate predicted by the system, judges have begun to rely on these rates systematically and are “comfortable” using them: if their decision proves wrong, they can attribute the error to the result suggested by the machine, and they fear the public reaction should they disregard the machine’s suggestion and the offender later assault his wife again. In this regard, see the intervention of Professor Gascón Inchausti of the University of Madrid during the symposium held by the French Court of Cassation on April 21, 2022 entitled “Artificial intelligence and the function of judging”, available on the court’s website: www.courdecassation.fr
  68. Note that the analysis of data and information in order to predict a judge’s action or decision (known as profilage du juge) has become a crime in France under the law of March 23, 2019. L’article L. 111-13 du Code de l’Organisation judiciaire dispose que : « Les données d’identité des magistrats et des membres du greffe ne peuvent faire l’objet d’une réutilisation ayant pour objet ou pour effet d’évaluer, d’analyser, de comparer ou de prédire leurs pratiques professionnelles réelles ou supposées. La violation de cette interdiction est punie des peines prévues aux articles 226-18,226-24 et 226-31 du code pénal, sans préjudice des mesures et sanctions prévues par la loi n° 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés ».
  69. Hawking, S. (2014). “Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?”, The Independent, 1 May 2014, sec. News.
  70. UNESCO, D. (2021). Preliminary report on the first draft of the recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.

Citation

Murtada Abdalla kheiri. Protecting Judicial Decisions in the Age of Artificial Intelligence A New Challenge to the Rule of Law [J], Archives Des Sciences, Volume 74, Issue 6, 2024. DOI: https://doi.org/10.62227/as/74604.