The use of artificial intelligence (AI) in the judicial sphere has ceased to be a theoretical hypothesis and become a tangible reality in Mexico. A recent ruling by the Second Collegiate Civil Court of the Second Circuit, from which isolated thesis II.2o.C.9 K (11a.) derives, establishes for the first time the ethical and methodological parameters for its application in judicial proceedings. The court not only used AI tools in deciding the case but also defined a standard for their responsible use, with a focus on human rights, marking a milestone in the administration of justice.
In complaint proceeding 212/2025, the court reviewed an order issued by a District Court that had set, without sufficient reasoning, a bond of fifty thousand pesos for each property folio to allow the registration of the amparo claim. The issue centered on determining whether the amount had been duly founded and reasoned, and—if not—how it should be set according to objective and verifiable criteria.
The ruling anchors the calculation of the bond in three jurisprudential pillars. First, it requires a “sufficient” bond prior to the preventive registration of the amparo in the Public Registry of Property, for possible damages and losses to third parties. Second, it incorporates a methodology to quantify damages (loss of purchasing power) using the National Consumer Price Index and losses (opportunity cost) using the 28-day TIIE, prorated according to the probable duration of the process. Third, it adopts the criterion guiding the estimation—based on judicial statistics—of the tentative duration of amparo proceedings in both instances.
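In formula terms, the two components the court draws from that case law can be sketched as follows. This is only a reconstruction for the reader's benefit, assuming simple monthly compounding for inflation and straight proration of the TIIE; the ruling itself fixes the exact indices and proration:

```latex
% Hedged reconstruction of the bond components (not the ruling's own notation).
% V   = objective base value of the property (the court used the cadastral value)
% \pi = expected monthly inflation per the National Consumer Price Index (INPC)
% r   = annualized 28-day TIIE rate
% T   = estimated duration of the amparo proceedings, in months
\text{damages} \approx V\left[(1+\pi)^{T}-1\right], \qquad
\text{losses} \approx V \cdot r \cdot \tfrac{T}{12}, \qquad
\text{bond} \approx \text{damages} + \text{losses}
```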
Building on that framework, the court explains and executes a transparent methodology: it uses the cadastral value of the properties as an objective basis, consults official inflation rates, calculates the expected duration of the trial from indicators of the Federal Judiciary Council, prorates the factors by months, and breaks down the result into damages and losses per lot, with an auditable results table. The range obtained for both lots was approximately between $59,800 and $64,600; weighing that range, the bond was set at $60,000 for reasons of sufficiency, proportionality, and operability.
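A minimal computational sketch of that workflow appears below. The function name and all input figures are illustrative assumptions, not data from the case file; only the resulting range (roughly $59,800 to $64,600) and the $60,000 bond come from the ruling itself.

```python
# Minimal sketch of the court's auditable calculation (illustrative only; the input
# values below are assumptions, not the figures from complaint 212/2025).

def bond_per_lot(cadastral_value: float,
                 monthly_inflation: float,
                 tiie_28_annual: float,
                 months_expected: int) -> dict:
    """Break the bond down into damages (loss of purchasing power, INPC-based)
    and losses (opportunity cost at the 28-day TIIE), prorated by duration."""
    # Damages: erosion of purchasing power over the expected duration of the trial.
    damages = cadastral_value * ((1 + monthly_inflation) ** months_expected - 1)
    # Losses: opportunity cost at the annual TIIE, prorated to months.
    losses = cadastral_value * tiie_28_annual * (months_expected / 12)
    return {
        "damages": round(damages, 2),
        "losses": round(losses, 2),
        "total": round(damages + losses, 2),
    }

# Placeholder run (values chosen only to show the structure of the per-lot breakdown):
if __name__ == "__main__":
    lots = {"lot_1": 250_000.0, "lot_2": 270_000.0}           # assumed cadastral values
    for lot, value in lots.items():
        print(lot, bond_per_lot(value,
                                monthly_inflation=0.004,       # assumed monthly INPC variation
                                tiie_28_annual=0.11,           # assumed 28-day TIIE (annualized)
                                months_expected=24))           # assumed duration, both instances
```

The per-lot breakdown is what makes the result auditable: each component can be traced back to a public index and to the estimated duration of the proceedings.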
The innovative contribution lies in the fact that the court details the how of the calculation and legitimizes AI as an auxiliary for numerical reasoning. The tool does not replace judicial judgment: it merely performs arithmetic operations under predefined normative and jurisprudential parameters, with control, traceability, and verifiability.
Derived from complaint 212/2025, the isolated thesis of the Second Collegiate Civil Court of the Second Circuit establishes the minimum principles that judges must observe to use AI in the judicial sphere, with a human-rights perspective. The standard rests on four pillars:
a) Proportionality and harmlessness: AI may only be used when strictly necessary, without encroaching on legal reasoning.
b) Protection of personal data: Any use must safeguard the confidential information contained in case files.
c) Transparency and explainability: Judges must explain how the tool was used, the data employed, and the methodology applied.
d) Human supervision and decision-making: Technology is an auxiliary, not a substitute; the judicial decision remains human.
The thesis relies on international references such as the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence set up by the European Commission, the European Union's Artificial Intelligence Act, and UNESCO's Recommendation on the Ethics of Artificial Intelligence—and adapts them as best practices for the digital administration of justice. The result is a robust local guideline, compatible with comparative standards, that provides legitimacy and control over the use of AI in judicial settings.
The precedent issued by the Second Collegiate Civil Court of the Second Circuit marks a turning point in the relationship between AI and justice in Mexico. For the first time, a judicial body expressly endorses the use of AI tools in judicial processes, recognizing their potential to improve the efficiency of the system but also the risks involved in applying them in an area as delicate as dispute resolution and the protection of human rights.
The decision reflects the natural tension between innovation and procedural guarantees: AI can streamline calculations and technical tasks, but it also introduces margins of error, biases, or lack of explainability that, in judicial matters, can translate into direct impacts on individuals and companies. Precisely for this reason, the ruling adopts a self-restraint approach, setting minimum principles to ensure ethical and responsible use.
This precedent not only delineates how judges must act; it also anticipates the ethical standard that will be demanded of the private sector. Even in the absence of specific regulation, companies integrating AI into their operations—whether in data management, automated decisions, or litigation—must observe the same criteria of proportionality, transparency, and human supervision that are now expected from the courts. In other words, the ethics of AI in the judicial sphere may anticipate the standard by which, in the near future, its legitimacy will be evaluated in the business context.
A consistent application of judicial precedent can generate positive effects that transcend the specific case. In practical terms, it translates into greater predictability regarding the outcome of analogous disputes and substantial procedural efficiency, by reducing uncertainties and narrowing the margins of discretion in the resolution of recurring issues. This predictability not only facilitates the planning of procedural strategies by the parties but also optimizes the allocation of judicial resources, focusing efforts on genuinely disputed points and avoiding the repetition of already-settled debates.
On that basis, both direct and indirect benefits can be identified, whose effects reinforce each other cumulatively over time.
The progressive integration of AI tools into the judicial ecosystem—always under parameters of transparency, explainability, and human control—enhances the benefits of precedent. AI can help map decision-making patterns, detect inconsistencies, and propose normalization of criteria, facilitating access to standardized and better-structured procedural information. This allows litigants, advisors, and companies to model legal risk more accurately, estimate outcome scenarios, and calculate provisions or contingencies based on verifiable data and consistent methodologies.
In the long run, greater consistency in the application of precedents, supported by decision-analysis systems, can transform legal risk management from a predominantly qualitative and intuitive approach to one that is more quantitative, reproducible, and auditable. This evolution does not eliminate professional judgment or judicial discretion but situates them within a context of clear rules, structured data, and comparable criteria—reinforcing legal certainty without sacrificing justice in the specific case.
The prudence of the Mexican court contrasts with less successful international experiences. In the United States, for example, some local judicial systems incorporated predictive algorithms—such as COMPAS in criminal proceedings—without sufficient human oversight. Years later, various studies showed that the results tended to reproduce racial and socioeconomic biases, undermining the impartiality of judgments. These precedents explain why the Mexican thesis adopts a logic of restraint: AI may assist but not decide. In a country with institutional gaps and heavy caseloads, introducing AI without an ethical framework could amplify inequalities or violate rights rather than correct them.
Although the thesis belongs to the judicial field, its rationale foreshadows a cross-cutting standard of algorithmic governance. Companies that integrate AI into critical processes—automated decision-making, data management, scoring, compliance, and dispute resolution—will find in this precedent an early guide to designing internal policies: proportionality in technological deployment, system explainability, data traceability, and effective human supervision. In the litigation arena, the methodology increases predictability regarding procedural costs associated with guarantees and settlements, and it can facilitate legal risk management by standardizing criteria and strengthening the auditability of decisions.
For the private sector, the precedent is not merely a judicial note: it is an early regulatory signal. Companies can use it as a guide to review or strengthen their own artificial intelligence, litigation, and compliance protocols.
The underlying message is clear: AI has arrived in the judicial system, and its responsible adoption will be a differentiating factor both for public institutions and for the companies that interact with them.
Finally, the ruling of the Second Collegiate Civil Court of the Second Circuit represents much more than a technological advancement: it is an institutional watershed.
The act of adopting, recognizing, and regulating AI within the judicial sphere lays the foundation for keeping pace with an increasingly technological world. By establishing clear rules, the Judiciary inaugurates a model of responsible innovation in which technology is subject to law—and not the other way around. This precedent not only grants legitimacy to the use of these tools but also introduces an ethical standard that transcends the courts.
Within this framework, the private sector faces both a challenge and a significant opportunity. The incorporation of AI into judicial processes will not only strengthen efficiency and transparency within the legal system but also set a benchmark for companies, which must adapt and align themselves with these emerging standards to operate ethically and transparently. Thus, the responsible regulation of technology offers a model that, if properly implemented, can be key to developing a fairer and more efficient business environment.