Speakers:
Nelson Nkari – Speaker
Moses Kahoro – Speaker
Quency Otieno – Speaker
Karen Rono – Moderator
Key Resources Referenced
Books
- Cloud Computing Law – Christopher Millard
Cases
- Laser Insurance Brokers Limited v Nursing Council of Kenya (Application No. 72 of 2025 before the Public Procurement Disputes Tribunal) – An example of how dependence on AI can be detrimental to a party's case and to counsel's duty to the court; see para. 132
- State v Loomis – A Wisconsin Supreme Court case on the use of Algorithmic Risk Assessments in Sentencing.
Papers
- Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations – Margot E. Kaminski & Gianclaudio Malgieri
Regulation & Policies
- African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention)
- CIArb AI Guideline
- OECD AI Principles
- Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence
- Universal Guidelines for AI
Introduction
This webinar is part of the State of ADR Webinar Series, focused on the future of ADR in light of developments in AI and technology. The moderator, Ms. Karen Rono, guided the speakers through rounds of conversation on the current disruptions in ADR, gaps in its advancement, and its future potential.
Round 1: Setting the stage.
Q1: How is AI currently enhancing or disrupting arbitration?
Nelson began by defining AI as the ability of computer systems to mimic human intelligence and reasoning, drawing on large data sets to produce the required output. He explained that AI is used largely for drafting, analysis, and other forms of support, and is preferred for its ability to make data-driven decisions. He gave examples of tools such as Luminance, which assists with review and analysis by extracting information from documents; platforms like SmartODR and Modria, which support online dispute resolution; and Harvey AI, which assists with document drafting and legal reasoning.
He cautioned that these benefits come with challenges, including questions over AI's neutrality, which depends largely on how its algorithms are trained. The benefits and disadvantages, he said, must be weighed before AI is incorporated.

Q2: How are states and institutions globally approaching the use of AI in justice systems?
Quency began by stating that the global approach is currently fragmented and dominated by sectoral players. He then highlighted the divide between intellectual property and AI groups, fuelled by fears that AI would eventually erode freedoms such as the capacity to make autonomous decisions. He acknowledged the role AI has played in reducing turnaround times, especially in corporations and government, but highlighted the risk posed by our inability to explain how AI learns, citing a recent concern that the AI model Claude was slowly and silently overwriting its own code. In conclusion, he briefly highlighted studies suggesting that increased dependence on AI reduces critical-thinking capacity.

Q3: In the shift to digital arbitration, what are the top data governance or privacy pitfalls arbitrators are overlooking?
Moses Kahoro introduced the data protection dimension, noting that even individuals using AI can be considered data controllers and must therefore have appropriate security measures in place. He referenced the recent CIArb guidelines, which have been essential in clarifying what collected data is for and how it can be safeguarded. He then explained that where data is submitted to an AI model that is continuously being trained, the information originally fed in will affect the results of similar cases presented in future. Moses stressed the need for risk assessments before using an AI model, together with an analysis of the different regulations applicable to the situation at hand.
Round 2: Guardrails – Ethics, Oversight & Responsible Use
Q1: What considerations should arbitrators or parties keep in mind when selecting AI tools, especially around transparency and reliability?
Nelson began by stressing the need for transparency in the reasoning behind any decision rendered by decision-supporting AI. He highlighted the need for proper documentation of training data sources and for the use of interpretable models with an explainability layer. He then advised participants to select purpose-built tools with domain-specific testing, optimized for specific tasks. A good way to assess whether a tool is fit for purpose, he said, is to test it against a known outcome, to confirm that it is reliable and accurate for the intended use.
He also mentioned the need for forensic verifiability of outcomes by third parties and for a proper chain of custody. He then turned to algorithmic justice and bias mitigation, explaining that AI models trained on historical data are likely to exhibit inherent bias because human bias finds its way into that data. As an example, he cited predictive policing systems in the Global North, which are biased because of discriminatory historical policing practices. Unchecked use of AI in ADR, he warned, could amplify and replicate the biases in training data, despite the impact such decisions have on human lives. He called for human oversight of AI processes and solid fairness thresholds to interrogate whether AI systems are biased and what can be done to resolve such issues. In conclusion, he highlighted the importance of choosing machine learning algorithms based on their suitability for different tasks.
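Nelson's suggestion of testing a tool against a known outcome can be pictured as a small evaluation harness. The sketch below is purely illustrative: `keyword_tool` and the labelled test cases are hypothetical stand-ins for whatever AI tool and benchmark set a practitioner would actually use.

```python
# Minimal sketch: benchmark an AI tool against cases with known outcomes.

def evaluate(ai_tool, test_set, threshold=0.9):
    """Run the tool on labelled cases and report accuracy against a threshold."""
    correct = sum(1 for doc, expected in test_set if ai_tool(doc) == expected)
    accuracy = correct / len(test_set)
    return accuracy, accuracy >= threshold

# Hypothetical stand-in for the tool under evaluation
def keyword_tool(doc):
    return "arbitration" if "arbitration clause" in doc else "other"

# Hypothetical labelled examples with known outcomes
known_cases = [
    ("This contract contains an arbitration clause.", "arbitration"),
    ("Payment is due within 30 days.", "other"),
]

accuracy, reliable = evaluate(keyword_tool, known_cases)
print(f"accuracy={accuracy:.2f}, meets threshold={reliable}")
```

The threshold is a placeholder for whatever reliability standard the intended use demands; a tool that falls below it would fail Nelson's test of being "good enough" for the task.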
Q2: What should ethical AI look like in arbitration? Are there specific red lines we must draw in its use?
Quency stated that the easiest way to approach ethics in AI is to compare and align it with other laws. Ethical usage, he said, starts at the onboarding stage, where vendors (advocates, arbitrators, arbitral institutions offering services, and so on) should outline in their letters of engagement, in the manner of privacy policies, that they will rely on AI for certain tasks, to aid accountability. He referenced the OECD AI Principles, which help clarify what different terms mean, and stated that individuals should have the right to refuse to have their data processed by an autonomous AI system. The norm, he continued, must follow that of data protection: express consent is required, and in its absence one must not proceed. He also stressed the need for glass-box, rather than black-box, mechanisms.
He noted, however, the limitations of Data Protection Impact Assessments (DPIAs) in AI, since AI can process data beyond the scope of a DPIA. Referencing the Wisconsin Supreme Court case of State v Loomis in the USA, he explained that the layers of information requiring consideration are too many to cover with an individual DPIA. He then cautioned against excessive regulation, which can become a bottleneck for industry growth, as has been the case in Canada.
He touched on liability where harm occurs as a result of AI usage, stressing that the responsibility falls largely on the vendor, and encouraged a keen understanding of any AI product rather than blanket adoption. In conclusion, he stated that ethics in AI remains a broad topic that will only become clearer as more clients and organizations are onboarded.
Karen Rono countered Quency’s point on liability, arguing that there ought to be collective responsibility on both the vendors and users where harm occurs. She then shifted the conversation to safeguards.

Q3: How do we balance efficiency with the danger of overreliance? What safeguards or ethical practices should be non-negotiable?
Moses Kahoro first referenced the Silicon Valley Arbitration & Mediation Center Guidelines and the CIArb guidelines. He stated that AI models are beneficial but stressed the importance of avoiding over-reliance. He shared a personal experience before the Public Procurement Disputes Tribunal in a sensitive matter (Laser Insurance Brokers Limited v Nursing Council of Kenya), where opposing counsel used AI to draft documents and cited non-existent cases produced by AI hallucination. He reminded participants, especially advocates, that their duty is to the court, and that the absence of proper checks by humans trained in the law is dangerous and not in the best interests of the parties they represent. Moses predicted a change in the scope of professional indemnity in the near future due to the rise of AI. He also highlighted the difficulty of curating insurance in the cybersecurity and AI spaces, which often leads to liability being limited to a certain period even though the damage may extend beyond it.
In explaining the reason behind this common limitation, he referenced Prof. Millard's book Cloud Computing Law, from which he drew the inference that it is far easier to draft a standard contract for millions of people than a bespoke contract for an individual; unfortunately, this position constrains the other party to the contract. In conclusion, he advised parties to maintain human supervision as a safeguard.
Round 3: Protection: Confidentiality, Admissibility & Compliance
Q1: Should evidence generated or processed by AI be treated differently in terms of admissibility? What challenges arise?
Nelson emphasized that the most important thing in dealing with such evidence is to have avenues for authentication and verification, to ensure nothing is subtly altered and to help adjudicators decide whether to admit the evidence. Referencing the black-box problem, he urged participants to avoid systems whose outputs cannot be explained or verified, since these may violate a party's right to interrogate and cross-examine. He added that biased evidence-interpreting AI systems cannot be deemed fair, and introduced the idea of transparent chains of custody with sufficient audit trails to explain data transformations and maintain integrity.
He shared that the challenges in this field are largely related to governance and the absence of legislation to regulate the fast-growing technology. He advised that practice guidelines are a good way to help in the interim. While cautioning on the need to balance innovation and governance, he recommended that we begin with policy, before solid legislation. Lastly, Nelson stressed the need for disclosure obligations where evidence has been generated or passed through AI, and called for explainability standards and chain of custody protocols.
Q2: How should local and global regulatory frameworks evolve to support fair and transparent AI use in arbitration?
Quency first referenced the Silicon Valley Arbitration & Mediation Center guidelines, which provide boundaries within which to operate in dispute resolution. He then explored the importance of disclosure where an arbitrator uses AI in the process, since transparency may be raised as a ground for challenging an arbitral award. He noted that sectoral guidance notes would be a good step towards supporting the fair use and development of AI, but warned that each case is different and should be treated as such. He invited participants to consider what the outcome would be if an arbitral award turned out to be wrong.
He raised the question of whether policy or strategy should come first in developing regulation, and argued for strategy first, then policy, then regional agreements, before each state adopts its own national position on AI. This strategy-first approach, he said, helps avoid overregulation, as has been witnessed with the EU AI Act, by ensuring that the policies that follow are aligned to the idealized future.
Q3: How can confidentiality be preserved when data is processed by external or cloud-based AI tools, and what best practices are emerging?
Moses shared that the best way to preserve confidentiality is to encrypt the data in question. He gave examples of case-management portals that encrypt sensitive information before it is sent out. This method, he said, is the most widely used and is yet to find a worthy replacement.
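Moses's point, encrypting data before it ever leaves a case-management system, can be sketched minimally. The XOR one-time-pad scheme below is chosen only to keep the example dependency-free; a real deployment would use a vetted library with an authenticated cipher such as AES-GCM, and the document content here is hypothetical.

```python
# Illustrative-only sketch of encrypting a document before it is sent out.
# A one-time pad via XOR keeps this dependency-free; do NOT use this scheme
# in production, where a vetted library (e.g. AES-GCM) is required.
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with the corresponding key byte."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

decrypt = encrypt  # XOR is its own inverse

document = b"Confidential arbitration submission"          # hypothetical content
key = secrets.token_bytes(len(document))                    # shared out-of-band

ciphertext = encrypt(document, key)    # what an external tool would see
restored = decrypt(ciphertext, key)    # what the intended recipient recovers
print(restored == document)
```

The design point is simply that the external or cloud-based tool only ever sees `ciphertext`; the key never travels with the data.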

Round 4: Looking Ahead: Localization, Innovation & Human Judgment
Q1: What innovation or practice would you like to see institutionalized in arbitration rules or infrastructure?
Nelson took a threefold approach in answering the question. He first stressed the need for professional bodies to empanel groups of certified AI forensic experts to handle concerns of transparency where they arise. Secondly, he called for a standard protocol for probing algorithmic bias, with weights assigned according to industry standards to ensure sectoral alignment. Lastly, he stressed the need for disclosure and for adjudicators to understand how to weigh the differing uses of AI.
Q2: How do we ensure that AI used in ADR reflects African contexts, values, and legal systems?
Quency shared that every AI system is built on different frameworks, and that the responsibility lies with Africans to weave relevant indigenous knowledge and values into the training of AI so that they are reflected in its output. He also recommended increasing the number of data protection commissioners so that one can deal specifically with cybersecurity.
Q3: What does the ideal digital arbitration platform of the future look like in terms of ethics, security, and usability?
Moses shared that the platform would incorporate privacy by design, making it a core requirement from the developing stage. This would require knowledge of different privacy and data protection laws of multiple jurisdictions that have access to private information, including where servers are hosted.
Noting that AI needs to be beneficial to its users, he stressed the need to upskill and be knowledgeable on different AI tools. He also advised familiarity with relevant regulations such as the Malabo Convention on Cybersecurity to effectively provide solutions for clients. Despite the benefits of AI platforms, he stressed that reasoning and decision writing should remain human because AI is incapable of empathy and a proper analysis of equity.
He went on to share that the best way to train at the workplace today is by using AI tools such as projects on ChatGPT, and advised that law schools shift to training students on how to handle AI systems, because future hiring processes will likely consider such skills and require supporting evidence of them. In conclusion, he noted the challenges faced in areas such as transcription, since most AI systems do not capture local dialects, and commended efforts in countries like Zimbabwe to develop and train AI systems that understand local context and can easily transcribe African languages, calling for the same across the continent.
Nelson stressed the need for upskilling, and Quency added that AI should be used to enhance the arbitration process, not detract from it. He referenced the Universal Guidelines for AI and principles such as 'Do not harm' and 'Do not supersede your master' that should be reinforced in training AI systems. He invited anyone with comments or ideas on legislation that needs amendment as it relates to AI to share them with him for further discussion. He also stressed the need to expand the technology law domain in niche areas such as ICT, Esports, AI, and data protection.
Conclusion
Moses and Quency engaged in a brief debate over whether we should prioritize regulation or capacity building. Quency acknowledged that Kenya's approach is largely risk-based as a result of its colonial history. They agreed, however, that development and innovation matter more, as is being witnessed in countries such as Singapore.
Samantha Masengeli, the Young Members Group (YMG) Vice Chair was invited by Ms. Rono to move a vote of thanks. She appreciated panelists for the entertaining and insightful discussions, noting the importance of understanding the field of AI and technology to enhance our practice. She then ended the meeting.
Written by Samuel Baraka.



