AI Safety Policy

1. Introduction

CourtCorrect commits to ensuring the safety, reliability, and ethical integrity of our Artificial Intelligence (AI) systems. This policy articulates our framework for the development, deployment, and governance of AI technologies, upholding standards that mitigate risks such as bias, discrimination, and other concerns noted by regulators such as the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA).

2. Core Principles

Rigorous Evaluation: AI models are thoroughly reviewed by experts to verify their accuracy and reliability, with specific measures to address the probabilistic nature and 'hallucination' tendencies of Large Language Models. Precise statistics and methodologies are available to customers on request, in a format that does not undermine our intellectual property.


Transparency and Accountability: Decisions influenced by AI systems will be documented and made comprehensible for auditing and review, ensuring adherence to FCA guidance on transparency and consumer protection.


Ethical and Legal Compliance: Our AI systems shall be regularly assessed for biases and discrimination, ensuring compliance with UK laws and FCA guidance on AI usage.

3. Framework and Governance

3.1 Our Large Language Models


Our Large Language Models are used primarily as reasoning tools, rather than as retrieval systems that return binary answers in the way a search engine such as Google does. They review all of the information we provide, far more than any one person could analyse, and reach a reasoned conclusion based on it. This analysis helps complaints handlers form more informed opinions.


Where outcomes that are more binary in nature are required, we build this constraint into our models, requiring the model to rely only on the data and assumptions explicitly provided, ensuring its outputs remain well grounded in fact.
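As an illustration only, the minimal sketch below shows one way such grounding can be enforced: the model may rely solely on the facts supplied and must cite them, and a post-check rejects any citation that was never provided. The `call_model` function, prompt wording, and fact IDs are hypothetical, not a description of CourtCorrect's actual implementation.

```python
# Illustrative only: constraining a model to provided facts. `call_model`
# is a placeholder for any chat-completion client; the prompt wording and
# fact IDs are hypothetical.
import re

GROUNDING_RULES = (
    "Answer using ONLY the facts listed below. "
    "If the facts are insufficient, reply exactly: INSUFFICIENT INFORMATION. "
    "Cite the IDs of the facts you relied on, e.g. [F2]."
)

def grounded_answer(question: str, facts: dict[str, str], call_model) -> str:
    fact_block = "\n".join(f"[{fid}] {text}" for fid, text in facts.items())
    answer = call_model(f"{GROUNDING_RULES}\n\nFacts:\n{fact_block}\n\nQuestion: {question}")
    # Post-check: every fact ID the model cites must have been provided.
    cited = set(re.findall(r"\[(F\d+)\]", answer))
    if not cited <= set(facts):
        raise ValueError("Model cited a fact that was never provided")
    return answer
```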


3.2 Risk Assessment


We continually assess our systems for potential issues and remediate them promptly, drawing on guidance from our management team to ensure we follow industry best practice and applicable regulation.


3.3 Human Interaction


Tool Utilisation: As part of CourtCorrect training, we ensure that users do not assume automated processes operate with complete accuracy. Small margins of error remain and must always be managed.


Human Oversight: AI decisions, particularly those affecting customer complaints handling, will always include a 'human-in-the-loop' judgement to ensure fairness and accuracy; a minimal sketch of such an approval gate appears at the end of this section. We also ensure that the way we gather data and retrain our model mirrors how a human expert would engage with the same data.


Transparency: CourtCorrect provides user empowerment tools, such as the discretion to toggle whether particular data elements apply in specific contexts.


Policy Enforcement: Management is responsible for setting the standard and ensuring compliance through regular reviews and updates of the AI Safety Policy. 
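For illustration, the sketch below shows the shape of a 'human-in-the-loop' approval gate, in which no AI recommendation is finalised without an explicit human verdict. The types, field names, and statuses are hypothetical and simplified, not a description of CourtCorrect's production system.

```python
# Illustrative 'human-in-the-loop' approval gate. Types, fields, and
# statuses are hypothetical and simplified.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING_REVIEW = "pending_review"   # default: no AI output is final
    APPROVED = "approved"               # human agreed with the AI
    OVERRIDDEN = "overridden"           # human substituted their own verdict

@dataclass
class DraftDecision:
    case_id: str
    ai_recommendation: str
    status: Status = Status.PENDING_REVIEW

def finalise(draft: DraftDecision, reviewer_verdict: str) -> DraftDecision:
    """A decision is only finalised once a human reviewer has ruled on it."""
    if reviewer_verdict == draft.ai_recommendation:
        draft.status = Status.APPROVED
    else:
        draft.ai_recommendation = reviewer_verdict  # human judgement prevails
        draft.status = Status.OVERRIDDEN
    return draft
```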

4. Assessment Methodology

4.1 Interactive Testing


For tasks that demand a high degree of accuracy and nuance, our policy is to engage AI initially for broad-scale evaluations, followed by meticulous verification by our human experts. Our experts interact with each scenario as they would in real life, allowing dynamic responses and decision-making pathways to be evaluated and analysed alongside the AI's. This process not only provides an essential check on the AI's outputs but also adds a layer of critical human judgement, ensuring the final outcomes are of the highest fidelity and applicability.


4.2 Large Sample Consensus Analysis


In situations where there is no single clear answer, multiple AI systems and human experts can be used to establish where the consensus lies, thereby determining the most likely correct response; a simple sketch of this approach follows. Our policy mandates periodic, comprehensive assessments of our AI systems, leveraging large-scale evaluations that are intensively curated to yield significant qualitative insights.
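As a minimal, hypothetical sketch of consensus analysis (a real assessment would also weight reviewer expertise and record the degree of disagreement, not just the winning answer):

```python
# Illustrative majority-vote consensus across several model and expert
# judgements. Verdict labels and threshold are hypothetical.
from collections import Counter

def consensus(judgements: list[str], min_agreement: float = 0.6) -> str | None:
    """Return the majority verdict if agreement clears the threshold."""
    if not judgements:
        return None
    verdict, votes = Counter(judgements).most_common(1)[0]
    return verdict if votes / len(judgements) >= min_agreement else None

# Example: three models and two human experts assess the same complaint.
verdicts = ["uphold", "uphold", "reject", "uphold", "uphold"]
print(consensus(verdicts))  # -> uphold (80% agreement)
```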


4.3 Close Scrutiny Analysis


We also conduct more intensive scrutiny of individual, randomly selected cases. First, we deploy the AI in a live environment with real-time oversight from human experts, who can intervene and correct the AI in the event of errors or unexpected behaviour. These results are then analysed and reviewed after the event, with alterations made to source code as appropriate.


4.4 Minimising Dataset Biases


Our team's expertise is a cornerstone of our human oversight methodology. With statisticians specialising in sampling and evaluative techniques, our policy is to uphold stringent standards in minimising bias and maintaining scientific rigour throughout the testing process. The team's proficiency ensures that our AI systems are subjected to the most objective and comprehensive evaluations, thus embedding a culture of safety and reliability into the very fabric of our technological solutions.
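By way of illustration, stratified sampling is one standard technique for building evaluation sets in which each demographic or case-type group is represented proportionally. The sketch below is a simplified, hypothetical example; the field names, fixed seed, and allocation rule are illustrative rather than a statement of our exact methodology.

```python
# Illustrative stratified sampling for evaluation sets, so that each
# demographic or case-type stratum is represented proportionally.
# Field names, seed, and allocation rule are hypothetical.
import random
from collections import defaultdict

def stratified_sample(cases: list[dict], stratum_key: str, n: int,
                      seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed keeps audits reproducible
    strata = defaultdict(list)
    for case in cases:
        strata[case[stratum_key]].append(case)
    sample = []
    for group in strata.values():
        k = max(1, round(n * len(group) / len(cases)))  # proportional share
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample
```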


4.5 Engagement With Customers


At CourtCorrect we pursue a proactive approach to AI safety: we encourage our customers to share with us any known historical biases in their previous decisions. This collaboration is vital for us to mitigate such biases in our AI algorithms from the outset. By integrating this customer-provided historical data into our review processes, we ensure that our AI systems do not perpetuate past injustices but instead contribute to fairer and more equitable outcomes. This approach underlines our commitment to initiating concrete change from day one, ensuring that our AI solutions are part of a positive transformation towards greater justice and equality in decision-making.

5. Artificial Intelligence Training

CourtCorrect will provide training and upskilling for customer employees adopting our software, whilst more broadly advocating for education and career advancement within the evolving AI landscape through the CourtCorrect Complaints Handling Academy. 


CourtCorrect’s AI Safety Policy integrates a vital component aimed at enhancing the understanding and proficient use of Artificial Intelligence (AI) within our organisation, particularly in complaint handling processes. The establishment of the Complaints Handling Academy embodies our commitment to AI safety and proficiency, ensuring that our staff and customers are well-versed in both the theoretical and practical aspects of AI applications.


5.1 Training Curriculum Overview


The Academy's curriculum is designed to cover a broad spectrum of AI-related topics. It begins with the 'Fundamentals of AI', providing an in-depth introduction to the core concepts, operational mechanics, and evolution of AI technology. This foundational knowledge is crucial for understanding the subsequent modules. The course then moves to a more specific complaints-handling focus, dealing with real-life implementation.


Recognising the importance of a balanced view, the curriculum also includes a module on the 'Benefits of AI Integration,' which enumerates the advantages such as increased efficiency and scalability. This is complemented by a session on 'Limitations and Ethical Considerations,' addressing the challenges and ethical dilemmas posed by AI and guiding staff on how to navigate these responsibly.

The course covers a wide range of topics and is continually updated to stay abreast of developments in technology. 


By leading with a robust safety framework, CourtCorrect aims to influence industry-wide standards in AI safety and ethics.

6. Public Communication and Reporting

Clear, non-technical communication of our AI safety commitment will be maintained, with regular updates provided to the public through our website and newsletters.

7. Commitment to Fairness and Equity

At CourtCorrect, we are acutely aware of the critical importance of addressing and mitigating risks such as racism, bias, and discrimination within our AI systems. While many aspects of this challenge are managed upstream, it is our responsibility to ensure that the application of AI technology within our services operates free from prejudicial influences.


Our approach begins at the data collection stage. We strive to gather datasets from a wide range of diverse sources, ensuring they represent a broad spectrum of demographics. This diversity is crucial in training AI systems that are more equitable and less likely to perpetuate existing biases.


Expert Oversight: In addition to automated checks, we have established robust mechanisms to oversee the integrity of our AI's outputs, including annual re-analysis of existing datasets to ensure that previous injustices are not propagated against certain groups (a simple disparity check of the kind sketched at the end of this section). Details of these mechanisms are proprietary and, while they are crucial to our operational model, they are not publicised, to maintain competitive confidentiality. What we can share is that our approach is dynamic and constantly evolving, integrating the latest research and methodologies to strengthen our AI's impartiality and justice.


Training: We invest in ongoing training and awareness programs for our staff involved in AI development and data handling. This training ensures that our team is aware of the potential for bias and equipped with the knowledge to prevent it.


User Input: We have established channels through which users of our AI systems can report concerns or instances of perceived bias. This feedback is taken seriously and forms a crucial part of our continuous improvement process. CourtCorrect also actively engages with the broader AI and ethics community: we participate in research initiatives with the University of Cambridge and in collaborations aimed at developing new methods and technologies to combat AI bias, keeping our approaches aligned with cutting-edge developments in the field.


Our customers can trust that CourtCorrect is dedicated to delivering AI-assisted outcomes that are as fair and unbiased as possible. We continuously refine our systems to uphold the highest standards of ethical AI usage, ensuring that all measures of protection against racism and bias are deeply embedded within our technology.
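To illustrate the kind of re-analysis referred to above, the hypothetical sketch below compares outcome rates across groups and flags large gaps for expert review. The grouping field, sample data, and tolerance threshold are illustrative only and do not describe our proprietary mechanisms.

```python
# Illustrative disparity check over historical outcomes. The grouping
# field, data, and tolerance are hypothetical.

def outcome_rates(decisions: list[dict], group_key: str) -> dict[str, float]:
    """Rate of upheld complaints per group."""
    totals: dict[str, int] = {}
    upheld: dict[str, int] = {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        upheld[g] = upheld.get(g, 0) + int(d["upheld"])
    return {g: upheld[g] / totals[g] for g in totals}

decisions = [
    {"region": "north", "upheld": True},
    {"region": "north", "upheld": False},
    {"region": "south", "upheld": False},
    {"region": "south", "upheld": False},
]
rates = outcome_rates(decisions, group_key="region")
if max(rates.values()) - min(rates.values()) > 0.10:  # illustrative tolerance
    print("Flag for expert review:", rates)
```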

8. Continuous Improvement

Internal Audits: Regular internal audits will ensure policy adherence, with updates based on technological advancements and research.

 

Review Metrics: The effectiveness of our AI systems will be measured by their impact on operational efficiency and on the welfare of both consumers and employees, assessed through interview and survey data.

9. Final Provisions

This policy will be reviewed annually, or more frequently as required, to remain aligned with the latest FCA guidance, AI advancements, and societal expectations.

Contact Information

Questions regarding this policy or its provisions should be directed to:

CourtCorrect Ltd.

33 Percy Street, W1T 2DF, London

hello@courtcorrect.com

+44 20 7867 3925