AI Safety Policy

1. Introduction

CourtCorrect commits to ensuring the safety, reliability, and ethical integrity of our Artificial Intelligence (AI) systems. This policy articulates our framework for the development, deployment, and governance of AI technologies, upholding standards that mitigate risks such as bias and discrimination, alongside other concerns noted by regulators including the Financial Conduct Authority and the Prudential Regulation Authority.

Our governance framework incorporates industry and regulatory best practices to reduce ethical concerns, eliminate ambiguity and promote accountability of our AI systems. By embedding our core ethical principles across the organisation, CourtCorrect aims to continue driving innovation while adhering to the highest standards of safety and reliability. 

We review and update this policy in line with legislative and regulatory changes, as well as in light of the fast-paced nature of innovation in this area.

2. Core Principles

Transparency and Accountability: Decisions influenced by AI systems are documented and made comprehensible for auditing and review, ensuring adherence to FCA guidance on transparency and consumer protection.


Learning and Iteration: AI models are thoroughly reviewed by experts to verify their accuracy and reliability, with specific measures in place to re-evaluate models whenever they are retrained or updated. Precise statistics and methodologies are available to customers on request.


Privacy and Fairness: We comply with all relevant data protection laws, and deliver fair outcomes by implementing our data protection by design approach.


Bias-mitigation and non-discrimination: Our AI systems are regularly assessed for biases and discrimination, ensuring compliance with UK laws and FCA guidance on AI usage.

3. Governance

3.1 Board Meetings


The Board of Directors of CourtCorrect Ltd owns the AI Safety Policy directly and has ultimate oversight of, and governance responsibility for, the company's efforts in relation to AI Safety.

AI Safety is reviewed at all monthly Board Meetings, with a specific focus on:


  • the company's AI Safety Policy and any changes required;

  • the regulatory, legislative and judicial landscape on AI technologies;

  • the different risks associated with AI use and risk ownership within the company;

  • the logging of action items for individual teams to remediate areas where the risk is deemed material or has changed.


3.2 Internal Audits


Regular internal audits will ensure policy adherence, and auditing frameworks are continuously updated based on technological advancements and research.


3.3 Review Metrics


The effectiveness of our AI systems is measured by their impact on operational efficiency, the quality of complaint resolution, the ability to derive meaningful insights into complaint root causes, and the welfare of both consumers and employees. These outcomes are assessed through a combination of survey, feedback, accuracy, evaluation and usage data, among other relevant metrics. We review the metrics used to assess the effectiveness of our AI systems on an ongoing basis, taking into account both technological and regulatory developments.

4. Development Framework

4.1 Introduction


This section outlines how we embed our core principles and ethical guidelines throughout the development process, ensuring that our models support accurate, unbiased decision making. It also states our approach to data privacy and security, and details specifically how this is implemented in our development framework.


4.2 Assessment Methodology


4.2.1 Interactive Testing


Our policy is to vet all AI systems through a combination of automated evaluation and assessment by human subject matter experts. Our experts interact with the system as they would in a real-life scenario, allowing dynamic responses and decision-making pathways to be evaluated and analysed in a setting similar to that experienced by our clients. This process not only provides an essential check on the AI's outputs but also adds a layer of critical human judgement, ensuring a high degree of accuracy and relevance.


4.2.2 Large Sample Consensus Analysis


In situations where there is no clear right or wrong answer, multiple automated systems and human experts can be used to establish where the consensus lies, thereby determining the most likely correct response. Our policy mandates periodic, comprehensive assessments of our AI systems, leveraging large-scale evaluations that are carefully curated to yield statistically significant insights.
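
As a minimal sketch of how such a consensus check might be computed (the function, labels and agreement threshold below are illustrative rather than a description of our production pipeline):

    from collections import Counter

    def consensus_label(labels, min_agreement=0.7):
        """Return the majority label across evaluators, or None when
        agreement falls below the threshold and the case needs human review."""
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        return label if votes / len(labels) >= min_agreement else None

    # Three automated systems and two human experts grade one response:
    verdicts = ["uphold", "uphold", "uphold", "reject", "uphold"]
    print(consensus_label(verdicts))  # "uphold" (80% agreement)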


4.2.3 Close Scrutiny Analysis


We also deploy more intensive scrutiny of individual, randomly selected cases. We begin by deploying the AI in a testing environment with real-time oversight from human experts, who can intervene and correct the AI in case of errors or unexpected behaviour. The results are then analysed and reviewed after the event, with alterations made to source code as appropriate. Models are shipped to the production environment only once they have passed all internal quality checks and, where required, client approval has been received.
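
This release gate can be pictured as a simple predicate; a minimal sketch, assuming hypothetical check names and an explicit client sign-off flag:

    def ready_for_production(quality_checks, client_approval_required, client_approved=False):
        """A model ships only if every internal check has passed and,
        where required, the client has signed off."""
        if not all(quality_checks.values()):
            return False
        if client_approval_required and not client_approved:
            return False
        return True

    checks = {"accuracy_eval": True, "bias_tests": True, "security_scan": True}
    print(ready_for_production(checks, client_approval_required=True, client_approved=True))  # True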


4.3 Bias-mitigation and Non-Discrimination


4.3.1 Data Quality


We maintain the highest standards for data quality as part of our general commitment to fairness and equity. All departments involved in the collection, processing, storage and use of data for AI model development adhere to our Data Quality Assurance Policy. 

We gather information from a wide range of diverse sources, ensuring it represents a broad spectrum of demographics. This diversity is crucial in training AI systems that are more equitable and less likely to perpetuate existing biases.


4.3.2 Ethical Model Framework


To support our efforts in ensuring the quality of data used in our AI systems, we account for the possible prejudices or bias in the way variables are measured, labelled or aggregated. We further ensure the integrity of our data evaluation framework by defining appropriate objectives, and considering risks posed by model deployment.

We have implemented ICO best practices to develop our approach, which we continually iterate and improve upon according to industry best practice.


4.3.3 Minimising Dataset Biases


Our team's expertise is a cornerstone of our human oversight methodology. With statisticians specialising in sampling and evaluative techniques, our policy is to uphold stringent standards in minimising bias and maintaining scientific rigour throughout the testing process. This includes running specific tests for bias on our models, deriving both qualitative and quantitative insights that are then used in the further development of the AI models. Ongoing review of our bias-testing framework, together with the internal expertise of our statistical analysis teams, ensures the continuous identification and removal or minimisation of bias in our models.
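
As one example of a quantitative bias test, a demographic parity check compares positive-outcome rates across groups. This is a minimal sketch with hypothetical data, not our full testing framework:

    def demographic_parity_gap(outcomes, groups):
        """Difference between the highest and lowest positive-outcome rates
        across demographic groups; 0.0 indicates perfect parity."""
        by_group = {}
        for outcome, group in zip(outcomes, groups):
            by_group.setdefault(group, []).append(outcome)
        rates = {g: sum(v) / len(v) for g, v in by_group.items()}
        return max(rates.values()) - min(rates.values())

    # 1 = complaint upheld, 0 = not upheld, grouped by a protected attribute:
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
    groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_gap(outcomes, groups))  # 0.5 -- would trigger investigation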


4.3.4 Engagement With Customers


We pursue a proactive approach to AI Safety, where we encourage customers to consider how AI might support minimising bias in their own processes. AI can be a powerful tool to minimise inconsistencies in our clients’ operations and can also support clients to identify cases where special considerations (e.g. due to customer vulnerability) might be appropriate. This approach underlines our commitment to ensuring that our AI solutions are part of a positive transformation towards greater justice and equality in decision-making.


4.4 Data Privacy


4.4.1 Data Protection by Design


Data privacy is at the core of our data protection by design approach. We comply with GDPR by adopting and developing technical and organisational measures to implement data protection principles effectively. For example, we have developed a data masking capability to redact Personally Identifiable Information (PII) from datasets supplied by our clients or used by our internal AI development teams.
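
A minimal sketch of what such masking can look like, assuming simple regex-based detection (our production capability covers many more PII categories and more robust detection methods):

    import re

    # Illustrative patterns only; real masking covers many more PII
    # categories (names, addresses, account numbers, etc.).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    }

    def mask_pii(text):
        """Replace each detected PII span with a typed placeholder."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(mask_pii("Contact jane.doe@example.com or +44 20 7867 3925."))
    # "Contact [EMAIL] or [PHONE]."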


4.4.2 Data Minimisation


As part of our commitment to the principles of GDPR, we adhere to data minimisation standards to prevent the unnecessary processing and retention of PII. We achieve this by ensuring that only the minimum data necessary for the operation of the tool's various features is present in the system. This enables us to provide rationales for the processing or storage of PII for each specific feature (see also the sketch below), for example:


  • Final Response Letter (FRL) Generation: We require only the complaint details and PII such as the name of the individual to whom the letter is addressed.


  • Vulnerability Flagging: We require information pertaining to customer vulnerability, e.g. life events, capability, resilience.


  • Root Cause Analysis: We require complaint details, and vulnerability data where these intersect in contributing to root causes.


While the system operates well with this data alone, we generally recommend that clients provide the model with a holistic view of any complaint, including any associated evidence. Providing such information further enhances model performance and enables more granular root cause analysis reporting.
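
As a schematic sketch of per-feature data minimisation, assuming hypothetical feature names and field whitelists:

    # Hypothetical per-feature field whitelist; actual schemas are
    # agreed with each client.
    REQUIRED_FIELDS = {
        "frl_generation":      {"complaint_details", "addressee_name"},
        "vulnerability_flag":  {"life_events", "capability", "resilience"},
        "root_cause_analysis": {"complaint_details", "vulnerability_data"},
    }

    def minimise(record, feature):
        """Keep only the fields a feature actually needs; drop all other PII."""
        allowed = REQUIRED_FIELDS[feature]
        return {k: v for k, v in record.items() if k in allowed}

    record = {"complaint_details": "...", "addressee_name": "J. Doe",
              "date_of_birth": "1980-01-01"}
    print(minimise(record, "frl_generation"))
    # {'complaint_details': '...', 'addressee_name': 'J. Doe'} -- date of birth dropped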


We work closely with our customers to establish an open line of communication, meaning we can respond to specific concerns and requests regarding data processing. This is part of our commitment to ensuring transparency in respect of the functions and processing of personal data.


4.5 Data Security


4.5.1 Penetration Testing


CourtCorrect Ltd. engages Cyberis, a specialist cybersecurity consultancy, to conduct penetration tests. These tests simulate real-world cyber attacks in a controlled environment, allowing CourtCorrect to evaluate its defences, response strategies and overall readiness to react to cyber threats and incidents.


This approach is a proactive measure to identify vulnerabilities and ensure that the company's cybersecurity measures are robust and effective against potential cyber threats. Cyberis conducted our most recent penetration test on 15 September 2023. No Critical or High risks were found; a total of 12 Low-risk issues were identified, all of which have since been treated and resolved. We can provide a copy of our most recent penetration test report on request.


4.5.2 Automatic Security Checks


We use Snyk to conduct ongoing automatic security checks for technical vulnerabilities. The progression of remediation efforts is monitored and documented in writing, facilitating transparency and oversight. Each identified vulnerability is assigned a deadline for resolution, directly correlating to the level of risk it poses.
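
By way of illustration, deadlines that scale with severity can be expressed as a simple mapping; the SLA values below are hypothetical and are set by our internal standards, not by Snyk itself:

    from datetime import date, timedelta

    # Hypothetical remediation SLAs, in days, per severity level.
    SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

    def remediation_deadline(severity, found_on):
        """Assign a fix-by date proportional to the risk a finding poses."""
        return found_on + timedelta(days=SLA_DAYS[severity])

    print(remediation_deadline("high", date(2023, 9, 15)))  # 2023-10-15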


4.5.3 Data Loss Prevention (DLP) Methodology


Responsibilities are allocated according to our DLP methodology, delineating clear risk ownership and accountability within the organisation. The Data Security Officer (DSO) oversees the DLP strategy, ensuring compliance with relevant legislation and reporting to the executive management. Data Custodians are responsible for implementing DLP measures within their respective domains.


4.5.4 Risk Assessment Protocol


We conduct risk assessments on a regular basis, in alignment with industry best practices and regulatory expectations, to ensure that the risk posture is continuously understood and managed.

 

The assessment takes into account the potential impact on the confidentiality, integrity, and availability of customer data, considering threats, vulnerabilities, and existing control effectiveness. For identified risks, we develop and document action plans, which are then prioritised based on the level of risk and business impact.
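
A minimal sketch of such prioritisation, assuming a simple likelihood-times-impact score on 1-5 scales (the risk items are invented for illustration):

    # Illustrative likelihood x impact scoring; our actual methodology
    # weighs confidentiality, integrity and availability separately.
    def risk_score(likelihood, impact):
        """Both inputs on a 1-5 scale; higher scores are remediated first."""
        return likelihood * impact

    risks = [
        ("unpatched dependency", risk_score(4, 3)),
        ("misconfigured bucket", risk_score(2, 5)),
        ("phishing exposure",    risk_score(3, 3)),
    ]
    for name, score in sorted(risks, key=lambda r: r[1], reverse=True):
        print(score, name)
    # 12 unpatched dependency
    # 10 misconfigured bucket
    # 9  phishing exposure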


We subject all third party providers to risk assessment procedures that match our internal standards, and select third parties carefully to ensure compliance with GDPR and other relevant regulation.

5. Deployment Framework


5.1 Introduction


To ensure that our AI system is deployed effectively, we have developed a series of best practices and processes that we strictly adhere to. These processes ensure that, once our systems are live, end users are able to obtain the benefits in a safe and secure environment. To deliver the highest deployment standards, we encourage transparent reporting, continuous improvement, and provide users with expert training.


5.2 Staff Bias Training 


CourtCorrect invests in ongoing training and awareness programmes for our staff involved in AI development and data handling, to ensure our team is well-equipped with the knowledge to detect and prevent bias effectively.


5.3 Transparent Reporting and Continuous Improvement


End users can communicate their feedback to us via established channels, including email, LiveChat and functionality built directly into the CourtCorrect platform. We commit to rapid response times where there are reports of perceived bias.


This feedback is then fed into a continuous improvement process and is reviewed as part of the AI Safety discussion at each monthly Board Meeting.


5.4 Leading Collaboration and Research Initiatives


CourtCorrect actively engages with the broader AI and ethics community, including collaborations with esteemed institutions like the University of Cambridge. These initiatives focus on developing new methods and technologies to combat AI bias, ensuring our approaches are aligned with cutting-edge developments in the field.


5.5 End-User Training


We provide training and upskilling for customer employees adopting our software, whilst more broadly advocating for education and career advancement within the evolving AI landscape. We ensure that our customers’ employees can identify and flag issues with the AI system.


Our training is designed to assist users with the proficient use of Artificial Intelligence (AI) within our customers' organisations, ensuring that employees are well-versed in both the theoretical and practical aspects of AI applications. To provide a balanced view, training covers both the benefits of AI integration and the limitations and ethical considerations posed by AI.


With our robust safety framework, CourtCorrect aims to align with and shape industry-wide standards and best practices in AI safety and ethics.


5.6 Public Communication and Reporting


Clear, non-technical communication of our AI safety commitment will be maintained, with regular updates provided to the public through our website and newsletters.


Our customers can trust that CourtCorrect is dedicated to delivering AI-assisted outcomes that are as fair and unbiased as possible. We continuously refine our systems to uphold the highest standards of ethical AI usage, ensuring that all measures of protection against racism and bias are deeply embedded within our technology.

6. Final Provisions


This policy will be reviewed at our monthly Board meeting, or more frequently as required, to remain aligned with the latest FCA guidance, AI advancements, and societal expectations.

CourtCorrect also commits to aligning with the EU AI Act and is carefully monitoring its implementation across individual member states.




Contact Information


Questions regarding this policy or its provisions should be directed to:


CourtCorrect Ltd.

33 Percy Street, London, W1T 2DF

hello@courtcorrect.com

+44 20 7867 3925