September 6, 2023

Build or Buy? The Conundrum Facing Most Enterprise Technology Companies in the Adoption of LLMs

We break down whether your firm should build AI systems internally or outsource to specialist providers.

Enterprise tech is on fire. Around the world, large companies are rushing to adopt Large Language Model (LLM) technologies to sell more, cut costs and ultimately drive profitability. The results are evident across the tech world: OpenAI recently surpassed $80 million a month in revenue, and surveys suggest that one third of companies are already using generative AI in their everyday work.

One of the big questions for any company looking to adopt LLMs is whether to go direct to one of the large language model providers and build out workflows that incorporate LLMs, or to buy a ready-made solution from a specialist provider.

You’re probably familiar with some of the providers of LLMs, the most well-known being OpenAI’s GPT models, which have grown in popularity and usage over the course of the year. Specialist providers build their offerings on top of LLMs, such as the GPT models, to provide a unique service. One such example is Jasper, an AI tool used to help produce and edit content. CourtCorrect also uses the GPT models as the basis for our AI features.

So, should you go direct to the source and build internally, or use an external provider? We think the answer depends on what it is you’re trying to do.

For generic enquiries, such as question answering, it can make sense to build a system internally. Systems like ChatGPT are designed to respond to a given prompt, and giving them access to internal data improves the accuracy of their answers. With a few simple additions, such a system can work well for most companies.
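To make this concrete, here is a minimal sketch of what such an internal Q&A workflow might look like, using OpenAI's chat completions API. The model choice and the search_internal_docs helper are illustrative assumptions, not a recipe:

```python
# A minimal sketch of an internal Q&A workflow: fetch relevant internal
# documents, then ask the LLM to answer using only that context.
# `search_internal_docs` is a hypothetical stand-in for your own search layer.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI API key is available


def search_internal_docs(question: str) -> list[str]:
    """Hypothetical helper: return text snippets from your internal knowledge base."""
    raise NotImplementedError("Wire this up to your document store or search index")


def answer_question(question: str) -> str:
    context = "\n\n".join(search_internal_docs(question))
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Answer the user's question using only the documents provided."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The essential idea is that the heavy lifting sits in the retrieval step; the LLM itself is used off the shelf, which is why this pattern is tractable to build in-house.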

However, for anything with greater complexity, the investment required to see results grows steeply. This is compounded when the AI is asked to generate text: it takes substantial additional data, engineering and cost to train a model and to build a data pipeline that consistently delivers high-quality responses.
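To give a flavour of that extra machinery, the sketch below shows a validate-and-retry loop of the kind a reliable generation pipeline typically needs. The generate_letter helper is a hypothetical wrapper around an LLM call, and the quality checks are deliberately simplistic:

```python
# A sketch of the extra scaffolding reliable text generation tends to need:
# every draft is validated against business rules, and repeated failures are
# escalated rather than shipped. `generate_letter` is a hypothetical helper.
REQUIRED_SECTIONS = [
    "Summary of your complaint",
    "Our investigation",
    "Our decision",
]


def generate_letter(prompt: str) -> str:
    """Hypothetical helper: one LLM call that drafts a response letter."""
    raise NotImplementedError("Wire this up to your LLM provider")


def passes_quality_checks(draft: str) -> bool:
    # Illustrative check only: a real pipeline would also validate tone,
    # factual grounding against case data, and regulatory wording.
    return all(section in draft for section in REQUIRED_SECTIONS)


def generate_reliably(prompt: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        draft = generate_letter(prompt)
        if passes_quality_checks(draft):
            return draft
    # After repeated failures, hand off rather than ship a bad letter.
    raise RuntimeError("No draft passed quality checks; escalate to a human reviewer")
```

Every box in this loop (the checks, the retry policy, the escalation path) needs data, evaluation and maintenance, which is where the cost of building internally accumulates.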

The problem becomes considerably harder in regulated environments, where both technical and legal expertise are required to design reliable systems.

When it comes to regulated complaints management, these requirements are extensive. In the UK financial services sector, the introduction of the FCA’s Consumer Duty has increased the complexity of regulation, and firms have had to change the way they handle complaints to remain compliant. Developing an LLM tool to comply with this regulation would be both costly and time-consuming for any company in the sector, and would require a degree of specialist legal and technical knowledge that is better placed with specialist companies.

This is where CourtCorrect comes in. It’s an off-the-shelf solution compliant with financial services regulation. What’s more, CourtCorrect has access to specialist data published by regulatory bodies within the UK, which has allowed us to build better models and better tools specifically for regulated complaints management. We’ve already done the hard work and absorbed the costs associated with internal development, allowing immediate deployment and ROI for potential customers.

So, to summarise:

If your use case is Q&A, you may want to go directly to a large open-source or foundation model.

If your use case involves generating specific types of text content reliably and at scale, there is a significant additional hurdle to overcome in getting LLMs to work for that kind of problem – this is where specialist tools can outperform generic LLM-based workflows.

In any case, having access to specialist data helps.

CourtCorrect is, of course, a specialist provider. Check out how we’ve built final response letter generation on top of an LLM stack here.