The Road to Large Language Models (LLMs)

Jeremie Blais
May 29, 2023

At Arcurve, we are always staying current with new technologies and innovative approaches to help our clients solve their complex business problems.  Large language models (LLMs) such as OpenAI's ChatGPT and Google's Bard have sparked an AI race that moves forward at a weekly cadence.  While it is still early in this race, we're watching for the applications of these new tools that show the most promise.  We've already seen one with GitHub Copilot, an AI assistant that helps programmers write code.

Many discussions with our clients center on the decision to build or buy.  The crux of this decision is the level of customization your solution will require.  For example, you can't buy integration out of the box; by its very nature, it must be customized.  LLMs will change the marginal cost of that customization.  Every business has some messy backend process for which it has retained a human.  The backend processes that have traditionally been automated have been deep and narrow: high volumes of the same prescriptive task.  LLMs will allow for integrations that are shallow and broad, creating efficiencies and a greater ROI.
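As a deliberately hedged illustration of a shallow-and-broad integration, the sketch below uses an LLM to turn a messy free-text order email into structured fields that a traditional prescriptive parser would struggle with.  The model choice, field names, and sample email are illustrative assumptions, not a client implementation.

```python
# A minimal sketch, assuming the openai Python package (>= 1.0) and an
# OPENAI_API_KEY set in the environment.  All names and the sample email
# are illustrative.
from openai import OpenAI

client = OpenAI()

email_body = (
    "Hi folks, please ship 12 units of part #A-113 to our Calgary warehouse "
    "by Friday. PO number is 44721. Thanks, Dana"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "Extract part_number, quantity, po_number, and "
                       "destination from the order email. Reply with JSON only.",
        },
        {"role": "user", "content": email_body},
    ],
)

print(response.choices[0].message.content)
# e.g. {"part_number": "A-113", "quantity": 12, "po_number": "44721",
#       "destination": "Calgary warehouse"}
```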

Will AI take our jobs?

No, AI won't take your job; someone who knows how to use AI will.  Like any other technology, AI and LLMs can be used to create a moat or to cross one: the moat may be your ability to operate more efficiently, and crossing one may be your competitor's ability to acquire your customers.  Like most other innovations, LLMs will replace jobs in the short term and move resources to higher-value tasks in the long term.  This means a greater number of people will be analyzing information rather than simply shoveling it around.  LLMs are a great tool for accomplishing repeatable tasks, but they won't replace humans just yet, as humans are novelty machines.

So, what are the barriers to entry?  What’s the downside?  

Like all AI, LLMs are trained on content, and there is much debate about content ownership and personally identifiable information (PII).  No business wants confidential internal information leaking to an AI provider, particularly if the AI then makes recommendations to other clients using your insights.  This has sparked a debate over which direction the large language model market will take.  Will it end up centralized and integrated, as is the case with ChatGPT and Bard?  Or will open-source models be able to compete?  There is some evidence to suggest that large language models don't need to be so large: smaller open-source models trained on higher-quality data may be effective enough to run internally.  This means data residency will play a role in which solutions are implemented, not unlike your organization choosing which cloud data center to use.
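For illustration, here is a minimal sketch of what running a small open-source model entirely inside your own environment can look like, using Hugging Face transformers and Dolly 2.0 (discussed below).  The model size and hardware are assumptions; the point is that the prompt and response never leave your infrastructure.

```python
# A minimal sketch, assuming the transformers, torch, and accelerate packages
# and a GPU with enough memory for the 3B-parameter variant of Dolly 2.0.
import torch
from transformers import pipeline

# Dolly 2.0 ships its own instruction-following pipeline code, hence
# trust_remote_code=True.
generate_text = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Both the prompt and the generated response stay on your own hardware.
result = generate_text("Summarize our internal refund policy in two sentences.")
print(result[0]["generated_text"])
```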

The future of the AI race

The AI race is clearly visible among the large cloud providers.  Microsoft is the largest stakeholder in OpenAI, has already integrated its models into Bing, and is beginning to roll them out across its Azure and Microsoft 365 offerings.  Google has released Bard and, despite a few stumbling blocks, is rapidly releasing new versions.  AWS has released Bedrock, a service meant to be a platform for LLMs.  Databricks has enabled support for an open-source model (Dolly 2.0) as part of its offering.  This competition will define the open-versus-closed model debate.

These vendors all annotate their releases with something along the lines of: “None of the customer’s data is used to train the underlying models, and since all data is encrypted and does not leave a customer’s Virtual Private Cloud (VPC), customers can trust that their data will remain private and confidential.”

The team at Arcurve has years of experience implementing data solutions for our clients, ranging from centralizing advanced analytics and reporting in data lakes, to building integrations between SaaS and on-prem products, to predictive analytics.  We are implementing these solutions across the Azure, AWS, and GCP platforms, including tools like Databricks and Snowflake.  LLMs offer a lot of potential; however, they are a destination on the data analytics path, not its starting point.  Arcurve clients want to solve simpler problems with a tangible ROI first, giving them license to take on more complex challenges.  When working with our clients to solve these problems, Arcurve strives to put them on a path that will enable them to use LLMs and other advanced tools to solve more complicated problems.

Stay tuned for an in-depth example of how we can implement LLMs to solve a real business problem in the next post of our LLM series.

Questions about LLMs? Reach out to the Arcurve team here.
