Large language models (LLMs) show significant promise in transforming business operations. They utilize generative AI algorithms to create content, predict new trends, and discern key patterns across enormous amounts of information—much faster and at a far greater scale than teams of human counterparts.

You can now generate vast quantities of simple code via AI, freeing your engineers’ bandwidth for more complex projects. You can also strengthen your human-in-the-loop automation, running more robust automated processes with stronger quality control.

However, for many businesses the challenge remains organizing data so that it is easy for LLMs to view, understand, and manipulate for modeling purposes.

As LLM usage increases across the marketplace, we explain how this technology has risen to this stage, how industry players are guarding against misuse, and how your enterprise can maximize the potential LLMs can unlock for your business today.


OpenAI disrupts the market with ChatGPT

In November 2022, research laboratory OpenAI launched ChatGPT, an AI chatbot built on top of OpenAI’s GPT-3.5 foundation model (and, since March 2023, GPT-4). Within five days of launch, ChatGPT had more than one million users. By January 2023, UBS estimated the number at more than 100 million. The New York Times hailed it as “the best AI chatbot ever released to the general public.”

ChatGPT’s natural language processing tools allow users to ask the bot any type of question or request. Students and workers quickly turned to ChatGPT to write essays, emails, and articles on their behalf.

These practical applications are a large part of why the bot achieved such instant success. Rather than launch ChatGPT as a B2B product, OpenAI’s public launch allowed many consumers to tangibly benefit from LLMs and generative AI models for the first time.

Big tech scrambled to catch up, shifting resources so top engineers could expedite competing models. Google announced its own LLM-powered chatbot, Bard, in February 2023.

What does this mean for the enterprise? 

While consumer adoption drove early success, the business applications of LLMs are transformative. ChatGPT is trained on a massive dataset with an architecture that allows for easy scaling across a wide variety of industries, applications, and use cases. GPT-3.5’s model had 175 billion parameters; GPT-4 is rumored to have one trillion.

LLMs like ChatGPT can be fine-tuned for individual tasks, boosting performance and adaptability across an organization. Aside from content creation, ChatGPT can research and analyze a near-endless array of topics and rapidly generate data reports. It can automate customer service across any language. It can help onboard new employees, identify knowledge gaps in your current workforce, and suggest resources to fill those gaps and grow.

To seize the opportunity LLMs provide, executives need to ask the right questions about their business to make improvements they might not realize they need. A more complete view of finances, sales processes, and supply chains helps execs appreciate the revenue-generating use cases big data systems can drive.

There will be winners and losers across every industry when it comes to big data implementation. First-mover advantage is crucial to harness the power of LLMs before competitors and gain an edge in the market.


What do you need to watch out for?

In March, tech nonprofit the Future of Life Institute penned an open letter to AI laboratories to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter, signed by Elon Musk and Apple co-founder Steve Wozniak among other tech leaders, starts by stating “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Despite the data-informed nature of LLMs, they are still prone to errors, biases, and hallucinations. “Large language models have no idea of the underlying reality that language describes,” said Yann LeCun, chief AI scientist at Meta. “[They] generate text that sounds fine, grammatically, semantically, but they don’t really have an objective other than satisfying statistical consistency with the prompt.”

Information shared during a conversation may be retained on the AI provider’s servers, and that retained data can include sensitive personally identifiable information (PII). In the event of a data breach, retrieving and deleting this information can prove challenging.

This scenario played out at Samsung in April, when employees leaked confidential intellectual property (IP) to ChatGPT in three separate incidents. In two of them, software engineers pasted confidential source code into the chat to check for errors and request optimizations. In the third, an employee uploaded a recording of an internal meeting for the chatbot to convert into notes. In response, Samsung banned the use of generative AI tools on company-owned devices and internal networks.

Responsible use of generative AI

As LLM and generative AI technology is still in its infancy, enterprises will need to safeguard against privacy concerns, ethical questions, and misinformation.

Regulation of generative AI and LLMs has quickly become a hot-button issue at the highest levels of government. In May, AI regulation dominated discussion heading into the 2023 G7 Summit. Then in July, under pressure from the White House, Amazon, Google, Meta, and other leading AI companies agreed to voluntary guardrails, committing to manage the risks the new tools pose even as they compete to build solutions in the space.

It is essential to develop a comprehensive understanding of the technology to address concerns regarding data protection and responsible usage. Implementing the following steps can help your company prevent data leaks when utilizing generative AI tools:

  • Data sharing limitations: Exercise caution and share only data that is strictly necessary for the tool's operation. Ensure that any sensitive data is either anonymized, obfuscated (e.g., by using percentages or scaling factors of 10), or withheld altogether.
  • Employee training: Provide thorough training to all employees who use generative AI tools so they are up to date on data protection and privacy best practices. Help them grasp the associated risks so they can effectively safeguard sensitive IP and proprietary information.
  • In-house data management with local language models: Implementing a company-specific LLM mitigates the risk of data leaks while enabling employees to reap the benefits of generative AI. Bloomberg, for example, built BloombergGPT to better serve its financial customers through a purpose-built generative AI model for finance.
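The first safeguard above can be partly automated. The sketch below is a minimal illustration, not a complete PII detector: the helper names (`mask_pii`, `scale_figure`) and the regex patterns are hypothetical examples, and a production system would rely on a vetted anonymization library covering many more formats. It masks a few common PII patterns and applies a shared scaling factor to a sensitive figure before a prompt ever leaves the company network:

```python
import re

# Example patterns only -- a production system would use a vetted
# PII-detection library and cover many more formats (names, addresses,
# account numbers, ...).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before sending."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def scale_figure(value: float, factor: float = 10.0) -> float:
    """Obfuscate a sensitive figure with a shared scaling factor:
    relative comparisons survive, but true magnitudes do not."""
    return value / factor

prompt = ("Q3 revenue was $4,200,000. Contact jane.doe@example.com "
          "or 555-867-5309 with questions.")
print(mask_pii(prompt))
# -> Q3 revenue was $4,200,000. Contact [EMAIL] or [PHONE] with questions.
# The raw figure would likewise be replaced with its scaled value:
print(scale_figure(4_200_000))  # -> 420000.0
```

Because everyone in the organization applies the same scaling factor, relative comparisons in the model’s answers remain meaningful while true magnitudes stay private.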


Preparing your enterprise to maximize LLMs

Building or licensing an LLM is a significant undertaking with key considerations to weigh before investing. So how do you know if your company is ready and the cost is worthwhile? Key considerations include:

  1. Data availability: Before implementing an LLM, you must ensure you have access to enough data relevant to your business needs. This may require substantial data collection and cleaning efforts.
  2. Development costs: You must first assess whether your company has the necessary computing infrastructure to support the training, fine-tuning, or prompting of an LLM and whether the cost of acquiring and maintaining that infrastructure is justified. Thankfully, advances in software and hardware have reduced these costs substantially since 2020.
  3. Tech expertise: Developing an LLM takes specialized expertise in machine learning, natural language processing, and software engineering. If your company lacks the necessary expertise in-house, you will need to hire or contract external experts.
  4. Use case: You should have a clear understanding of the specific use cases that you want to address beforehand. This will help guide decisions around data collection, model design, and implementation.
  5. Machine learning operations: LLMs demand a high level of maturity in your data collection and machine learning processes. Along with quality data, make sure you have established data management, model training, and model deployment best practices.

At AlixPartners, we have found ways for companies to remedy their big data challenges using Palantir’s technology.

By deploying Palantir’s generative AI and machine learning capabilities alongside AlixPartners’ industry expertise, shared clients are better able to harness the power of LLMs, allowing them to do things like:

  • Innovate software-based business models on top of their core operating business using natural language, further democratizing data analytics and workflows.
  • Handle critical disruptions, such as supply chain breakages and consumer behavior shifts.
  • Develop new ways of interacting with customers and employees.

Many companies rely on different software and data tools across different parts of their businesses. Through Palantir’s software, these disparate sources are woven together into a single source of truth, solving complexity challenges.

Data for decisions = data for executives

Palantir’s data integration, data application, model development, and analytical capabilities allow all members of an organization to access and make use of high-quality data. Executives gain a real-time view of how their business is running and can model different pathways to improve decision-making and efficiency. And the more executives appreciate the role data plays in boosting bottom lines, the more willing they are to invest in revolutionary big data solutions.

Stay tuned for future publications in our series highlighting LLMs, generative AI, and our partnership with Palantir.