While the generative AI (GenAI) revolution is rolling forward at full steam, it’s not without its share of fear, uncertainty, and doubt.
The great promises that can be delivered through large language models (LLMs) are tainted by concerns over hallucinations, bias, data security, “black-box” decisioning, and outdated information. Enterprises are addressing many of these issues through foundation LLMs as well as their own LLM implementations. Many are also exploring the potential of retrieval-augmented generation (RAG) environments to serve as the connective tissue between corporate databases and LLMs.
These are the key takeaways from a recent survey of 382 executives and managers on LLM and RAG adoption, conducted by Unisphere Research, a service of Information Today, Inc., in partnership with Semantic Web Company, and covering organizations primarily within North America with revenues exceeding $500 million annually.
LLMs are becoming pervasive across most organizations, the survey found, with use concentrated in the testing and development stage. Eighty-five percent of respondents are either exploring and testing the potential of LLMs or have them in production; 27% already have LLMs in production in some capacity.
Only 7% report no activity at this time, and 9 in 10 respondents say they will keep expanding their LLM implementations.
In addition, 68% rely on outside models, such as OpenAI's ChatGPT, Anthropic's Claude, or Midjourney. Only a handful of enterprises are relying on their own internal LLM or GenAI services.
There are compelling reasons to move to LLMs to deliver insights. Respondents with LLMs in production are most likely to look for internal productivity benefits from GenAI: 67% are seeking to help employees access insights, followed closely by 65% expecting employee productivity gains, and another 65% seeking to reduce the time knowledge workers spend accessing the information they require. Among all respondents, including those still exploring LLMs, the leading benefit cited is also potential productivity gains.
At the same time, there is rising concern about risks associated with expanding LLM deployments, the survey shows. More than 7 in 10 respondents, 71%, see their increased usage of GenAI as posing risks to security and output quality. Interestingly, 10% see no risk, which may suggest they view AI tools as helping to address security issues.
Data quality concerns top the list of issues that organizations face with GenAI and LLM implementations, cited by a majority of respondents at 71%. A majority also see data security and privacy concerns as pressing challenges. Among respondents with LLMs in production, 89% agree, almost unanimously, that it is important, at least to some degree, to have a human in the loop for their GenAI and LLM systems.
Close to a third of GenAI users are looking to RAG environments to support their information handling: 29% of respondents at current LLM/AI sites report that they either have RAG solutions in place or are implementing them. Most agree that their businesses will depend on RAG not just for technical capability but for competitive advantage.
Close to half agree that RAG will help make information more actionable and closer to real time.
As one survey respondent put it, RAG "helps by making AI smarter and efficient. It does so by connecting AI with other organizations' unique data, which supports these systems to generate responses that are both more accurate and more contextually relevant."
RAG environments retrieve and store data from a variety of sources, domains, and formats. Relational databases, knowledge graphs, and vector databases are the top database technologies interfacing with RAG implementations.
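The retrieve-then-generate flow described here can be sketched in a few lines. The sketch below is purely illustrative and assumes a toy in-memory corpus; it uses naive keyword overlap as a stand-in for the vector-similarity search a production RAG system would run against a vector database, and the function names (`retrieve`, `build_prompt`) are hypothetical, not any vendor's API.

```python
# Illustrative sketch of the RAG pattern: retrieve relevant internal
# records first, then ground the LLM prompt in them. Keyword overlap
# stands in for a real vector-similarity search.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared terms with the query; return the top k."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: -len(q_terms & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Compose a grounded prompt: retrieved context first, then the question."""
    ctx = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Hypothetical internal corpus standing in for corporate data sources.
corpus = [
    "Q3 revenue grew 12% year over year.",
    "The support SLA guarantees a 4-hour response time.",
    "Headquarters relocated to Austin in 2022.",
]

prompt = build_prompt("What is the support SLA response time?",
                      retrieve("support SLA response time", corpus))
# 'prompt' would then be sent to the organization's LLM of choice.
```

The key design point is that the model answers from retrieved, current enterprise data rather than solely from its training set, which is what makes responses "more accurate and more contextually relevant," as the respondent above notes.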
Benefits anticipated through RAG-empowered AI implementations include improved contextual results, more actionable data, and reduced time to insight, each cited by close to a majority (48%) of respondents now using AI.
Generative AI and RAG technologies are still both in the early stages of their deployments. Most report their efforts are still immature, with applications within the testing and development stage.
Still, enthusiasm is prevalent. GenAI and LLMs are expanding across most of the organizations participating in this survey, with the majority employing or testing AI to enhance the delivery of knowledge across their enterprises, especially within content creation, content customization, customer self-service, knowledge discovery, knowledge management, intelligent search, and assisting customer service staff.