The evolving role of Retrieval Augmented Generation (RAG) in enhancing Large Language Models for the banking sector

March 26, 2024

Retrieval Augmented Generation (RAG) combines a generative, pre-trained large language model (LLM) with retrieval-augmented in-context learning. A knowledge base is created by embedding finance-specific context documents into a vector database. When answering a user query, a retriever finds the relevant passages through vector similarity search and adds them to the LLM input as context. RAG lets the language model draw on internal company documents beyond its original training data, helping banks make better use of generative AI. Nonetheless, the technique is now adapting in the face of competition from the longer input windows offered by major commercial AI models.

The role of RAG in LLMs

  • Generative AI is often used to aid a bank’s business processes (e.g., sales and marketing, onboarding, customer service, underwriting, transaction processing).
  • To be most effective, the GenAI system must leverage banking-sector knowledge and the institution’s internal information.
  • Creating a bespoke model is enormously expensive. Fine-tuning an existing foundation model is feasible and adapts it to the language of banking, but it allows neither frequent updates nor precise information (such as concrete process steps and decision criteria).
  • Retrieval Augmented Generation has become the most common solution to this specialization problem: it makes AI systems more adaptable to the specific needs of a financial institution by using internal documents as context.
  • It works by building a vector database containing embeddings of proprietary or domain-specific data. This information lives in product manuals, FAQs, research reports, customer service guides, and risk document repositories; it is chunked (partitioned) and embedded for inclusion in the vector database.
  • Once the system is in operation, user queries are converted into vectors, used to retrieve relevant information from the vector database, and fed to the GenAI system together with the query (see the sketch after this list).
  • This increases the accuracy and relevance of the GenAI system’s responses.
  • RAG reduces the need to update or retrain the model: upload the latest documents or policies, and the model retrieves the information needed to answer the question.
  • Allianz Bank Italy has used RAG to optimize knowledge-based processes for its financial consultants.
  • NatWest has an assistant called Marge, which can access real-time data added through content updates or customer interactions.
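
As a concrete illustration of the retrieval loop described above, here is a minimal Python sketch, assuming the open-source sentence-transformers library; the documents, chunk size, model name, and prompt template are illustrative placeholders, not a reference implementation.

```python
# Minimal RAG sketch: chunk documents, embed them, and retrieve the most
# similar chunks for a query to build the LLM prompt. Illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 500) -> list[str]:
    """Partition a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# 1. Build the knowledge base from internal documents (hypothetical content).
documents = ["...product manual text...", "...customer service guide..."]
chunks = [c for doc in documents for c in chunk(doc)]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)  # the "vector database"

# 2. At query time, embed the query and run a vector similarity search.
query = "What documents are required to open a business account?"
query_vector = model.encode([query], normalize_embeddings=True)[0]
scores = chunk_vectors @ query_vector            # cosine similarity (unit vectors)
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:3]]

# 3. Prepend the retrieved context to the user query for the LLM.
prompt = "Answer using this context:\n" + "\n---\n".join(top_chunks) + f"\n\nQuestion: {query}"
```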

However, there are challenges. RAG implementations are complex and can become rigid and brittle, so they require significant maintenance to stay functional as input data and usage patterns change. Additionally, the substantial expansion of the input window by Gemini Pro 1.5 (and soon by its competitors) allows for much longer context and could make RAG less critical. An AI system that can ingest and reference several large documents, such as user manuals, internal norms, or product descriptions, directly in its input window could lead some users to perceive RAG as unnecessary.

Both providers and users of RAG are now rethinking its role in this new context. As businesses increasingly rely on LLMs for complex tasks that require synthesizing information from various documents, each with different sources and update frequencies, RAG’s functionality must evolve accordingly. For example:

  • Integration with advanced machine learning algorithms that specialize in understanding context and relevance.
  • Semantic search and enhanced language processing techniques, with more sophisticated query parsing and better intent recognition.
  • Dynamic data source management, with real-time updating of data sources.
  • Secure data retrieval and processing pipelines that protect sensitive information and deliver it only to approved recipients (the last two points are sketched after this list).
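
To make the last two points concrete, here is a hypothetical Python sketch of metadata-driven source management and access control layered on top of a vector search; every name in it (Chunk, needs_refresh, allowed_roles) is an assumption for illustration, not an established API.

```python
# Hypothetical sketch: each chunk carries provenance metadata, stale sources
# are flagged for re-embedding, and retrieval filters results by the
# caller's entitlements before anything reaches the LLM.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Chunk:
    text: str
    source: str             # e.g. "risk-policy-repo"
    updated: datetime        # when the source was last embedded
    allowed_roles: set[str]  # who may see this content

def needs_refresh(chunk: Chunk, max_age: timedelta = timedelta(days=1)) -> bool:
    """Flag chunks whose source data may be out of date."""
    return datetime.utcnow() - chunk.updated > max_age

def authorized(chunk: Chunk, user_roles: set[str]) -> bool:
    """Only release chunks the requesting user is entitled to see."""
    return bool(chunk.allowed_roles & user_roles)

def retrieve(candidates: list[Chunk], user_roles: set[str]) -> list[Chunk]:
    # 'candidates' would come from the vector similarity search; here we
    # apply the security filter and drop stale content before prompting.
    return [c for c in candidates if authorized(c, user_roles) and not needs_refresh(c)]
```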

RAG’s chief advantage is that it is adept at managing an entire body of banking or insurance knowledge, making it an ideal solution for financial institutions with extensive documentation and databases. It integrates seamlessly with search capabilities, enabling the system to pinpoint the most relevant information in a vast dataset. Additionally, RAG can tailor responses to the needs of each bank process and department, drawing on highly specialized and frequently updated internal information.

  • The ability to cache information in RAG systems streamlines machine learning operations and minimizes the computational load (a caching sketch follows this list).
  • RAG can also respond faster, since it optimizes data retrieval and reduces processing time, whereas very long context windows can introduce latency because the model must process far more input.
  • Once implemented, operational running costs can be lower than those of long-context-window systems.
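
A minimal sketch of what such caching could look like, assuming a simple keyed store with a time-to-live so that policy updates still surface; the key scheme, TTL, and retrieve_fn hook are illustrative assumptions.

```python
# Illustrative caching layer for a RAG system: repeated queries reuse
# previously retrieved context instead of re-running the embedding model
# and vector search on every request.
import hashlib
import time

CACHE: dict[str, tuple[float, list[str]]] = {}
TTL_SECONDS = 3600  # expire entries hourly so document updates still surface

def cache_key(query: str) -> str:
    """Normalize the query so trivially different phrasings share an entry."""
    return hashlib.sha256(query.strip().lower().encode()).hexdigest()

def cached_retrieve(query: str, retrieve_fn) -> list[str]:
    key = cache_key(query)
    hit = CACHE.get(key)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]                    # cache hit: skip embedding + search
    chunks = retrieve_fn(query)          # cache miss: full retrieval pass
    CACHE[key] = (time.time(), chunks)
    return chunks
```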

The way banks use RAG will change in the near term due to competition and technology dynamics. One driver is improvements in Transformer models: Google, for example, may be drawing on approaches such as RETRO (Retrieval-Enhanced Transformer) or RARR (Retrofit Attribution using Research and Revision), which integrate retrieval mechanisms directly into the Transformer framework. These approaches enable precise, selective data extraction that increases the relevance of the generated content; in every case, the effectiveness of RAG models depends on the external knowledge bases they use.

The collaboration between RAG and a longer input window could lead to improved contextual understanding and more accurate summarization and analysis. LLM-based systems would then be able to leverage large, complex databases and improve the accuracy of their results. Even with an expanded input window, RAG could still help smaller, more efficient LLMs that do not require the extra sophistication but must consider multiple internal information sources. On the other hand, an LLM with a large input window alone, without RAG, would be suitable for analyzing specific, static, large documents (such as textbooks, manuals, or laws). A simple routing heuristic between the two modes is sketched below.
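
One plausible way to combine the approaches is to route by corpus size: pass the material whole when it fits the window, and fall back to retrieval when it does not. This is a hypothetical Python sketch; the characters-per-token heuristic, the window budget, and the retrieve_fn hook are assumptions, not measured values.

```python
# Hypothetical router combining a long-context LLM with RAG: small, static
# corpora go into the window whole; large or fast-changing corpora go
# through retrieval first.
def estimate_tokens(texts: list[str]) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return sum(len(t) for t in texts) // 4

def build_context(documents: list[str], query: str,
                  retrieve_fn, window_budget: int = 900_000) -> str:
    # window_budget is illustrative, e.g. most of a ~1M-token window.
    if estimate_tokens(documents) <= window_budget:
        # Small, static corpus: let the long context window do the work.
        context = documents
    else:
        # Large or fast-changing corpus: retrieve only the relevant chunks.
        context = retrieve_fn(query)
    return "\n---\n".join(context) + f"\n\nQuestion: {query}"
```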

However, for enterprise analysis of complex bodies of knowledge that are distributed across many documents and may change frequently, the combination of RAG with a long-input-window LLM is the most effective means of generating comprehensive and insightful results for users.

RAG will still be part of future LLM implementations in banks, but not in the same way as before. The keys to continued relevance are greater flexibility, resilience, ease of maintenance, and the ability to work alongside long context windows for maximum advantage.
