On In-context learning and vector databases

In the midst of the buzz surrounding Large Language Models (LLMs), one term that has emerged is In-Context Learning (ICL). This article aims to provide a clear perspective on ICL, shedding light on its definition, its practical applications, its limitations, and its potential for the future.

What is in-context learning?

ICL, or In-Context Learning, is an approach to teaching pre-trained LLMs new information. This becomes particularly important when considering the staggering costs associated with training large language models.

For instance, GPT-4, a popular LLM, is said to have incurred a mind-boggling training cost of approximately $100 million. While this might be mere pocket change for well-funded companies such as OpenAI, backed by tech juggernaut Microsoft, it remains an insurmountable barrier for most organizations. Fortunately, ICL enables us to teach pre-trained LLMs new information. Who said you couldn’t teach an old dog new tricks?!

The ICL approach involves presenting the LLM with a collection of new information, followed by a prompt about that information.  This technique is particularly valuable for concepts previously unknown to the LLM. Let's look at a simple example. We first introduce the LLM to a set of category examples and subsequently prompt it to categorize an unfamiliar sample using the newly acquired category system.

PROMPT: car = category 1, bike = category 2, skateboarding = category 2, truck = category 1, cycling = category 2. Which category belongs to rollerblading?

Output: Rollerblading is in the same category as skateboarding which is in category 2.

LLM: MPT-30B hosted on Seaplane

To the trained eye, the split is between motorized and human-powered transport. While this is a completely made-up example, the LLM accurately picked the category without any retraining of the model.

You could make the case that the LLM already had some knowledge about rollerblading and skateboarding, causing it to group them together based on their shared characteristics instead of recognizing them as distinct forms of human-powered transportation. Nevertheless, it did manage to place them in the correct category.

While this is a simple example, the potential for extending ICL to a larger scale example should not be overlooked. For example, if your company uses an internal documentation system, such as Google Docs, Notion, Jira, or any other knowledge management platform, chances are you've experienced the phenomenon of these systems gradually evolving into information black holes sucking the life out of your productivity.

While that might sound a bit discouraging, with the help of an LLM, we can transform those information repositories into a ChatGPT-style platform grounded in your internal knowledge base.

By simply inputting all the information from the repository into the LLM (this is our in-context learning step), you can ask it any question you like and get meaningful answers.

For example, if you include all the documents as context, the LLM can easily address questions like, "What's our strategy for introducing new products?" or "When do we usually publish new blog posts?" It can answer any question you like as long as the answers live somewhere in that treasure trove of internal documentation.
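To make the idea concrete, here is a minimal sketch of this "stuff everything into the prompt" approach. The `complete()` helper and the document contents are placeholders for whatever LLM and repository you actually use; this illustrates the shape of the approach, not a production implementation.

```python
# Naive in-context learning: paste the entire knowledge repository into the prompt.
# `complete()` is a stand-in for whatever LLM completion call you use (hypothetical helper).
def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

documents = [
    "Product launch playbook: how we introduce new products...",
    "Editorial calendar: when we publish new blog posts...",
    # ...hundreds more internal documents...
]

def ask_naive(question: str) -> str:
    context = "\n\n".join(documents)
    prompt = f"Answer this question: {question}\n\nbased on this information:\n{context}"
    return complete(prompt)

# Example: ask_naive("When do we usually publish new blog posts?")
```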

However, this approach has two main problems, and the remainder of this blog will offer solutions to these common issues.

First of all, feeding the LLM all the information stored in your knowledge repository decreases the chances of it delivering accurate answers. To illustrate, if only about 1% or less of the documents you input are truly relevant, the chances of getting a wrong answer are significantly higher.

Secondly, most, if not all, LLMs are limited by their context window. This essentially means there's a cap on the amount of input they can handle. For instance, GPT-3.5 has a context window of 4,096 tokens, which translates to roughly 3,000 words. Although this might seem substantial, it's highly unlikely to be enough to accommodate your entire company's knowledge repository.
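If you want to see how quickly a repository blows past that limit, a rough token count is easy to do. The sketch below assumes the open-source tiktoken tokenizer; the exact count varies per model, but the order of magnitude is what matters.

```python
# Rough check of how quickly a knowledge repository exceeds a 4K-token context window.
# Uses the tiktoken tokenizer; the document list is illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models

documents = ["...your internal docs as plain text..."]
total_tokens = sum(len(enc.encode(doc)) for doc in documents)

CONTEXT_WINDOW = 4_096
print(f"{total_tokens} tokens vs. a {CONTEXT_WINDOW}-token window")
if total_tokens > CONTEXT_WINDOW:
    print("The full repository will not fit in a single prompt.")
```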

Luckily, there is a solution: vector databases!

Vector databases and ICL

ICL does come with its share of challenges, particularly when it comes to the context window restriction of LLMs. However, there's a potential solution in the form of Vector DBs. But before we dive into this solution, let's take a quick moment to understand what vector DBs are, how they function, and how we can leverage them for ICL purposes.

Vector DBs, as the name might suggest, store vectors; you know, those pesky things you had to learn about in your linear algebra class?

Putting humor aside, pairing a vector DB with vector embeddings (a method for converting text into vector form) gives us a great way to transform and store vector representations of our knowledge repository. You might be familiar with popular tools such as Pinecone, Qdrant, ChromaDB, etc. Seaplane also comes with a built-in vector DB.

By transforming our text into vectors and housing them within a vector DB, we can tap into a range of useful possibilities. For example, we can now employ search algorithms such as k-nearest neighbor (KNN) and other methods to run similarity searches between input queries and our documents.
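As a minimal sketch of what that looks like in practice, the snippet below assumes the open-source sentence-transformers library for embeddings; any embedding model works, and a real vector DB would replace the brute-force numpy search with an index.

```python
# Minimal similarity search: embed documents and a query, then rank by cosine similarity.
# Assumes the sentence-transformers library; a vector DB would index these vectors for you.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Product launch playbook for new products.",
    "Editorial calendar: blog publishing schedule.",
    "Expense policy for travel and equipment.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def knn(query: str, k: int = 2) -> list[str]:
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector   # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]    # indices of the k most similar documents
    return [documents[i] for i in top]

print(knn("When do we publish blog posts?"))
```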

This approach has the potential to tackle both issues ICL and LLMs have to overcome. Instead of overwhelming the LLM with our entire knowledge repository, we can now selectively feed it the most relevant documents; for example, a KNN search with K=7 returns only the seven most relevant docs. This resolves the context window problem and ensures that the LLM is exposed only to materials that are truly relevant.

An in-context learning pipeline utilizing a vector DB might look something like this.

In step one, the user inputs their query. In step two, the query is embedded using the same embedding model that was used to turn all the documents into vectors. In step three, a KNN search is run inside the vector DB to retrieve the K most similar documents. Finally, the search results plus the initial query are sent to the LLM with a prompt along the lines of:

Answer this question: [query] based on this information: [most relevant docs].
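Reusing the `knn()` helper and the `complete()` stand-in from the sketches above, the whole pipeline fits in a few lines; again, this is an illustration of the flow rather than a production implementation.

```python
# Retrieval-augmented ICL: embed the query, fetch the K most similar docs from the
# vector store, and send only those docs plus the original query to the LLM.
def answer(query: str, k: int = 7) -> str:
    relevant_docs = knn(query, k=k)               # steps 2-3: embed the query and run KNN
    context = "\n\n".join(relevant_docs)
    prompt = (
        f"Answer this question: {query}\n\n"
        f"based on this information:\n{context}"  # step 4: the prompt template from above
    )
    return complete(prompt)                       # only the relevant text reaches the LLM

# Example: answer("What's our strategy for introducing new products?")
```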

This approach has a couple of notable advantages. Most importantly, we send only the relevant text to the LLM for processing, reducing the need for very large context windows.

This method does run into a roadblock with certain open-source models, such as MPT-30B, which currently lack support for embeddings.

The good news is that there's a practical workaround. By storing your text embeddings in the vector DB alongside corresponding data objects containing the actual text snippets they represent, you can still use the vector store's similarity-search capabilities.

For these models, we can use any text embedding model we like; once we have found the most relevant docs, we send the raw text stored in the data objects to the LLM.
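A minimal in-memory version of that workaround might look like the following; a real vector DB stores the text payload alongside the vector for you, but the principle is the same.

```python
# Workaround for LLMs that lack their own embedding support: store each embedding
# next to a data object holding the raw text it represents, then send the retrieved
# raw text (not the vector) to the LLM. Illustrative in-memory version.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model works here
documents = ["...your internal docs as plain text..."]

records = [
    {"vector": model.encode(doc, normalize_embeddings=True), "text": doc}
    for doc in documents
]

def retrieve_text(query: str, k: int = 3) -> list[str]:
    query_vector = model.encode(query, normalize_embeddings=True)
    ranked = sorted(records, key=lambda r: float(r["vector"] @ query_vector), reverse=True)
    return [r["text"] for r in ranked[:k]]        # raw text snippets go into the LLM prompt
```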

Using vector DBs and any LLM, anyone can build their own ChatGPT on private data without spending millions on training models.

In-context learning is a great tool to build LLM-powered chat applications on your data sources as long as you have access to a vector database, a platform to create embeddings, and an LLM (which you might have guessed are all included in the Seaplane platform). Try it today by signing up for the Seaplane beta.

Does vector-db-powered in-context learning have a future?

As we mentioned earlier, a key motivation for using a vector DB in the context of ICL is to trim down the size of your input queries. However, if you've been following developments in the field of LLMs, you've likely noticed a continuous expansion of the context window of these models.

For instance, while GPT-3.5 started with a context window of 4,096 tokens upon its initial release, the latest iteration, GPT-4, supports 32,768 tokens. Even more impressively, there are already LLMs with larger context windows still, such as Anthropic's Claude 2, which supports an astonishing 100K-token context window.

Given that these context windows are growing at a near-exponential rate (a nod to Moore's law), you might naturally wonder whether a vector DB will still be needed for in-context learning much longer.

While it's challenging to predict the exact future capabilities of these models, we think that vector DBs will continue to play a significant role in the domain of in-context learning, and there are several reasons why.

For starters, commercial LLMs are likely to continue charging based on the number of tokens used, meaning it's in your wallet’s best interest to minimize the length of your queries. A simple similarity search is a valuable tool here: reducing the input prompt can make the difference between a $1K and a $10K monthly bill.
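As a back-of-the-envelope illustration (the per-token price and query volume below are assumptions for the sake of the example, not quotes from any provider), the arithmetic looks like this:

```python
# Illustrative cost arithmetic -- the price per token and query volume are assumptions.
PRICE_PER_1K_INPUT_TOKENS = 0.01   # hypothetical $ per 1,000 input tokens
QUERIES_PER_MONTH = 100_000

full_repo_prompt = 10_000          # tokens per query if you stuff everything into the prompt
retrieved_prompt = 1_000           # tokens per query if KNN keeps only the relevant docs

for name, tokens in [("full repository", full_repo_prompt), ("top-K retrieval", retrieved_prompt)]:
    monthly = QUERIES_PER_MONTH * tokens / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    print(f"{name}: ${monthly:,.0f} per month")
```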

Furthermore, even though providing a lot of context often leads to better LLM performance, there's a limit to its effectiveness. Overloading your LLM with excessive information can eventually lead to worse replies. Increasing the relevance of your input query using vector DBs will only strengthen the output. Remember the old saying: garbage in, garbage out. It still applies today!

About Seaplane

If you are excited to try out ICL on your own use case, sign up for the Seaplane beta today and get instant access to compute resources, LLMs, and a vector store: all the ingredients you need to deploy your first ICL-enabled LLM application.

