3 min read
The impact of LLMs

I have worked with LLMs for some months now. These are my takeaways:

1. There is hype but the impact is real

LLMs are indeed very powerful, but they still have major limitations. They are a tool that increases human productivity, and that increase is only significant in some cases.

2. LLMs are a commodity

At the beginning there was GPT-3… Now many models exist and a handful are very powerful: so powerful that for most use cases you can swap one model for another and still get the job done. The performance of the top models is within the same order of magnitude.

For example, I have worked with Gemini 1.5 Pro and Claude 3.5 and found Claude noticeably more effective at migrating code for a piece of software infrastructure. Being stuck with Gemini, our team still managed to get a similar level of impact by tweaking the prompt and the source code provided to the LLM.
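To make "swappable" concrete, here is a minimal sketch of what that looked like in practice. The provider classes are hypothetical placeholders rather than real SDK calls; the point is that the application only depends on a tiny interface, so changing models mostly means changing one adapter and the prompt.

```python
# Minimal sketch: keep the model behind a thin interface so it can be swapped.
# GeminiModel and ClaudeModel are hypothetical placeholders, not real SDK calls.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class GeminiModel:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call the Gemini API here


class ClaudeModel:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call the Anthropic API here


def migrate_snippet(model: ChatModel, source: str) -> str:
    # The prompt and the source code passed in are what we ended up tuning,
    # not the application code around the model.
    return model.complete(f"Migrate this code to the new framework:\n\n{source}")
```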

3. RAG

For most B2B RAG implementations, I found that starting with a non-vectorized text search can be very effective: in a specific business domain, a keyword-based search is very good at picking up the right documents. In some cases you can even avoid RAG altogether by tagging documents and filtering on user context. LLMs with bigger context windows make such an approach easier.
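As a rough illustration, here is a minimal sketch of that idea. The document structure, tags, and scoring rule are assumptions made up for the example, not a specific product's API: filter on tags derived from the user's context, rank the remaining documents by keyword overlap, and paste the top ones into the prompt.

```python
# Minimal sketch of non-vectorized retrieval: filter by tags from the user
# context, then rank by keyword overlap with the query.
from dataclasses import dataclass, field


@dataclass
class Doc:
    title: str
    text: str
    tags: set[str] = field(default_factory=set)


def retrieve(query: str, docs: list[Doc], user_tags: set[str], k: int = 3) -> list[Doc]:
    # Filter first: keep only documents matching the user's context
    # (team, product, region...); fall back to everything if nothing matches.
    candidates = [d for d in docs if d.tags & user_tags] or docs

    # In a narrow business domain, users and documents share the same
    # vocabulary, so plain keyword overlap already ranks documents well.
    terms = set(query.lower().split())
    ranked = sorted(
        candidates,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


# The top-k documents are then pasted into the LLM prompt as context.
```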

4. LangChain

When in doubt, use LangChain. There are real issues with it: its APIs are inconsistent, the documentation isn't great, and the resulting code is ugly and full of annoying abstractions.

However, it is also a great tool when the team is new to LLMs and not yet sure what the final solution will look like or which model(s) to use. It is a rich library that is effective for building an MVP.
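For example, a first prototype can be as small as the sketch below. This assumes a recent LangChain version with the LCEL pipe syntax and the langchain-openai package; the model class and model name are just placeholders you can swap for another provider.

```python
# Minimal LangChain MVP sketch: prompt -> model -> string output.
# Assumes langchain-openai is installed and OPENAI_API_KEY is set;
# swap ChatOpenAI and the model name for another provider if needed.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions about our internal documentation."),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "How do we rotate API keys?"}))
```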

5. Tooling and consolidation

New methodologies and new vendors of LLM tools and LLM-based functionality appear frequently. This means LLMs will become easier and easier to use and their use will consolidate around higher-level abstractions that simplify powerful use cases. Azure AI Search is an example of such an abstraction.

6. AGI is like fusion reactors

My understanding is that there are serious hurdles on the road to AGI, and current LLMs are not close to the effectiveness required for computers to replace humans in most use cases.

I see it like fusion reactors: I will believe it is there when I see it.