LangChain: Everything you should know as a working IT professional.

Share it with your senior IT friends and colleagues

LangChain is an open-source development framework for building LLM applications.

As we know, LLM-based applications are built on top of an LLM. However, the LLM itself is only one piece; we need many more things to build a complete application.

Once we have identified which LLM to use, we still have to make many other decisions. Many gaps need to be filled to make a practical LLM-based application.

And that’s where LangChain helps us.

As LangChain has evolved, it has grown into an ecosystem of its own, with many components and tools.

Let us start by talking about LangChain’s components.

There are 6 major LangChain components:

1. Models

2. Prompt Templates

3. Output Parsers

4. Memory

5. Chains

6. Agents

1. Models

LangChain provides integrations with a wide range of open-source models through Hugging Face. We can also access OpenAI’s models using LangChain.
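Here is a minimal sketch using the classic langchain imports (newer releases move these classes into the langchain-openai and langchain-community packages; the model names are illustrative):

```python
from langchain.chat_models import ChatOpenAI   # OpenAI chat models
from langchain.llms import HuggingFaceHub      # models hosted on Hugging Face Hub

# Both assume the relevant API keys are set as environment variables.
openai_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)
hf_llm = HuggingFaceHub(repo_id="google/flan-t5-large")
```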

2. Prompt Templates

The prompt template is a small but very useful feature.

To understand it, let us compare the two scenarios: without a prompt template and with one.

Without Prompt Template

In the code below, we first define a customer email we received and a target style, and then combine both into a single prompt string.
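A minimal sketch (the email text and style are illustrative, adapted from the DeepLearning.AI course example):

```python
customer_email = """
Arrr, I be fuming that me blender lid flew off and splattered
me kitchen walls with smoothie! I need yer help right now, matey!
"""

style = "American English in a calm and respectful tone"

# The prompt is assembled by hand with an f-string.
prompt = f"""Translate the text delimited by <text> tags \
into a style that is {style}.
<text>{customer_email}</text>
"""
```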

If there are more styles, we need to write the whole prompt again for each one.

The problem is that prompts are often very long, and writing them again and again wastes space and invites copy-paste errors.

We can solve this problem using LangChain’s prompt templates.

With Prompt Template

In the example below, we define customer_message using a prompt template.
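A sketch, reusing the customer_email and style defined above (classic langchain imports):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(temperature=0.0)

# The reusable template: placeholders instead of hard-coded values.
template_string = """Translate the text delimited by <text> tags \
into a style that is {style}.
<text>{text}</text>
"""
prompt_template = ChatPromptTemplate.from_template(template_string)

# The same template is filled with concrete values at call time.
customer_message = prompt_template.format_messages(
    style=style,
    text=customer_email,
)
response = chat(customer_message)
```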

Now if we have to handle more messages, such as a service_reply in a different style, we do not have to create another prompt.

We can simply create service_messages by reusing the same prompt template.
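A sketch (the reply text and style are again illustrative):

```python
service_reply = """Hey there customer, the warranty does not cover \
cleaning expenses for your kitchen."""

service_style = "a polite tone that speaks in English Pirate"

# Same template, different values -- no new prompt needed.
service_messages = prompt_template.format_messages(
    style=service_style,
    text=service_reply,
)
response = chat(service_messages)
```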

This is a small but useful feature: it makes prompts reproducible and lets us reuse them wherever we can.

3. Output Parsers

Another small but important LangChain feature is output parsers.

The typical use case is when we want the LLM to generate output in JSON format.

We know an LLM can generate output in JSON format, but sometimes it hallucinates and does not produce the required structure.

Suppose we have a customer review as text, and we want the LLM to return a JSON output with fields such as whether the item was a gift and how many days delivery took.

If we simply ask for JSON in the prompt, we do get an answer that looks right. However, the response is just a string. So when we try to access a value using its key, it gives an error.

To solve this problem, we can use LangChain’s Structured Output Parser.
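A condensed sketch (the field names and review text are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

# Describe each field we want back.
gift_schema = ResponseSchema(
    name="gift",
    description="Was the item purchased as a gift? Answer True or False.",
)
delivery_days_schema = ResponseSchema(
    name="delivery_days",
    description="How many days did delivery take? Answer -1 if unknown.",
)
output_parser = StructuredOutputParser.from_response_schemas(
    [gift_schema, delivery_days_schema]
)

# The parser generates instructions describing the exact JSON to emit.
format_instructions = output_parser.get_format_instructions()

prompt = ChatPromptTemplate.from_template(
    "Extract the fields from the review below.\n"
    "{review}\n\n{format_instructions}"
)
chat = ChatOpenAI(temperature=0.0)
response = chat(prompt.format_messages(
    review=review_text,  # review_text: the customer review (illustrative)
    format_instructions=format_instructions,
))

# parse() turns the raw text into a real Python dict.
output_dict = output_parser.parse(response.content)
print(output_dict["delivery_days"])   # key access now works
```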

Now let us check the results: when we try to access a value by its key, it works.

LangChain’s output parser helps ensure that we get the results in the desired format.

4. Memory

Before we tell you why we need this component, can you pause for a moment and try to answer the question yourself?

We need Memory when we design a chatbot.

If we want to use an LLM as a chatbot, it needs to know the previous chat history as well, and that’s where Memory comes into the picture.

So Memory enables LLMs to remember previous conversations.

In the sketch below, the LLM can answer the third question because it has access to the previous two exchanges.
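A minimal sketch using the classic langchain imports:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0.0)
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)

conversation.predict(input="Hi, my name is Nikhilesh.")
conversation.predict(input="What is 1 + 1?")
conversation.predict(input="What is my name?")  # answered from memory
```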

And this is what is happening behind the scenes: the conversation so far is stored in the memory buffer and sent back to the LLM with every new question.

There are 4 different kinds of memory that LangChain supports:

  1. Conversation buffer memory
  2. Conversation buffer window memory
  3. Conversation token buffer memory
  4. Conversation summary memory

a) Conversation buffer memory

Here the entire conversation is kept in memory, so the LLM has the full context and complete previous history. (This is the memory type used in the sketch above.)

However, this could cost us money, as the input token count keeps growing with every turn.

So, what’s the solution?

b) Conversation buffer window memory

Here we specify the number of conversation windows that are passed to the LLM as memory.

1 window = 1 input + 1 output

In the sketch below, we specify k=1, which means the LLM has access to the last exchange only.
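A sketch, reusing the llm and ConversationChain from the example above:

```python
from langchain.memory import ConversationBufferWindowMemory

# k=1 keeps only the most recent exchange (one input + one output).
memory = ConversationBufferWindowMemory(k=1)
conversation = ConversationChain(llm=llm, memory=memory)
```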

c) Conversation token buffer memory

Here, instead of specifying the number of windows, we specify the number of tokens.

In the sketch below, we set the maximum token limit to 50, so the LLM has access to roughly the last 50 tokens of the conversation only.
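Again a sketch along the same lines:

```python
from langchain.memory import ConversationTokenBufferMemory

# The memory needs the llm so it can count tokens with the right tokenizer.
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)
conversation = ConversationChain(llm=llm, memory=memory)
```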

d) Conversation summary memory

With conversation buffer window memory and conversation token buffer memory, we save on input tokens, but the LLM no longer sees the entire conversation.

When LLMs do not get the entire context, the performance could degrade.

So what is a solution that saves tokens while still giving the LLM the entire conversational context?

The solution is Conversation summary memory.

Here, instead of sending the entire conversation as buffer memory does, we summarize the conversation so far and send only the summary.

This way we save on input tokens while still giving the LLM the gist of the entire conversation.
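A sketch using the summary-buffer variant, which keeps the most recent turns verbatim and summarizes everything older (the token limit is illustrative):

```python
from langchain.memory import ConversationSummaryBufferMemory

# Older turns are condensed into a running summary written by the LLM itself.
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
conversation = ConversationChain(llm=llm, memory=memory)
```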

5. Chains

A very important component of LangChain is the “Chain”.

Consider a scenario where you have to perform several tasks, such as summarizing, translating, and writing a description.

How do you design a system that performs all these tasks, sequentially or non-linearly?

The solution is “Chain”.

There are mainly 3 types of chains that LangChain supports:

a) Simple Sequential Chain

b) Sequential Chain

c) Router Chain

a) Simple Sequential Chain

When we have to perform tasks in a sequential fashion, i.e. the output of the first task becomes the input of the second, we design a simple sequential chain.

For example, in the sketch below:

Chain 1 – We ask the LLM to come up with a brand name for a given product.

Chain 2 – We ask for a short description of the brand name generated by Chain 1.
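A minimal sketch using the classic LLMChain and SimpleSequentialChain APIs (the prompts are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0.9)

# Chain 1: product -> brand name
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name for a company that makes {product}?"
)
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# Chain 2: brand name -> short description
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20-word description for the company named {company_name}."
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)

# Chain 1's output is fed straight into chain 2.
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
overall_chain.run("queen-size bed sheet set")
```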

So, this is an example of a Simple sequential chain.

b) Sequential chain

In this type of chain, tasks still run in sequence, but the data flow is not strictly linear: the output of the first chain need not be the only input of the second, and a single chain can consume several earlier outputs.

For example: chain 1 and chain 3 both take the user input; the output of chain 1 becomes the input of chain 2; and the outputs of chain 2 and chain 3 together become the input of chain 4.

Check out the sketch below to understand the concept better.
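A condensed sketch of that flow (the prompts are illustrative; note how each output_key feeds the template variables of later chains):

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.prompts import ChatPromptTemplate

# Chains 1 and 3 take the user input; chain 2 consumes chain 1's
# output; chain 4 consumes the outputs of chains 2 and 3.
chain_one = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "Translate this review to English:\n{review}"),
    output_key="english_review",
)
chain_two = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "Summarize this review in one sentence:\n{english_review}"),
    output_key="summary",
)
chain_three = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "What language is this review written in?\n{review}"),
    output_key="language",
)
chain_four = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "Write a follow-up reply to this summary in {language}:\n{summary}"),
    output_key="followup",
)

overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["review"],
    output_variables=["english_review", "summary", "followup"],
    verbose=True,
)
```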

c) Router Chain

Here, instead of executing all the chains, the input is routed to only the required chain.

For example, in the sketch below we have 3 templates – Physics, Maths, and History – and, based on the question, the input is routed to the Physics template.

If the question does not belong to any of the 3 domains, the LLM uses its own general knowledge to provide the answer.
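A condensed sketch using MultiPromptChain from classic LangChain (the template text is abbreviated; the router LLM picks a destination based on the descriptions):

```python
from langchain.chains.router import MultiPromptChain

physics_template = "You are a physics professor. Answer concisely:\n{input}"
maths_template = "You are a mathematician. Answer step by step:\n{input}"
history_template = "You are a historian. Answer with context:\n{input}"

prompt_infos = [
    {"name": "physics", "description": "Good for physics questions",
     "prompt_template": physics_template},
    {"name": "maths", "description": "Good for maths questions",
     "prompt_template": maths_template},
    {"name": "history", "description": "Good for history questions",
     "prompt_template": history_template},
]

# The router LLM reads the descriptions and picks the right chain;
# anything unmatched falls through to a plain default chain.
chain = MultiPromptChain.from_prompts(llm, prompt_infos=prompt_infos, verbose=True)
chain.run("What is black body radiation?")   # routed to the physics template
```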

6. Agents

We have discussed Agents separately in a dedicated article – https://www.aimletc.com/what-are-llm-agents-why-are-everyone-talking-about-them/

AI agents are a combination of an LLM and code.

  • They have tools to access external applications, such as Google Search or a Python REPL (see the sketch after this list).
  • They can reason through a complex task and divide it into a number of simpler tasks.
  • They can self-reflect before giving answers to the user.
  • They can collaborate with other agents to perform a series of tasks.
  • They can even debate with each other.

LangChain also supports building AI agents with LangGraph.
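Here is a minimal sketch of tool use with the classic agents API (LangGraph has its own, newer API); the llm-math tool gives the agent a calculator:

```python
from langchain.agents import AgentType, initialize_agent, load_tools

tools = load_tools(["llm-math"], llm=llm)   # a calculator tool

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 25% of 312?")
```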

Apart from the above components, the LangChain team has built 2 important tools:

  • LangServe 
  • LangSmith

LangServe

LangServe helps developers deploy LangChain chains and applications as a REST API. It uses Pydantic for data validation.

A JavaScript client is also available in LangChain.js.
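A minimal sketch, assuming an existing chain object such as the overall_chain built earlier:

```python
from fastapi import FastAPI
from langserve import add_routes

app = FastAPI(title="My Chain Server")

# Expose the chain at /chain; LangServe generates the request and
# response schemas with Pydantic and adds a /chain/playground UI.
add_routes(app, overall_chain, path="/chain")

# Run with: uvicorn main:app --port 8000
```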

LangSmith

Problem – How to monitor latency, cost, and token usage for LLM applications?

Well, the solution is LangSmith.

LangSmith is an LLMOps platform that helps monitor input tokens, output tokens, user inputs, LLM outputs, latency, total cost, and much more.
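Enabling it is mostly configuration; a sketch using the standard environment variables (the project name is illustrative):

```python
import os

# Once tracing is on, every chain/agent run is logged to the
# LangSmith dashboard with tokens, latency, and cost.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-llm-app"
```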

Conclusion

To sum up LangChain’s utility:

With LangChain and the LangChain Expression Language, you build your prototype.

By adding LangServe, you get a production-ready endpoint and API.

And by using LangSmith, you can monitor your application.

In short, LangChain is a very useful framework that lets developers build LLM-based applications faster and more easily.

And they can then deploy and monitor those applications as well.

Tailored AI + LLM Coaching for working IT Professionals

If you are looking to learn AI + Gen AI in an instructor-led, live-class environment, check out these dedicated courses for senior IT professionals here

Pricing for AI courses for working IT professionals – https://www.aimletc.com/ai-ml-etc-course-offerings-pricing/

My name is Nikhilesh. If you have any feedback or suggestions on this article, please feel free to connect with me – https://www.linkedin.com/in/nikhileshtayal/

Disclaimer – The images are taken from DeepLearning.AI’s course. We are using them for educational purposes only; no copyright infringement is intended. If any part of the content belongs to you or someone you know, please contact us and we will give you credit or remove the content.
