Building LLM applications: A comprehensive guide for senior IT professionals

Share it with your senior IT friends and colleagues
Reading Time: 5 minutes

Gone are the days when building ML/AI applications took months. Now you can build an LLM application in hours, if not minutes.

There are 3,000+ LLMs (at the time of writing this article) to choose from. Most of these models are already deployed, and you can simply call an API to access them.

And that’s why almost every organization, small or large, wants to build an application on top of an LLM.

In this article, let’s cover everything we need to know to build LLM applications.

Pricing

Let us start with the most important thing: money.

  • How much does it cost to make an LLM application?
  • How does the pricing work?
  • Is it affordable to build an LLM application?

If we want to access OpenAI’s models like GPT-3.5 Turbo Instruct or GPT-4o, we need credits on OpenAI. The pricing is based on the number of input tokens and the number of output (generated) tokens.

At the time of writing this article, this was the pricing for OpenAI’s models:

Tokens

Now this brings us to the term: tokens.

  • What exactly are tokens?
  • Is 1 word equal to 1 token?
  • How are tokens calculated?

Well, 1 word is not equal to 1 token. For English, on average, 4 tokens correspond to roughly 3 words.

So, basically, some words are subdivided into 2 or more tokens.

For example, in the sentences above, we can see that the word “Prompting” was divided into 3 tokens: “Prom”, “pt”, and “ing”.

While building any LLM application, it is very important to keep an eye on the number of input & output tokens as the pricing depends on them.
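As a quick illustration, the 4-tokens-per-3-words rule lets us sketch a rough cost estimator. The per-1K-token prices below are placeholders, not real figures — always check OpenAI’s current pricing page before budgeting.

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate for English: about 4 tokens per 3 words."""
    return round(len(text.split()) * 4 / 3)

def estimate_cost(input_text: str, expected_output_tokens: int,
                  price_in_per_1k: float = 0.0005,
                  price_out_per_1k: float = 0.0015) -> float:
    """Estimate request cost in USD. The default prices are illustrative
    placeholders -- substitute the current rates for your chosen model."""
    input_tokens = estimate_tokens(input_text)
    return (input_tokens / 1000) * price_in_per_1k \
         + (expected_output_tokens / 1000) * price_out_per_1k

print(estimate_tokens("one two three"))  # 3 words -> roughly 4 tokens
```

This is only a budgeting heuristic; for exact counts you would tokenize with the model’s actual tokenizer (e.g. the tiktoken library for OpenAI models).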

Roles in prompting

We discussed Prompt Engineering in detail in the previous course, including how assigning a role makes an LLM’s response more accurate.

Similarly, while designing LLM applications, we should consider roles.

We could assign these 3 major roles:

  • System
  • Assistant
  • User

As we can see in the above example, we have defined 3 roles:

System sets the behavior of the assistant.

Assistant is the chat model, which assists the user by generating answers per the user’s request.

User is your end user, client, etc.

A working example can be seen in the image below.
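The three roles map directly onto the messages list of a chat-completion request. Here is a minimal sketch (store name and model are illustrative; the commented-out call assumes the `openai` Python package and an `OPENAI_API_KEY` in your environment):

```python
# The conversation so far, expressed with the three roles.
# Earlier assistant turns can be included to give the model chat history.
messages = [
    {"role": "system",
     "content": "You are a concise assistant for an online electronics store."},
    {"role": "user", "content": "Do you sell laptops?"},
    {"role": "assistant", "content": "Yes, we carry several laptop models."},
    {"role": "user", "content": "Which one is the cheapest?"},
]

# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(response.choices[0].message.content)  # the assistant's next reply
```

The system message is sent once to set behavior, while user and assistant messages accumulate as the conversation grows.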

Building LLM applications

Some of the practical applications of LLMs in business are:

1. Call center assistants

2. Intelligent customer self-support

3. IT support management

4. Assisting employees and HR professionals

5. RAG applications for legal professionals

6. Virtual AI teachers

…and many more.

What can developers do with LLM applications that non-developers (or regular ChatGPT users) cannot?

  • Classification 
  • Summarizing at scale
  • Translation at scale

Classification 

Classification is a very useful LLM use case. We can build a customer support application in which the LLM classifies each user request/complaint and routes it to the right department.

The department representative, or another LLM application, can then provide an answer/solution to the user.

In the above example, we have defined 4 major categories (Billing, Technical Support, Account Management, and General Inquiry), each with further secondary categories.

When we get a query from the user, we let the LLM decide on the primary and secondary categories.

Based on the category, we can forward the query to the concerned department/ person.
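A minimal sketch of this routing flow is shown below. The secondary-category names are invented for illustration, and `route` only parses the JSON reply the model is instructed to return — the actual API call is omitted:

```python
import json

# Primary categories from the article; secondary categories are
# hypothetical examples for illustration.
CATEGORIES = {
    "Billing": ["Dispute a charge", "Update payment method"],
    "Technical Support": ["General troubleshooting", "Device compatibility"],
    "Account Management": ["Password reset", "Close account"],
    "General Inquiry": ["Product information", "Pricing"],
}

def build_classification_prompt(query: str) -> list:
    """Build the messages list asking the LLM to classify a query."""
    system = (
        "Classify the customer query into a primary and secondary category.\n"
        f"Categories: {json.dumps(CATEGORIES)}\n"
        'Respond only with JSON: {"primary": "...", "secondary": "..."}'
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": query}]

def route(llm_reply: str) -> str:
    """Parse the model's JSON reply and pick the department to forward to."""
    return json.loads(llm_reply)["primary"]

# A reply an LLM might return for "I was charged twice this month":
print(route('{"primary": "Billing", "secondary": "Dispute a charge"}'))
```

Asking for strict JSON output keeps the downstream routing code simple and deterministic.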

Summarizing at scale

Imagine you own an e-commerce store and you have to summarize hundreds of reviews at once. Can you do it using ChatGPT?

No, you cannot, right?

If you use ChatGPT for summarization at scale, you have to paste one review at a time, which would be a very time-consuming process.

As a developer, you can instead build a small LLM application that loops over all the reviews with a for loop.
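A minimal sketch of that loop is below. To keep it self-contained, the LLM call is abstracted as a `complete` function you supply; the commented-out section shows one way to back it with the OpenAI client (model name assumed):

```python
from typing import Callable, List

def summarize_reviews(reviews: List[str],
                      complete: Callable[[str], str]) -> List[str]:
    """Summarize each review in a for loop. `complete` is any function
    that sends a prompt to an LLM and returns the generated text."""
    summaries = []
    for review in reviews:
        prompt = (f"Summarize the following product review "
                  f"in at most 20 words:\n\n{review}")
        summaries.append(complete(prompt))
    return summaries

# Backing `complete` with the OpenAI client (assumed setup):
# from openai import OpenAI
# client = OpenAI()
# def complete(prompt):
#     resp = client.chat.completions.create(
#         model="gpt-4o", messages=[{"role": "user", "content": prompt}])
#     return resp.choices[0].message.content
```

Passing the completion function in makes the loop easy to test with a stub before spending any API credits.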

Translation at scale

Like summarization, we can do translation at scale by building an LLM application.
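Only the prompt changes; the loop is the same. A short sketch, again with the LLM call abstracted as a `complete` function:

```python
from typing import Callable, List

def translate_reviews(reviews: List[str], target_language: str,
                      complete: Callable[[str], str]) -> List[str]:
    """Translate many texts in one pass. `complete` is any function that
    sends a prompt to an LLM and returns the generated text."""
    return [complete(f"Translate this review into {target_language}:\n\n{r}")
            for r in reviews]
```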

Inner Monologue

Inner Monologue is a technique in which the model is prompted to reason through a query in explicit steps, so we can see how it is thinking. This gives us an idea of how the model arrived at its response.

In the above example, a user asked: by how much is the BlueWave Chromebook more expensive than the TechPro Desktop?

We can then inspect how the model reasoned its way to the conclusion.
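One common way to implement this is to ask the model to separate its reasoning steps with a delimiter, so the full chain is available for inspection while the end user is shown only the final step. The delimiter convention and the mock model output below (including the invented prices) are illustrative, not fixed API behavior:

```python
DELIMITER = "####"

SYSTEM_PROMPT = (
    "Answer the customer's question step by step, separating each "
    f"reasoning step with {DELIMITER}. Put only the final, "
    "customer-facing answer in the last step."
)

def final_answer(model_output: str) -> str:
    """Keep the inner monologue for debugging; return only the last step."""
    return model_output.split(DELIMITER)[-1].strip()

# A mock model output (prices invented for illustration):
raw = ("Step 1: find both prices: Chromebook $249.99, Desktop $199.99 #### "
       "Step 2: 249.99 - 199.99 = 50.00 #### "
       "The BlueWave Chromebook is $50.00 more expensive than the "
       "TechPro Desktop.")
print(final_answer(raw))
```

Logging the full `raw` output while returning only `final_answer(raw)` gives you the best of both: transparency for developers, a clean answer for users.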

Conclusion

LLMs offer one use case or another for almost every industry.

And now, with LLMs gaining multimodal capabilities, the use cases have increased further. It is an exciting time to use LLMs and build applications on top of them.

We can either use OpenAI’s models via an API or access open-source models hosted on Hugging Face.

With 3000+ LLM options available, let’s build and solve problems for users.

Tailored AI + LLM Coaching for Senior IT Professionals

In case you are looking to learn AI + Gen AI in an instructor-led live class environment, check out these dedicated courses for senior IT professionals here

Pricing for AI courses for senior IT professionals – https://www.aimletc.com/ai-ml-etc-course-offerings-pricing/

My name is Nikhilesh, and if you have any feedback/suggestions on this article, please feel free to connect with me – https://www.linkedin.com/in/nikhileshtayal/

Disclaimer – The images are taken from DeepLearning.AI’s course and are used here for educational purposes only. No copyright infringement is intended. If any part of this content belongs to you or someone you know, please contact us and we will give you credit or remove it.
