How to access open-source LLMs from Hugging Face on Colab

Share it with your senior IT friends and colleagues
Reading Time: 2 minutes

Namaste and Welcome to Build It Yourself.

This tutorial shows how to access the latest open-source large language models, like Gemma, Llama 2, and Mixtral, from Hugging Face using LangChain on Google Colab.

If you are an entrepreneur or a senior professional (15+ years of experience) looking to learn AI + LLM in simple language, check out the courses and other details – https://www.aimletc.com/online-instructor-led-ai-llm-coaching-for-it-technical-professionals/

Why is this article needed?

In the previous article, we learnt how to use OpenAI’s models. OpenAI’s latest models are proprietary, and OpenAI charges money to access them.

However, there are many open-source large language models from Google, Meta, Mistral, and others that can be used instead.

These models are hosted on Hugging Face, and we can access them for free.

So, in this article, we will learn how to use them with very simple code.

We will use this Google Colab Notebook – https://github.com/tayaln/Calling-opensource-LLM—Huggingface

Let us dive into it.

Prerequisites

– An Open Mind to learn new things

– A Hugging Face account

– A Hugging Face access token

Access open-sourced LLMs from Huggingface on Google Colab

Step 1 – Install LangChain

LangChain is an open-source development framework. It can help us access almost any open-source LLM.

Step 2 – Install langchain-community

As per PyPI:

“LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application.”
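In a Colab notebook, both installs can be done in a single cell (in Colab, prefix each command with `!` so it runs as a shell command; the `-q` flag is just an optional quiet mode):

```shell
# Install LangChain and the community integrations package from PyPI.
pip install -q langchain langchain-community
```
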

Step 3 – Get your Hugging Face token and add it to the notebook.
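One simple way to add the token is through an environment variable; `HUGGINGFACEHUB_API_TOKEN` is the variable name LangChain’s Hugging Face integration looks for, and the placeholder value below is ours – replace it with your real token:

```python
import os

# Create a token at https://huggingface.co/settings/tokens and paste it here.
# LangChain's Hugging Face integration reads this environment variable.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"  # placeholder
```

In a shared notebook, prefer Colab’s Secrets panel or `getpass` over pasting the token in plain text.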

Step 4a – Add the model you want to access

Step 4b – Set values for parameters like temperature and max_length

Temperature decides how creative you want the model to be. The higher the temperature, the higher the randomness (creativity).

Max_length – As the name suggests, it is the maximum number of tokens you want the model to generate.
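Putting steps 4a and 4b together, here is a minimal sketch, assuming langchain-community is installed and your token is set; `google/flan-t5-large` is just one example model id, not the only choice:

```python
from langchain_community.llms import HuggingFaceHub

# repo_id is the model's id on the Hugging Face Hub; swap in Gemma, Llama 2,
# Mixtral, etc. (gated models need access approval on their model page first).
llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",
    model_kwargs={
        "temperature": 0.7,   # higher = more random / creative output
        "max_length": 128,    # cap on the number of generated tokens
    },
)
```
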

Step 5 – Use the predict function to generate a response from the LLM.
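End to end, the notebook boils down to a few lines. This is a sketch, assuming the packages are installed, you have a valid token, and `google/flan-t5-large` is used as an example model id:

```python
import os
from langchain_community.llms import HuggingFaceHub

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"  # your real token

llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",  # any Hub model id you can access
    model_kwargs={"temperature": 0.7, "max_length": 128},
)

# predict() sends the prompt to the hosted model and returns the generated text.
response = llm.predict("What is the capital of India?")
print(response)
```
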

Dedicated AI + LLM Coaching for Senior IT Professionals

In case you are looking to learn AI + Gen AI in an instructor-led live class environment, check out these dedicated courses for senior IT professionals here

Pricing for AI courses for senior IT professionals – https://www.aimletc.com/ai-ml-etc-course-offerings-pricing/

My name is Nikhilesh, and if you have any feedback/suggestions on this article, please feel free to connect with me – https://www.linkedin.com/in/nikhileshtayal/

Happy learning!

