
 Busting common Myths in developing bespoke LLMs

Fri Feb 09 2024

AI | ML | LLM | MYTHS


The advent of the LLM revolution marks a transformative era, empowering Machine Learning models to elevate businesses to heights previously unattainable. This breakthrough stems from the innovative fusion of human language interfaces with other advanced technologies such as programming languages and agents, unlocking possibilities that were once beyond reach.

Anyone who has played around with platforms like ChatGPT or Claude will come up with creative ways to accelerate solutions to their business problems using LLMs.

Navigating the intricate landscape of business problem-solving, we frequently encounter pervasive myths surrounding the efficacy of LLMs. This blog aims to transcend these misconceptions, offering a deeper exploration into how LLMs can indeed be transformative, unlocking new horizons in business innovation and solution crafting.

Substantial solutions demand the utilisation of ChatGPT

Need not be!

We’ve had great breakthroughs on chat-based solutions for some of our customers’ problems using modest models. Successful companies like Tabnine were able to capture the market with models no larger than 3B parameters, per a claim by Tabnine’s CEO.

So start small! We don’t need a space-rocket for everything!

Fine-tuning needs loads of scripting

Well, this was the case for LLMs just a few months back!

Based on our needs and capacity, we can choose the best fit from the following:

  • Libraries: Lit-GPT, Ludwig, etc., which let you train open-source models with just a couple of commands.
  • Open-source solutions: H2O LLM Studio, which eases fine-tuning LLMs through a UI-based interface.
  • Hosted solutions: Lightning AI, from the creators of Lit-GPT.

Leveraging any one of these solutions gives us a jump-start when it comes to fine-tuning our models.

Fine-tuning needs a huge amount of resources

Depends on our use case!

For the majority of specific business cases, models under 13B parameters will serve us well. By leveraging techniques like LoRA, we can fine-tune such models efficiently.
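To see why LoRA is so much cheaper, it helps to do the arithmetic. A minimal sketch, using illustrative layer dimensions typical of a ~7B model (the sizes and rank here are assumptions, not measurements): full fine-tuning updates an entire d × k weight matrix, while LoRA freezes it and trains only two low-rank factors B (d × r) and A (r × k).

```python
# Back-of-the-envelope comparison of trainable parameters:
# full fine-tuning updates the whole weight matrix W (d x k),
# while LoRA trains only two low-rank factors B (d x r) and A (r x k),
# keeping W frozen and adding B @ A at forward time.

def full_ft_params(d: int, k: int) -> int:
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    return r * (d + k)

# A single 4096 x 4096 projection (illustrative of a ~7B model layer):
d = k = 4096
r = 8  # a commonly used LoRA rank

full = full_ft_params(d, k)   # 16,777,216 trainable parameters
lora = lora_params(d, k, r)   # 65,536 trainable parameters
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

For this single layer, LoRA trains 256× fewer parameters, which is where most of the memory and optimizer-state savings come from.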

When we have hardware constraints, we can rely on optimization techniques like micro-batching (gradient accumulation). Before choosing micro-batching, it's worth doing a quick calculation on the resource-to-time trade-off, as techniques like these make fine-tuning take longer. Modest hardware may cost less up front, but the extra time spent fine-tuning on it can end up costing more.
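The key property of micro-batching is that it trades memory for time, not for accuracy. A minimal NumPy sketch (using a toy mean-squared-error gradient, not a real LLM) showing that gradients accumulated over micro-batches reproduce the full-batch gradient exactly:

```python
import numpy as np

# Micro-batching (gradient accumulation): summing per-micro-batch
# gradients of a mean loss, with the right rescaling, reproduces the
# full-batch gradient exactly -- we pay in extra steps, not accuracy.

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # full batch of 32 toy examples
y = rng.normal(size=32)
w = rng.normal(size=4)

def grad(Xb, yb, w):
    # gradient of 0.5 * mean((Xb @ w - yb)^2) with respect to w
    return Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = grad(X, y, w)

# the same gradient, accumulated over 4 micro-batches of 8
acc = np.zeros_like(w)
for i in range(0, 32, 8):
    acc += grad(X[i:i+8], y[i:i+8], w) * 8  # un-average per micro-batch...
acc /= 32                                    # ...then re-average over the full batch

print(np.allclose(full_grad, acc))  # True
```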

While fine-tuning, we also discovered that it's better to avoid quantization and reserve it for inference.
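One intuition for this: quantization is a lossy round-trip, and the rounding error it introduces is tolerable for a one-off forward pass at inference time, but repeatedly pushing gradient updates through it during training compounds the loss. A minimal sketch of a symmetric int8 round-trip on a toy weight tensor (the tensor and scale scheme are illustrative, not any particular library's implementation):

```python
import numpy as np

# Minimal symmetric int8 quantization round-trip. The reconstruction
# error is bounded by half a quantization step -- fine for inference,
# but a lossy channel to train through.

rng = np.random.default_rng(42)
w = rng.normal(scale=0.02, size=1000).astype(np.float32)  # toy weight tensor

scale = np.abs(w).max() / 127.0                       # map the widest weight to +/-127
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale                  # dequantize

err = np.abs(w - w_hat).max()
print(f"max abs reconstruction error: {err:.2e}")     # nonzero, bounded by scale / 2
```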

A full fine-tune will be more efficient than LoRA

Certainly not true!

In many cases, LoRA can be almost as efficient as a full fine-tune.

If the results with LoRA are not good, start playing around with the hyper-params. Try different base models before experimenting with a full fine-tune.
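In practice, "playing around with hyper-params" can be as simple as a small grid sweep. A hedged sketch of that loop: `fine_tune_and_eval` is a hypothetical stand-in for your actual LoRA training + evaluation run, stubbed out here with a fake score so the sweep logic itself is runnable.

```python
from itertools import product

def fine_tune_and_eval(rank: int, lr: float, dropout: float) -> float:
    # Hypothetical stub returning a fake eval score; replace with a
    # real LoRA fine-tune + evaluation of the resulting model.
    return 1.0 / (1 + abs(rank - 16)) - abs(lr - 2e-4) - dropout * 0.1

# Illustrative search space; common LoRA ranks and learning rates.
grid = {
    "rank": [8, 16, 32],
    "lr": [1e-4, 2e-4, 3e-4],
    "dropout": [0.0, 0.05],
}

best_score, best_cfg = float("-inf"), None
for rank, lr, dropout in product(grid["rank"], grid["lr"], grid["dropout"]):
    score = fine_tune_and_eval(rank, lr, dropout)
    if score > best_score:
        best_score, best_cfg = score, {"rank": rank, "lr": lr, "dropout": dropout}

print(best_cfg)  # the winning configuration under the stubbed score
```

Since each LoRA run is cheap, sweeping a handful of configurations like this is usually far less effort than standing up a full fine-tune.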

We should use one of the standard eval metrics

Think deep!

Eugene’s blog section on LLM evals is one of the better summaries of standard LLM eval techniques. So it's easy to think you need one of the metrics mentioned there (though the blog itself encourages not blindly going with off-the-shelf evals).

When it comes to evals, it's important to apply first-principles thinking when arriving at the evaluation metrics for your use case. Even if you end up choosing off-the-shelf evals, choose them deliberately.
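As one illustration of a first-principles metric: suppose (hypothetically) your bespoke model is a support bot whose answers must contain certain required facts. Rather than a generic benchmark, a simple keyword-coverage score measures exactly that. The eval set below is made up for the sketch:

```python
# A first-principles eval sketch: score exactly what your use case
# needs. Here, each response is scored by the fraction of required
# keywords it mentions.

def keyword_coverage(response: str, required: list[str]) -> float:
    response = response.lower()
    hits = sum(1 for kw in required if kw.lower() in response)
    return hits / len(required)

# Hypothetical eval set: (model output, facts the answer must contain)
eval_set = [
    ("Refunds are processed within 5 business days via the original payment method.",
     ["refund", "5 business days", "original payment method"]),
    ("You can reset your password from the settings page.",
     ["reset", "settings", "email link"]),
]

scores = [keyword_coverage(resp, req) for resp, req in eval_set]
print(f"mean coverage: {sum(scores) / len(scores):.2f}")
```

The point is not this particular metric but the habit: derive the metric from what "good" means for your users, then check whether any off-the-shelf eval happens to capture it.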

You need a full-blown MLOps platform to start with

Think twice before making this call!

MLOps platforms are indeed needed when you productionize the model and systems start using it. But you might not need a full-blown MLOps platform at the early stages of experimenting with models.

Simple version-control tools like GitHub (backed by Git LFS) can serve as your model registry (the part of an MLOps platform that stores and versions models). Also remember to track important aspects like hyper-params, the prompt template, eval metrics, etc. Not to mention that when you use LoRA, the model artifacts you store are considerably smaller.
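A minimal sketch of what that lightweight registry can look like: a run directory (committed to GitHub, with large files tracked via Git LFS) holding the adapter weights plus a metadata file recording the hyper-params, prompt template, and eval metrics. The directory layout, field names, and model name below are illustrative assumptions, not a standard; a temporary directory stands in for your git checkout so the sketch is runnable.

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Illustrative run metadata to version alongside the adapter weights.
run_metadata = {
    "base_model": "some-open-model-3b",  # hypothetical model name
    "adapter": "lora",
    "hyper_params": {"rank": 16, "lr": 2e-4, "epochs": 3},
    "prompt_template": "### Instruction:\n{instruction}\n### Response:\n",
    "eval_metrics": {"keyword_coverage": 0.83},
}

with TemporaryDirectory() as repo:  # stand-in for your git checkout
    run_dir = Path(repo) / "runs" / "2024-02-09-lora-r16"
    run_dir.mkdir(parents=True)
    (run_dir / "metadata.json").write_text(json.dumps(run_metadata, indent=2))
    # adapter weights (e.g. adapter_model.bin) would be written here too,
    # with large files tracked via `git lfs track "runs/**/*.bin"`

    loaded = json.loads((run_dir / "metadata.json").read_text())
    print(loaded["hyper_params"]["rank"])  # metadata round-trips cleanly
```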

Once you get to the stage of productionizing, a quick script can migrate the data from GitHub to an MLOps platform.

Closing Notes

We have addressed several current prevalent myths in the development of bespoke LLMs and it's important to recognize the dynamic nature of the AI and technology landscape. Innovations and methodologies are evolving rapidly, introducing new possibilities and debunking outdated beliefs. It is crucial for data scientists and businesses alike to stay informed and adaptable, keeping an eye on the latest advancements and resources. This openness will ensure that we can effectively leverage LLMs to their fullest potential, navigating the challenges and opportunities that lie ahead in this exciting field.

Feel free to reach out to us for your bespoke LLM developments!

We would love to hear from you! Reach us @

info@techconative.com


More than software development, our product engineering services go beyond the backlog and emphasize best outcomes and experiences.