In today’s rapidly evolving technological landscape, Generative AI (GenAI) and Large Language Models (LLMs) have emerged as transformative forces, and InstructLab aims to simplify and accelerate their adoption. From chatbots that provide seamless customer service to content generation tools that fuel creativity, the applications of LLMs are vast. However, harnessing the full potential of these models requires navigating a complex landscape of implementation and lifecycle management challenges.
At Mobilise, we understand the intricacies involved in deploying and maintaining LLMs. That’s why we are thrilled to announce our new partnership with Red Hat, a leading provider of open-source solutions. As a Red Hat Advanced Partner – Solution Provider, we combine Red Hat’s expertise with our commitment to delivering innovative IT solutions. Together, we’re poised to empower businesses to seamlessly integrate LLMs into their operations.
The Challenges of LLM Implementation
Before delving into our solution, let’s explore the common hurdles organisations encounter when working with LLMs.
- Model Selection & Training: Choosing the right model architecture and training it on relevant data can be daunting. The process often requires significant computational resources and specialised expertise.
- Deployment Complexity: Deploying LLMs into production environments demands careful orchestration. Ensuring scalability, performance, and security can be a major undertaking.
- Ongoing Management: LLMs are not static entities. They require continuous monitoring, fine-tuning, and updates to stay relevant and effective.
InstructLab: Streamlining the LLM Lifecycle
InstructLab, inspired by IBM Research’s LAB method, enables the addition of new skills and knowledge to existing LLMs using significantly less human-generated data and computing resources than traditional retraining methods. This makes it possible for community contributions to continuously improve models, fostering a collaborative ecosystem.
How InstructLab Works
The LAB method comprises three key components:
- Taxonomy-driven data curation: Experts curate diverse training examples to seed new skills and knowledge into the model.
- Large-scale synthetic data generation: The model uses these seeds to generate additional examples, which are then automatically refined to ensure quality and safety.
- Fine-tuning the model’s behaviour: The model is retrained on the refined synthetic data, focusing first on knowledge acquisition and then on skill development.
This iterative process allows for regular model enhancements based on community contributions, creating a “tree of skills and knowledge” that benefits all users.
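The three steps above can be sketched as a simple loop in Python. This is an illustrative simplification of the LAB-style pipeline, not InstructLab’s actual implementation: the generator and quality filter below are placeholder functions standing in for the teacher model and the automated refinement stage.

```python
# Illustrative sketch of a LAB-style loop: curated seed examples drive
# synthetic data generation, the results are filtered for quality, and the
# surviving examples become fine-tuning data. All functions here are
# placeholders, not InstructLab APIs.

def generate_synthetic(seed, n=3):
    """Stand-in for the teacher model: expand one seed Q&A into n variants."""
    return [
        {"question": f"{seed['question']} (variant {i})", "answer": seed["answer"]}
        for i in range(n)
    ]

def passes_quality_filter(example):
    """Stand-in for automated refinement: drop empty or trivial answers."""
    return len(example["answer"].split()) >= 3

def build_training_set(seeds):
    synthetic = [ex for seed in seeds for ex in generate_synthetic(seed)]
    return [ex for ex in synthetic if passes_quality_filter(ex)]

seeds = [{"question": "Who are Mobilise?", "answer": "A UK cloud consultancy."}]
training_set = build_training_set(seeds)
print(len(training_set))  # 3 variants generated from 1 seed, all pass the filter
```

The key idea the sketch captures is leverage: a handful of human-written seeds fan out into a much larger synthetic training set, which is why the method needs far less human-generated data than conventional retraining.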
InstructLab empowers users to build custom LLMs by leveraging pre-trained models with core knowledge and seamlessly layering their own domain-specific expertise on top.
How is InstructLab Different?
InstructLab stands out from traditional LLM training methods in several ways:
- Efficiency: It requires significantly less human-generated data and computational resources compared to pretraining and even standard alignment tuning techniques.
- Collaboration: Community contributions directly enhance the model, promoting continuous improvement.
- Flexibility: It works with various LLM models, offering adaptability to different needs.
Building a Custom LLM with InstructLab – A Technical Walkthrough:
The following walkthrough demonstrates the simplicity of running InstructLab locally on an M3 MacBook, following the InstructLab README.
- Install instructlab into a temporary Python virtual environment
python3 -m venv --upgrade-deps venv
source venv/bin/activate
pip cache remove llama_cpp_python
pip install 'instructlab[mps]'
- Initialise ilab
(venv) $ ilab config init
Welcome to InstructLab CLI. This guide will help you set up your environment.
Please provide the following values to initiate the environment [press Enter for defaults]: <ENTER>
Path to taxonomy repo [taxonomy]: <ENTER>
`taxonomy` seems to not exists or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [y/N]: y
Cloning https://github.com/instructlab/taxonomy.git…
- Download the model
ilab model download
- Serve the model
(venv) $ ilab model serve
INFO serve.py:51: serve Using model 'models/merlinite-7b-lab-Q4_K_M.gguf' with -1 gpu-layers and 4096 max context size.
INFO server.py:218: server Starting server process, press CTRL+C to shutdown server…
INFO server.py:219: server After application startup complete see http://127.0.0.1:8000/docs for API.
Press CTRL+C to shut down the server.
- Chat with the model. In a new terminal window, use the following commands to chat with the model and understand its knowledge or skill limitations: what does it know about your use case?
source venv/bin/activate
ilab model chat
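Besides the interactive CLI, the served model can also be queried over HTTP, since the server started by `ilab model serve` exposes an OpenAI-compatible API (the `/docs` URL in the server log lists the exact routes). The endpoint path and model name below are assumptions for illustration; check your own instance’s `/docs` page.

```python
import json
import urllib.request

# Base URL taken from the `ilab model serve` log output above.
BASE_URL = "http://127.0.0.1:8000"

def build_chat_request(prompt, model="merlinite-7b-lab-Q4_K_M.gguf"):
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt):
    """POST the prompt to the (assumed) /v1/chat/completions route."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server from `ilab model serve` to be running):
# print(chat("Who are Mobilise Cloud Services?"))
```

Probing the base model this way, before any training, is a quick check of what it already knows about your domain.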
Create new knowledge and train the model
- Using the taxonomy README, create new knowledge material to be used in training the model. For example, we created information about our business, Mobilise Cloud Services:
<instructlab/taxonomy/knowledge/technology/mobilise/qna.yaml>
version: 2
task_description: "Teach the model about Mobilise Cloud Services"
domain: mobilise
created_by: mob-daviesg
seed_examples:
  - question: |
      Who are Mobilise Cloud Services?
    answer: |
      Mobilise Cloud Services are an IT consultancy based in the UK specialising in digital transformation.
  - question: |
      Where are Mobilise Cloud Services based?
    answer: |
      Mobilise Cloud Services are based in Bridgend, South Wales but also have offices in London.
  - question: |
      Who do Mobilise Cloud Services partner with?
    answer: |
      AWS, Microsoft, CNCF, Elastic, and Red Hat.
  - question: |
      Who do Mobilise Cloud Services have as customers?
    answer: |
      Mobilise work across Central Government, National Security, and the private sector.
  - question: |
      Do Mobilise Cloud Services align to GDS Service standards?
    answer: |
      Yes Mobilise Cloud Services digital transformation services are directly aligned to both the GDS Service Standard and Technology Code of Practice.
document:
  repo: https://github.com/mob-daviesg/instructlab
  commit: 56c800fd5246fee3a2f6aa728b5ee0526cde66eb
  patterns:
    - mobilise.md
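Before handing the file to `ilab`, it can be useful to sanity-check its shape yourself. The sketch below validates the already-parsed YAML as a plain Python dict; the rules are a hand-picked subset of what `ilab taxonomy diff` checks, and the minimum of five seed examples reflects the taxonomy README at the time of writing, so treat the thresholds as assumptions.

```python
# Standalone sanity check for a knowledge qna.yaml, operating on the
# already-parsed YAML as a Python dict (no YAML library needed here).
# These rules approximate a subset of the `ilab taxonomy diff` checks.

def validate_qna(doc):
    """Return a list of human-readable validation errors (empty = OK)."""
    errors = []
    if doc.get("version") != 2:
        errors.append("version must be 2")
    if not doc.get("created_by"):
        errors.append("created_by is required")
    seeds = doc.get("seed_examples", [])
    if len(seeds) < 5:
        errors.append("at least 5 seed_examples are required")
    for i, ex in enumerate(seeds):
        if not ex.get("question", "").strip():
            errors.append(f"seed_examples[{i}]: question is empty")
        if not ex.get("answer", "").strip():
            errors.append(f"seed_examples[{i}]: answer is empty")
    return errors

# A minimal stand-in for the parsed qna.yaml shown above.
qna = {
    "version": 2,
    "created_by": "mob-daviesg",
    "seed_examples": [{"question": f"q{i}", "answer": f"a{i}"} for i in range(5)],
}
print(validate_qna(qna))  # [] -> the document passes these checks
```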
- Validate the new knowledge data
(venv) $ ilab taxonomy diff
knowledge/technology/mobilise/qna.yaml
Taxonomy in $HOME/.local/share/instructlab/taxonomy is valid :)
- Now we generate a synthetic dataset based on our initial five questions, and once generation completes, we train the model on that synthetic data.
ilab data generate
ilab model train
- Once those tasks have completed (roughly 45 minutes in our experience), we test the new model against the old one to see if it has improved.
ilab model test
-----model output BEFORE training----:
Mobilise Cloud Services has a diverse range of customers, including Fortune 500 companies, technology startups, and mid-size businesses. These customers span various industries, such as retail, healthcare, finance, and education.
Mobilise Cloud Services' offerings cater to businesses and organizations seeking to streamline their operations, improve efficiency, and enhance their overall digital experience. By adopting Mobilise Cloud Services' solutions, these companies can leverage advanced technologies
==========
-----model output AFTER training----:
==========
Mobilise Cloud Services has a diverse range of customers, including central and local governments, private sector organisations, and educational institutions. They offer cloud transformation services for these clients to help them accelerate their journeys towards cloud platforms and improve efficiency in their operations.
==========
- In the output above, we can see model improvements:
- The response now includes specific information about Mobilise’s work with central government and private sector organisations.
- The response is also written in British English (for example, “organisations”).
Now, we can deploy our refined model and continue the iterative process of testing and improvement. By interacting with the model through the ilab chat interface, we can gather valuable feedback, refine our question-and-answer pairs, and retrain the model to achieve even better performance.
InstructLab in Action: Real-World Benefits
Having successfully built and refined your custom LLM model using InstructLab, it’s time to unleash its potential within your applications. This is where platforms like OpenShift AI step in, providing the infrastructure and tools needed to seamlessly deploy and manage your models in production environments.
Deploying with OpenShift AI
OpenShift AI is a powerful platform that simplifies the deployment and scaling of AI/ML workloads, including LLMs. It offers:
- Effortless Deployment: Package your InstructLab-trained model into a containerized application and deploy it on OpenShift AI with just a few clicks.
- Scalability: Dynamically scale your LLM model instances to handle varying workloads and ensure optimal performance even during peak usage.
- Monitoring and Management: Gain insights into model performance, resource utilization, and potential bottlenecks through comprehensive monitoring and logging capabilities.
- Security: Safeguard your models and sensitive data with robust security features, including access controls and encryption.
Integrating with Your Application
Once deployed on OpenShift AI, your custom LLM model can be seamlessly integrated into various applications, such as:
- Chatbots: Enhance customer service and support with intelligent and context-aware conversational experiences.
- Content Generation Tools: Empower users to generate high-quality content, from marketing copy to creative writing, with AI assistance.
- Data Analysis and Insights: Uncover hidden patterns and valuable insights from large datasets using the model’s natural language processing capabilities.
- Code Assistants: Boost developer productivity and code quality with intelligent code suggestions and automated refactoring.
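As an illustration of the chatbot case, context-aware conversation comes from sending the full message history with each request. The sketch below keeps that history in a small session object; the model name is a placeholder, and the assumption is that the deployed model sits behind an OpenAI-compatible chat endpoint.

```python
# Minimal multi-turn chat state for a model behind an OpenAI-compatible
# endpoint (e.g. one deployed on OpenShift AI or served locally by ilab).
# The accumulated history is what makes replies context-aware.

class ChatSession:
    def __init__(self, system_prompt="You are a helpful support assistant."):
        self.history = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, text):
        self.history.append({"role": "user", "content": text})

    def add_assistant_message(self, text):
        self.history.append({"role": "assistant", "content": text})

    def request_body(self, model="custom-llm"):
        # This full history would be POSTed to the chat completions route.
        return {"model": model, "messages": list(self.history)}

session = ChatSession()
session.add_user_message("Do you align to the GDS Service Standard?")
session.add_assistant_message("Yes, our services align to the GDS Service Standard.")
session.add_user_message("Which standards specifically?")
body = session.request_body()
print(len(body["messages"]))  # 4: system prompt, two user turns, one assistant turn
```

Because the second user question (“Which standards specifically?”) only makes sense given the earlier turns, sending the whole history is what lets the model answer it correctly.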
The Power of InstructLab and OpenShift AI
The combination of InstructLab and OpenShift AI delivers a powerful solution for organisations seeking to leverage the potential of LLMs. InstructLab empowers you to create customised models tailored to your specific needs, while OpenShift AI provides the robust infrastructure and tools required for efficient deployment and management.
By integrating these cutting-edge technologies, you can unlock a new era of AI-driven innovation within your organisation, enhancing productivity, improving user experiences, and driving business growth.
Eager to put InstructLab and OpenShift AI into action? Reach out to our experts at Mobilise to learn more about how we can help you harness the power of LLMs and accelerate your AI journey.