How to run and deploy LLMs using Red Hat OpenShift AI on a Red Hat OpenShift Service on AWS cluster
Learn how to install the Red Hat® OpenShift® AI (RHOAI) operator and a Jupyter notebook, create an Amazon S3 bucket, and run an LLM on a Red Hat OpenShift Service on AWS (ROSA) cluster.
Disclaimer: This content is authored by Red Hat experts but has not yet been tested on every supported configuration.
Overview
Large Language Models (LLMs) are a type of generative artificial intelligence (AI) focused on human language: they can understand, generate, and manipulate text in response to a wide range of tasks and prompts.
This learning path shows how to run and deploy LLMs on a Red Hat OpenShift Service on AWS (ROSA) cluster, our managed OpenShift platform on AWS, using Red Hat OpenShift AI (RHOAI), our OpenShift platform for managing the entire lifecycle of AI/ML projects. We will use an Amazon S3 bucket to store the model output. In short, we will install the RHOAI operator and a Jupyter notebook, create the S3 bucket, and then run the model.
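As a preview of the storage step, here is a minimal sketch of creating a bucket and uploading a saved model artifact with the boto3 Python SDK. The bucket name, region, and file paths are hypothetical placeholders; the learning path itself walks through the equivalent steps in detail.

```python
# Minimal sketch of the S3 step, using the boto3 Python SDK.
# Bucket name, region, and file paths below are hypothetical placeholders.
import boto3

region = "us-east-2"              # example region; use your cluster's region
bucket = "my-rosa-llm-output"     # hypothetical bucket name

s3 = boto3.client("s3", region_name=region)

# Create the bucket (regions other than us-east-1 require a location constraint).
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Upload a saved model artifact from the notebook's working directory.
s3.upload_file("model_output/model.safetensors", bucket, "model.safetensors")
```

This assumes AWS credentials with S3 permissions are available in the environment; granting that access is covered in the corresponding module of this learning path.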
What do you need before starting?
What is included in this learning path?
- Prerequisites
- Installing RHOAI and Jupyter notebook
- Creating and granting access to an S3 bucket
- Training the LLM
- Future research
- Performing hyperparameter tuning
What will you get?
- Experience running and deploying LLMs on a ROSA cluster
- An understanding of how to use OpenShift AI to manage the lifecycle of AI/ML projects
- Familiarity with using an Amazon S3 bucket to store model output
This learning path is for operations teams or system administrators.
Developers might want to check out how to create a natural language processing (NLP) application using Red Hat OpenShift AI on developers.redhat.com.