Next steps after running and deploying an LLM using Red Hat OpenShift AI
Congratulations! You have deployed and trained an LLM using Red Hat® OpenShift® AI (RHOAI) on a Red Hat OpenShift Service on AWS (ROSA) cluster.
After completing this tutorial, you now have experience:
- Installing RHOAI and a Jupyter notebook
- Creating an S3 bucket and granting access to it
- Training an LLM
- Performing hyperparameter tuning as a direction for future research
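As a quick illustration of the hyperparameter tuning mentioned above, the sketch below runs a simple grid search over a learning rate and batch size. The `validation_loss` function is a placeholder, not part of the tutorial; in practice it would launch a training run in your RHOAI workbench and return a validation metric.

```python
# Illustrative sketch only: grid search over two training hyperparameters.
from itertools import product

def validation_loss(learning_rate, batch_size):
    # Placeholder objective (assumption): in a real workflow this would
    # train the model with these settings and return validation loss.
    return (learning_rate - 3e-4) ** 2 + 0.001 / batch_size

# Candidate values to sweep (hypothetical choices for illustration).
grid = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "batch_size": [8, 16, 32],
}

best = None
for lr, bs in product(grid["learning_rate"], grid["batch_size"]):
    loss = validation_loss(lr, bs)
    if best is None or loss < best[0]:
        best = (loss, {"learning_rate": lr, "batch_size": bs})

print("Best hyperparameters:", best[1])
```

For real workloads, a random or Bayesian search (for example, with a library such as Optuna) usually finds good settings with fewer training runs than an exhaustive grid.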
What comes next?
Next, watch a demonstration of a typical RHOAI workflow: generating images from text, creating a project, launching a Jupyter notebook with appropriate cluster resources, and training a foundation model from Hugging Face with your own data. After fine-tuning the model, the demonstrator automates the build with a data science pipeline and serves the model for use in an AI-enabled application.