
Andreea Munteanu
on 19 March 2025


The landscape of generative AI is rapidly evolving, and building robust, scalable large language model (LLM) applications is becoming a critical need for many organizations. Canonical, in collaboration with NVIDIA, is excited to introduce a reference architecture designed to streamline and optimize the creation of powerful LLM chatbots. This solution leverages the latest NVIDIA AI technology, offering a production-ready AI pipeline built on Kubernetes.

A foundation for advanced AI

This reference architecture is tailored for advanced users familiar with machine learning concepts. It provides a comprehensive framework for deploying complex LLMs like Llama, utilizing OpenSearch as a vector database, and implementing an optimized Retrieval-Augmented Generation (RAG) pipeline. The integration of Kubeflow and KServe ensures a powerful and scalable AI workflow.
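
To make this concrete, here is a minimal sketch of registering an LLM such as Llama as a KServe InferenceService using the Kubernetes Python client. The namespace, service name, and container image are illustrative assumptions, not values prescribed by the reference architecture.

```python
# Sketch: create a KServe InferenceService for an LLM via the Kubernetes API.
# Namespace, name, and container image are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "llama-chat", "namespace": "llm-demo"},
    "spec": {
        "predictor": {
            "minReplicas": 1,
            "containers": [{
                # Hypothetical NIM image; use the one matching your entitlement.
                "name": "kserve-container",
                "image": "nvcr.io/nim/meta/llama-example:latest",
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            }],
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="llm-demo",
    plural="inferenceservices",
    body=inference_service,
)
```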

The core components

At the heart of this solution lies NVIDIA NIM, a set of easy-to-use inference microservices that enable optimized, secure deployment of generative AI models and LLMs. NIM provides a standardized way to deploy foundation models and LLMs fine-tuned on enterprise data, making models easy to swap while delivering performance gains with forward and backward compatibility. OpenSearch serves as the vector database, storing and retrieving embeddings efficiently so the RAG pipeline can produce faster, more accurate AI-driven responses.
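
As an illustration, a single RAG query against this stack might look like the sketch below, assuming an OpenSearch k-NN index named `docs` (with `vector` and `text` fields) and a NIM endpoint exposing the OpenAI-compatible chat completions API. The hostnames, index layout, and model name are assumptions for illustration only.

```python
# Sketch: retrieval-augmented answering against OpenSearch and a NIM endpoint.
# Hostnames, index layout, and model name are illustrative assumptions.
import requests
from opensearchpy import OpenSearch

search = OpenSearch(hosts=[{"host": "opensearch.example.com", "port": 9200}])

def answer(question: str, query_vector: list[float]) -> str:
    # query_vector must come from the same embedding model used at indexing
    # time (for example, an embedding NIM).
    hits = search.search(
        index="docs",
        body={
            "size": 4,
            "query": {"knn": {"vector": {"vector": query_vector, "k": 4}}},
        },
    )["hits"]["hits"]
    context = "\n".join(hit["_source"]["text"] for hit in hits)

    # Ground the prompt in the retrieved chunks and call the NIM-served model
    # through its OpenAI-compatible chat completions API.
    resp = requests.post(
        "http://nim.example.com/v1/chat/completions",
        json={
            "model": "meta/llama-3.1-8b-instruct",
            "messages": [
                {"role": "system", "content": f"Answer using this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]
```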

Kubeflow Pipelines automate data processing and machine learning workflows, ensuring a smooth and scalable data flow. KServe handles model deployment, scaling, and integration with NIM, enabling seamless multi-model deployment and load balancing. A user-friendly Streamlit UI allows for real-time interaction with the AI models, while the Canonical Observability Stack (COS) provides comprehensive monitoring, logging, and metrics.
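
The UI layer itself can be small. Below is a minimal sketch of a Streamlit chat front end; ask_model() is a hypothetical placeholder for whatever function calls the deployed KServe/NIM endpoint (for example, the answer() helper sketched above).

```python
# Sketch: a Streamlit chat front end for the deployed model.
# ask_model() is a placeholder; wire it to the RAG pipeline or KServe endpoint.
import streamlit as st

def ask_model(prompt: str) -> str:
    # Placeholder reply; replace with a call to the inference service.
    return f"(model reply to: {prompt})"

st.title("LLM chatbot")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    st.chat_message(role).write(text)

if prompt := st.chat_input("Ask a question"):
    st.chat_message("user").write(prompt)
    reply = ask_model(prompt)
    st.chat_message("assistant").write(reply)
    st.session_state.history += [("user", prompt), ("assistant", reply)]
```

Running this with `streamlit run app.py` gives the real-time conversational loop described above.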

Key benefits and advantages

This solution offers several key benefits, including enhanced security and compliance through continuous vulnerability scanning and centralized logging. It provides comprehensive lifecycle management with rolling upgrades and long-term support. Continuous software improvements give access to the latest models and performance optimizations, backed by enterprise-grade support across the entire stack.

Advanced capabilities for enhanced workflows

Advanced AI workflow capabilities, such as dynamic scaling and multi-model deployment, enable efficient resource utilization. The platform also supports an optimized RAG pipeline and on-demand fine-tuning, as well as multi-node inference and NVIDIA NeMo integration for high-throughput, low-latency applications. The solution is designed for cross-platform, multi-cloud deployment, ensuring compatibility with major cloud providers and Kubernetes platforms.
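
For instance, dynamic scaling in KServe is largely declarative. The fragment below shows the predictor-level knobs an InferenceService exposes; the values are illustrative assumptions, not tuned recommendations.

```python
# Sketch: autoscaling knobs on a KServe InferenceService predictor.
# Values are illustrative, not tuned recommendations.
predictor_scaling = {
    "spec": {
        "predictor": {
            "minReplicas": 1,              # keep one replica warm to avoid cold starts
            "maxReplicas": 8,              # cap GPU usage under burst load
            "scaleMetric": "concurrency",  # scale on in-flight requests per replica
            "scaleTarget": 4,              # target concurrent requests per replica
        }
    }
}
```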

Empowering AI innovation

This reference architecture is ideal for organizations seeking to deploy large-scale generative AI workflows in various use cases, including customer service automation, document processing, healthcare and life sciences, and finance and compliance.

Canonical’s end-to-end generative AI workflows solution, built with NVIDIA AI Enterprise software, offers a scalable, secure, and feature-rich platform for deploying LLMs. It enables organizations to harness AI innovation and draw meaningful insights from their data.

Get started today

This reference architecture provides a comprehensive blueprint for building your AI future, offering the insights and tools necessary to deploy advanced generative AI workflows effectively.

Ready to unlock the potential of optimized LLM chatbots with Canonical and NVIDIA?

Download it now

Related posts


Canonical
15 September 2025

Canonical announces it will support and distribute NVIDIA CUDA in Ubuntu

Ubuntu Article

Today Canonical, the publisher of Ubuntu, announced support for the NVIDIA CUDA toolkit and the distribution of CUDA within Ubuntu’s repositories. CUDA is a parallel computing platform and programming model that lets developers use NVIDIA GPUs for general-purpose processing. It exposes the GPU’s Single-Instruction Multiple Thread (SIMT ...


Gabriel Aguiar Noury
3 July 2025

JetPack 4 EOL – how to keep your userspace secure during migration

Ubuntu Article

NVIDIA JetPack 4 reached its end-of-life (EOL) in November 2024, marking the end of security updates for this widely deployed stack. JetPack 4 has driven innovation in countless devices powered by NVIDIA Jetson, serving as the foundation of edge AI production deployments across multiple sectors. But now, the absence of security maintenanc ...


Michelle Anne Tabirao
15 May 2025

Building an end-to-end Retrieval-Augmented Generation (RAG) workflow

AI Article

In this guide, we will take you through setting up a RAG pipeline. We will utilize open source tools such as Charmed OpenSearch for efficient search retrieval and KServe for machine learning inference, specifically in Azure and Ubuntu environments while leveraging hardware accelerators. ...