The activity and engagement at the recent KubeCon + CloudNativeCon Europe 2024 in Paris underscored the critical role cloud-native computing now plays in many organizations’ digital strategies.
AI, storage and infrastructure played a major role in the cloud-native advancements and discussions coming out of KubeCon. A decade after the initial release of Kubernetes, and with 12,000 attendees in person, the March event was the largest KubeCon to date.
The developments at the show highlight the central role the cloud-native ecosystem will play in shaping the industry’s future. The number and range of services on display at KubeCon demonstrate that this space is amorphous and must constantly evolve to incorporate broader trends, such as AI.
Complexity and uncertainty might temper adoption, but collaboration, partnership and integration among components and participants will be key to success.
AI, AI everywhere
KubeCon delegates could be forgiven for thinking they were attending an AI event, such was the prominence of the topic in the keynote hall. While unsurprising, it shouldn’t detract from the opportunity, or from the fact that cloud-native principles already underpin key AI projects. Generative AI (GenAI) and large language model vanguards, including OpenAI and Hugging Face, are built on Kubernetes.
Of those organizations focused on building an AI infrastructure, half already have GenAI in production or in proof of concept, according to recent research by TechTarget’s Enterprise Strategy Group. An additional 42% said they plan to deploy it within the next 12 months.
The Cloud Native Computing Foundation (CNCF) believes that GenAI will radically reshape the infrastructure paradigm, both to accommodate AI workloads and to transform platform engineering as it becomes driven by AI insights. It used KubeCon to underscore the extent of the opportunity ahead and its desire to play an active role.
The foundation has published its first white paper on the topic, detailing key challenges — across the AI lifecycle — and areas of future development as AI transforms the way the industry designs, deploys and manages cloud-native services. Promoting ethical and responsible development will also play a central role.
At the same time, Nvidia used its high-profile sponsorship of the event to underscore its significant engagement with Kubernetes and the cloud-native community in general. Among other things, it discussed how Kubernetes’ dynamic resource allocation (DRA) API can play a key role in enabling developers to optimize GPU resource usage.
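For context, most clusters today expose GPUs as device-plugin-advertised extended resources that a pod requests by count; DRA is designed to replace that fixed request model with richer, claim-based allocation. Below is a minimal sketch of the conventional approach using the official Kubernetes Python client; the pod name, container image and namespace are illustrative placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works).
config.load_kube_config()

# A pod requesting one GPU via the extended resource advertised by Nvidia's
# device plugin. DRA aims to supersede this coarse count-based model with
# ResourceClaim objects that describe devices in much more detail.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),  # illustrative name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:12.3.1-base-ubuntu22.04",  # illustrative image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```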
There is clearly much work ahead for both the CNCF and industry players to reach this future integrated state, but the proclamations at KubeCon mark an important statement of intent that augurs well for continued development. Moreover, it’s clear that AI is going to have a major impact on the evolution of the underlying infrastructure.
Kubernetes as the ‘control plane for everything’?
One standout aspect that helps explain Kubernetes’ durability over the last decade is the extensibility of its API, which has enabled myriad uses for which it wasn’t originally designed. The evolution of Kubernetes from only supporting stateless applications to supporting stateful workloads — often using multiple databases — has greatly expanded its applicability to AI and many other uses.
This extensibility could broaden Kubernetes’ applicability to a much wider set of applications in the future. KubeVirt is a fascinating development here. The Red Hat-originated CNCF project uses KVM to let VMs, including virtualized applications once considered unsuitable for containerization, run on Kubernetes. KubeVirt 1.0 was released last summer, with two updates since then.
It’s very early days, of course. However, organizations including Goldman Sachs are talking publicly about the value of using KubeVirt to run VM workloads on top of Kubernetes. This development could shape the future of cloud-native computing, platform engineering and infrastructure more generally. It could also accelerate Kubernetes’ evolution from a technology chiefly deployed in the public cloud to a much more pervasive on-premises one.
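KubeVirt also illustrates the extensibility point: once the project is installed, a virtual machine is simply another custom resource that standard Kubernetes tooling can query. The sketch below, assuming a cluster with KubeVirt present, uses the Python client’s generic CustomObjectsApi to list VirtualMachine objects; the namespace is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()

# KubeVirt registers VirtualMachine as a custom resource under the
# kubevirt.io/v1 API group, so the generic client can work with it the
# same way it works with built-in kinds such as Pods or Deployments.
custom = client.CustomObjectsApi()
vms = custom.list_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",          # placeholder namespace
    plural="virtualmachines",
)

for vm in vms.get("items", []):
    name = vm["metadata"]["name"]
    desired = vm.get("spec", {}).get("running")  # desired power state, if set
    print(f"{name}: running={desired}")
```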
Bringing the data layer — and storage — into sharper focus
The storage-for-Kubernetes market has been a relatively small part of the overall ecosystem to date. However, it is growing in importance as Kubernetes projects become increasingly stateful and hence require persistent storage. In addition, cloud-native initiatives overall have become increasingly critical to the business. Recent Enterprise Strategy Group research showed that 45% of organizations use persistent storage for their container-based applications.
Organizations are therefore more interested in how to protect and manage data, often to comply with enterprise governance mandates. The trick is to do so in a way that doesn’t interfere with a developer-centric philosophy. Developers are not interested in managing storage; they just need it to work when they need it. The business and infrastructure/platform teams, on the other hand, need to know that the data is secure, protected and compliant.
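One concrete expression of that division of labor is the PersistentVolumeClaim: the developer asks for capacity and an access mode, while the platform team decides which storage class, and therefore which backend, satisfies the request. A minimal sketch with the Kubernetes Python client follows; the claim name, size and storage class name are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()

# The developer-facing request: "give me 10Gi of read-write storage."
# Which backend fulfills it is determined by the storage class defined
# by the platform team, keeping storage details out of application code.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},          # illustrative name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-replicated",       # defined by the platform team
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```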
Organizations have shown more interest in various storage-oriented open source projects within the CNCF, such as Rook and Longhorn. Storage-oriented vendors at KubeCon also offered more products optimized for Kubernetes environments, including Portworx by Pure Storage, NetApp, DataCore, MinIO and Lightbits.
Vendors such as Portworx focus on enabling “Day 2” capabilities, including backup and disaster recovery (DR), as well as making storage as developer-friendly and inexpensive as possible. They also highlight performance advantages over alternative approaches that rely on Container Storage Interface (CSI)-based plugins to connect to the underlying storage, a gap that becomes more pronounced as organizations scale their Kubernetes implementations.
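That CSI plumbing is surfaced to platform teams as a StorageClass whose provisioner field names the driver; developers only ever reference the class name in their claims. A hedged sketch, continuing the hypothetical “fast-replicated” class from above and using the AWS EBS CSI driver as the example provisioner:

```python
from kubernetes import client, config

config.load_kube_config()

# A StorageClass binds a friendly name to a CSI driver and its tuning
# parameters. Developers reference the name in their PVCs and never see
# the driver details.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="fast-replicated"),  # illustrative name
    provisioner="ebs.csi.aws.com",                         # AWS EBS CSI driver
    parameters={"type": "gp3"},                            # driver-specific setting
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
)

client.StorageV1Api().create_storage_class(body=sc)
```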
Meanwhile, data protection specialists, such as Veeam — which plays in the Kubernetes space with its Kasten product — encourage platform operations teams to incorporate backup and DR much earlier in the development process.
Simon Robinson is a principal analyst at TechTarget’s Enterprise Strategy Group who focuses on existing and emerging storage and hyperconverged infrastructure technologies, and on related data- and storage-management products and services used by enterprises and service providers.
Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.