Microsoft AKS updates 2024 - Q3
In this blog, I want to give an overview of all the features that reached General Availability or Public Preview, or were announced for retirement, by Microsoft in Q3 2024. This information can be found at Microsoft Azure Updates.
Features that are now supported by Microsoft (GA):
- [Generally available] Long-term support for versions 1.27 and 1.30 in Azure Kubernetes Service (AKS)
To help you manage your Kubernetes version upgrades, AKS provides a long-term support (LTS) option, which extends the support window for a Kubernetes version to give you more time to plan and test upgrades to newer Kubernetes versions. AKS now supports both version 1.27 and version 1.30 in LTS. To learn more, click here.
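As a quick illustration, the sketch below shows how LTS can be enabled when creating a cluster with the Azure CLI. The resource group and cluster names are placeholders, and it assumes the Premium tier and the `--k8s-support-plan` parameter available in recent CLI versions.

```bash
# Sketch: create an AKS cluster on Kubernetes 1.30 with long-term support enabled.
# Resource group and cluster names are placeholders; LTS requires the Premium tier.
az aks create \
  --resource-group my-rg \
  --name my-lts-cluster \
  --kubernetes-version 1.30 \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport
```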
- [Generally available] New features in AKS extension for Visual Studio Code
The Azure Kubernetes Service (AKS) Visual Studio Code extension has been updated to support the ability to attach an ACR to your cluster, generate Kubernetes deployment files, generate Dockerfiles, and generate GitHub Actions. To use these new features, make sure your extension is up to date. Learn more about the AKS Visual Studio Code extension’s releases and interact with the team here. To download the Azure Kubernetes Service VS Code Extension, please visit the marketplace.
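For reference, attaching an ACR to a cluster (which the extension now does from inside the editor) corresponds roughly to the CLI operation below; the resource group, cluster, and registry names are placeholders.

```bash
# Sketch: grant the cluster's kubelet identity pull access to an Azure Container Registry.
# This is roughly the operation the VS Code extension performs when you attach an ACR.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --attach-acr myregistry
```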
- [Generally available] Dev Containers templates for Azure SQL Database
The Dev Containers templates for Azure SQL Database are now generally available. These templates provide a streamlined and efficient way to set up development environments with all necessary tools and dependencies pre-configured. Available for .NET Aspire, .NET 8, Node.js, and Python, they let you seamlessly integrate Azure SQL Database into your development workflow, ensuring a consistent and productive experience. Adopting dev containers offers several advantages: efficient local development, cost efficiency, faster time-to-market, and alignment with cloud-native trends. Start today and boost your productivity with these ready-to-use development containers. To learn more, click here.
- [Generally available] Operator and CRD support with Azure Monitor managed service for Prometheus
Azure Monitor managed service for Prometheus now supports CRD-based configs for scrape jobs to collect metrics from workloads running in your AKS cluster. With this update, configuring Managed Prometheus deploys the PodMonitor and ServiceMonitor custom resource definitions so you can create your own custom resources. This is similar to the OSS Prometheus Operator and allows for easy configuration of scrape jobs in any namespace, eliminating the need to update the common ConfigMap in the kube-system namespace. To learn more, click here.
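As an illustration, a pod monitor for Managed Prometheus might look like the sketch below. The names, labels, and port are placeholders, and it assumes the azmonitoring.coreos.com/v1 API group used by the managed add-on's CRDs (as opposed to the OSS monitoring.coreos.com group).

```bash
# Sketch: a PodMonitor for Azure Monitor managed Prometheus.
# Names, labels, and the port name are placeholders; the API group is assumed
# to be azmonitoring.coreos.com/v1 (the managed add-on's CRDs).
kubectl apply -f - <<'EOF'
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: demo-pod-monitor
  namespace: demo
spec:
  selector:
    matchLabels:
      app: demo-app
  podMetricsEndpoints:
    - port: metrics
      path: /metrics
EOF
```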
- [Generally available] Model fine-tuning support for the Kubernetes AI Toolchain Operator (KAITO)
With parameter-efficient fine-tuning (PEFT) support, you can now customize pre-trained models from the KAITO repository to your data and use cases directly in your cluster, while maintaining your organization's data compliance rules. Simply choose a target model, select one of the various tuning methods, and point to the retraining data and to where the fine-tuning results should be stored in your KAITO tuning workspace. This allows you to serve your “smarter” model and simplify ML lifecycle management directly from the AKS cluster. To learn more, see the fine-tuning API documentation and the new models supported in the KAITO repository.
- [Generally available] OS SKU in-place migration for AKS
Today, traditional OS SKU migration involves creating new nodes, cordoning and draining the existing nodes, and then deleting them. This can involve a large surge of new nodes and operational overhead to cordon and drain existing node pools. The OS SKU in-place migration feature, now GA, allows you to trigger a node image upgrade from one Linux OS SKU (e.g. Ubuntu) to another (e.g. Azure Linux) on an existing node pool. To learn more, click here.
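A minimal sketch of the in-place migration with the Azure CLI, assuming placeholder resource names and that your CLI version accepts the `--os-sku` flag on a node pool update:

```bash
# Sketch: migrate an existing node pool from Ubuntu to Azure Linux in place.
# Resource group, cluster, and node pool names are placeholders.
az aks nodepool update \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name nodepool1 \
  --os-sku AzureLinux
```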
- [Generally available] Azure Container Storage for Ephemeral (Local NVMe/Temp SSD) and Azure Disk
You can now use Azure Container Storage to run production-level stateful container workloads. Azure Container Storage orchestrates the placement and lifecycle of persistent volumes (PVs) on your behalf, simplifying container storage management and optimizing for scalability, flexibility, and cost efficiency. Tightly integrated with Kubernetes, it allows you to perform all storage operations via the Kubernetes API, such as creating PVs and scaling up capacity on demand, eliminating the need to interact with the control plane APIs of the underlying storage infrastructure. With general availability, two backing storage options are fully supported:
- Ephemeral disk (Local NVMe/Temp SSD): With the Ephemeral disk backing storage option, you can take advantage of the local storage that comes with your nodes. For instance, workloads requiring low latencies could benefit from locally attached storage. You can also enable replication on your local NVMe storage for added resiliency.
- Azure Disk: The Azure Disk storage option lets you choose from Ultra, Premium SSD, Premium SSD v2, and Standard SSD disk types to back your storage pool. With Azure Container Storage, your PVs are optimally placed on your disks - mapping multiple volumes to a disk, overcoming traditional persistent volume scale limitations, and saving you storage costs in the long run.
Get started today by installing Azure Container Storage on your AKS cluster! For a comprehensive guide, watch our step-by-step walkthrough video. You can also explore workload samples from our newly launched community repository to create your first stateful application. To learn more, read the Azure blog and technical community blog, or refer to the documentation.
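As a starting point, enabling Azure Container Storage on an existing cluster can look like the sketch below. The resource names are placeholders, and the `--enable-azure-container-storage` flag and pool-type value are assumptions based on the current CLI experience.

```bash
# Sketch: enable Azure Container Storage on an existing AKS cluster with an
# ephemeral (local NVMe/temp SSD) storage pool. Names are placeholders; the
# flag and pool-type value are assumptions based on the current CLI experience.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --enable-azure-container-storage ephemeralDisk
```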
- [Generally available] OS Security Patch channel for Linux in AKS
The OS security patch channel for Linux, part of the node OS upgrade channel feature, is now generally available. OS security patches are AKS-tested, fully managed, and applied with safe deployment practices. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." This channel honors maintenance windows and limits disruption by applying live patching where necessary. To learn more, click here.
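Selecting this channel on an existing cluster is a one-line CLI change; the sketch below uses placeholder names and assumes the `--node-os-upgrade-channel` parameter.

```bash
# Sketch: switch an existing cluster's node OS upgrade channel to SecurityPatch.
# Resource group and cluster names are placeholders.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --node-os-upgrade-channel SecurityPatch
```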
- [Generally available] az aks command invoke in AKS
The AKS run command feature allows users to remotely invoke commands in an AKS cluster through the AKS API. This feature introduces a new API that supports executing just-in-time commands from a remote laptop against a private cluster. It can greatly assist with quick just-in-time access to a private cluster when the client is not on the cluster's private network, while still retaining and enforcing full RBAC controls and a private API server. For example, you can run az aks command invoke with "kubectl get nodes", as shown in the sketch below. To learn more, click here.
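A fuller invocation, with placeholder resource group and cluster names:

```bash
# Sketch: run kubectl inside a private AKS cluster via the AKS API, without
# direct network line-of-sight to the API server. Names are placeholders.
az aks command invoke \
  --resource-group my-rg \
  --name my-private-cluster \
  --command "kubectl get nodes"
```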
Features that are currently in Public Preview and not yet GA
- [Public Preview] Virtual machines node pools support in AKS
With Virtual Machines node pools, Azure Kubernetes Service directly manages the provisioning and bootstrapping of every single node. Typically, when deploying a workload onto Azure Kubernetes Service (AKS), each node pool can only contain one virtual machine (VM) type or SKU. Virtual Machines node pools add the capability to mix multiple VM SKUs of a similar family in a single node pool, so you can specify a family of SKUs without having to maintain one node pool per SKU type, reducing your node pool footprint. To learn more, click here.
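A rough sketch of creating such a node pool with the Azure CLI is shown below; the resource names and VM sizes are placeholders, and the `--vm-set-type`/`--vm-sizes` flags are assumptions based on the preview and may change.

```bash
# Sketch (preview): add a Virtual Machines node pool that mixes several SKUs of
# a similar family. Names, sizes, and the preview flags are assumptions.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name vmpool \
  --vm-set-type VirtualMachines \
  --vm-sizes "Standard_D4s_v5,Standard_D8s_v5"
```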
- [Public Preview] Advanced Container Networking Services: Enhancing security and observability in AKS
Advanced Container Networking Services offers an advanced security feature, FQDN filtering. FQDN filtering allows you to define granular network policies based on domain names rather than IP addresses. This simplifies policy management, reduces administrative overhead, and ensures consistent policy enforcement across the network. By restricting access to specific domains, FQDN filtering helps prevent unauthorized access and mitigate security risks. To complement FQDN filtering, the HA DNS proxy ensures uninterrupted DNS resolution. This redundancy enhances the overall reliability and availability of your containerized applications, minimizing downtime and disruptions. Read more about FQDN filtering and its capabilities in the documentation and the blog announcement, and learn how to configure it and try it out on your clusters today.
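On a cluster running Azure CNI powered by Cilium with Advanced Container Networking Services enabled, an FQDN policy is expressed as a CiliumNetworkPolicy. The sketch below is illustrative only: the pod labels and allowed domain are placeholders, and it assumes the standard Cilium toFQDNs/DNS-rule syntax.

```bash
# Sketch: an FQDN filtering policy for pods labeled app=demo. Labels and the
# allowed domain are placeholders; the policy permits DNS lookups via kube-dns
# and egress only to mcr.microsoft.com.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-mcr-only
spec:
  endpointSelector:
    matchLabels:
      app: demo
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    - toFQDNs:
        - matchName: "mcr.microsoft.com"
EOF
```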
- [Public Preview] FIPS mutability support in AKS
The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Azure Kubernetes Service (AKS) allows you to create Linux and Windows node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more information, see Federal Information Processing Standard (FIPS) 140-2. With FIPS mutability, you can now enable or disable FIPS on an existing node pool. When you update an existing node pool, the node image will change from the current image to the recommended FIPS image of the same OS SKU, which immediately triggers a reimage. When migrating your application to FIPS, first validate that it works properly in a test environment before migrating to production. To learn more, click here.
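The preview makes this a node pool update. In a sketch like the one below, the resource names are placeholders and the `--enable-fips-image`/`--disable-fips-image` flags are assumptions based on the preview CLI.

```bash
# Sketch (preview): enable FIPS on an existing node pool. Names are placeholders
# and the enable/disable flags are assumptions based on the preview CLI; the
# update triggers a reimage to the matching FIPS node image of the same OS SKU.
az aks nodepool update \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name nodepool1 \
  --enable-fips-image
```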
- [Public Preview] Azure CNI Powered by Cilium & Azure CNI Overlay support in AKS
The public preview of Azure CNI Overlay dual-stack with Azure CNI powered by Cilium for Linux clusters in AKS is now available. This enhancement enables AKS clusters to support IPv4 and IPv6 network policies, providing greater flexibility and control over network traffic within your Kubernetes environments. Additionally, Azure CNI powered by Cilium offers improved performance with its efficient dataplane, enhancing the overall networking performance of your workloads. To learn more, click here.
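Creating a dual-stack overlay cluster with the Cilium dataplane can look like this sketch; the resource names are placeholders.

```bash
# Sketch: create a dual-stack (IPv4/IPv6) AKS cluster using Azure CNI Overlay
# with the Cilium dataplane. Resource names are placeholders.
az aks create \
  --resource-group my-rg \
  --name my-cilium-cluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium \
  --ip-families ipv4,ipv6
```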
- [Public Preview] Azure CNI Overlay Windows support in AKS
Azure CNI Overlay dual-stack support for Windows node pools in AKS is now in public preview. This new capability allows AKS clusters to support dual-stack IPv4 and IPv6 network configurations on Windows nodes, enhancing flexibility and control over network traffic within your Kubernetes environments. To learn more, click here.
- [Public Preview] Windows Annual Channel on AKS
Windows Server Annual Channel for Containers is now in public preview on Azure Kubernetes Service (AKS). With the Annual Channel, you’ll not only get the latest release in Windows Server but also get the added benefit of portability between container hosts and container images. Windows Server 2022 container images are able to run on the newer Annual Channel container host, therefore easing migration pains when upgrading. To learn more, click here.
Features that are announced as open-source
- [Announced] kube-egress-gateway for Kubernetes
kube-egress-gateway is an open-source project that offers a scalable and cost-efficient solution for configuring fixed source IPs for Kubernetes pod egress traffic on Azure. The kube-egress-gateway components run within Kubernetes clusters—whether managed (Azure Kubernetes Service, AKS) or unmanaged—and use one or more dedicated Kubernetes nodes as pod egress gateways, routing pod outbound traffic through a WireGuard tunnel. Compared to existing methods, such as creating dedicated Kubernetes nodes with a NAT gateway or assigning instance-level public IP addresses and scheduling only specific pods on these nodes, kube-egress-gateway is more cost-efficient. It allows pods requiring different egress IPs to share the same gateway and be scheduled on any regular worker node. To learn more, click here.
Features that are retired
- [Retired] Open Service Mesh add-on for AKS will be retired on September 30, 2027
Following the archival of the Open Service Mesh project, the Istio service mesh add-on was released for Azure Kubernetes Service (AKS) and has been Generally Available since February 28, 2024. Due to the upstream archival of open-source Open Service Mesh, the Open Service Mesh add-on for AKS will no longer receive any new minor version releases or new features and will be retired on September 30, 2027. Security patch updates will be provided up until September 30, 2027. You can continue to use the existing Open Service Mesh add-on for Azure Kubernetes Service (AKS) until September 30, 2027, or transition to the Istio add-on for Azure Kubernetes Service (AKS) before that time. Migration guidance from Open Service Mesh (OSM) configurations to Istio can be found here. There are no changes in pricing between the two add-ons.
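As a starting point for the transition, enabling the Istio add-on on an existing cluster is a single CLI call; the names below are placeholders, and OSM-specific configuration still needs to be migrated separately per the guidance above.

```bash
# Sketch: enable the Istio-based service mesh add-on on an existing AKS cluster.
# Resource group and cluster names are placeholders; OSM configuration must
# still be migrated to Istio equivalents per the migration guidance.
az aks mesh enable \
  --resource-group my-rg \
  --name my-cluster
```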
- [Retired] AKS GPU image (preview) will retire on January 10, 2025
You will no longer be able to create new GPU-enabled node pools with the GPU image. Microsoft recommends that customers with GPU-image-enabled node pools migrate their existing workloads onto GPU-enabled node pools created with the AKS-supported solutions of (1) the default experience with NVIDIA device plugin installation, or (2) the GPU Operator.
To learn more about this retirement, and to view the alternative options for NVIDIA GPUs on AKS, click here.
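For the default-experience alternative, creating a GPU node pool might look like the sketch below. The resource names and VM size are placeholders, and the NVIDIA device plugin still needs to be deployed afterwards as described in the AKS GPU documentation.

```bash
# Sketch: create a GPU-enabled node pool using the default AKS experience
# (AKS installs the GPU drivers). Names and VM size are placeholders; the
# NVIDIA device plugin DaemonSet must still be deployed per the AKS GPU docs.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --name gpupool \
  --node-count 1 \
  --node-vm-size Standard_NC6s_v3
```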
- [Retired] Azure HDInsight on AKS will retire on January 31, 2025
On January 31, 2025, Microsoft will retire Azure HDInsight on AKS. Any remaining clusters in your subscription will be stopped and removed from the host. Before January 31, 2025, you will need to migrate your workloads to Microsoft Fabric or an equivalent Azure product to avoid abrupt termination of your workloads.
Required action:
To avoid service disruptions, migrate your workloads from Azure HDInsight on AKS to Microsoft Fabric or an equivalent Azure product by January 31, 2025.