What's new with Microsoft Azure Storage at KubeCon Europe 2025

KubeCon + CloudNativeCon Europe 2025 in London was a fantastic time! Our Azure Storage team loved engaging with this vibrant community, exchanging knowledge and celebrating successes. If you missed it, don't worry! We're here to share what we presented at the conference: how to boost performance, cost efficiency, and AI capability for your workloads on Azure.

Optimize your open-source databases with Azure Disks

Open-source databases such as PostgreSQL, MariaDB, and MySQL are among the most commonly deployed stateful workloads on Kubernetes. For scenarios that demand extremely low latency and high input/output operations per second (IOPS), such as running these databases for transactional workloads, Azure Container Storage lets you tap into the local ephemeral NVMe (non-volatile memory express) drives within your nodes. This delivers sub-millisecond latency and up to half a million IOPS, which can be a game changer. In our upcoming v1.3.0 update, we made significant optimizations for database workloads.
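If you want to try the local NVMe path, the sketch below submits an Azure Container Storage pool backed by the node's ephemeral NVMe drives through the Kubernetes Python client. The StoragePool schema shown is only an approximation of what the Azure Container Storage documentation describes and may vary by release; the pool name is an illustrative assumption.

```python
# Sketch: request an Azure Container Storage pool backed by local ephemeral NVMe.
# The StoragePool fields below approximate the documented schema and may differ
# by Azure Container Storage release; verify against the current docs.
from kubernetes import client, config

config.load_kube_config()

nvme_pool = {
    "apiVersion": "containerstorage.azure.com/v1beta1",
    "kind": "StoragePool",
    "metadata": {"name": "ephemeraldisk-nvme", "namespace": "acstor"},
    "spec": {"poolType": {"ephemeralDisk": {"diskType": "nvme"}}},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="containerstorage.azure.com",
    version="v1beta1",
    namespace="acstor",
    plural="storagepools",
    body=nvme_pool,
)
```

Once the pool is ready, Azure Container Storage exposes a matching storage class that database pods can claim volumes from; because the drives are ephemeral, pair them with database-level replication for durability.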

Compared with the previous v1.2.0 release, you can achieve 5 times the transactions per second (TPS) for PostgreSQL and MySQL deployments. If you are looking for the best balance of durability, performance, and storage cost, Premium SSD v2 disks remain our recommended default for database workloads. Premium SSD v2 offers a flexible pricing model that bills per gibibyte and includes generous baseline IOPS and throughput out of the box. When needed, you can dynamically scale IOPS and throughput, letting you fine-tune performance while optimizing cost efficiency.
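For comparison, here is a minimal sketch of a Premium SSD v2 storage class created with the Azure Disk CSI driver through the Kubernetes Python client; the class name and the baseline IOPS and throughput figures are illustrative assumptions, not recommendations from the session.

```python
# Sketch: a StorageClass for Premium SSD v2 via the Azure Disk CSI driver.
# The name and the baseline IOPS/throughput values are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

premium_v2_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="premium2-disk-sc"),
    provisioner="disk.csi.azure.com",
    parameters={
        "skuName": "PremiumV2_LRS",
        "DiskIOPSReadWrite": "8000",   # baseline IOPS for each provisioned disk
        "DiskMBpsReadWrite": "500",    # baseline throughput in MB/s
        "cachingMode": "None",         # Premium SSD v2 disks run without host caching
    },
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(body=premium_v2_sc)
```

A PersistentVolumeClaim that references this class gets a Premium SSD v2 disk with those baseline settings, and the disk's IOPS and throughput can be tuned later on the Azure side without re-provisioning the volume.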

At KubeCon, we demonstrated how developers can easily use local NVMe and Premium SSD v2 disks to build highly available PostgreSQL deployments. If you want to follow along yourself, check out our newly published PostgreSQL documentation!

Accelerate your AI workflows with Azure Blob Storage

Building applications for AI workflows demands scalable storage to host massive volumes of data, whether raw sensor logs, high-resolution images, or multi-terabyte checkpoints. Azure Blob Storage with the Blobfuse2 Container Storage Interface (CSI) driver provides a hassle-free way to store and load this data at scale. With Blobfuse2, you mount Blob Storage as a persistent volume and treat it like a local file system (see the sketch after this list). With the latest Blobfuse2 release, 2.4.1, you can:

  • Accelerate model training and inference: Enhanced streaming support in Blobfuse2 reduces latency for both initial and repeated reads. Using Blobfuse2 to load large datasets or fine-tuned model weights directly from Blob Storage onto the local NVMe disks of GPU nodes noticeably improves the efficiency of AI workflows.
  • Simplify data preprocessing: AI workflows often require frequent transformations, whether normalizing images or tokenizing text. With Blobfuse2's file-based access, data scientists can preprocess data and write the results straight back to Blob Storage, keeping the pipeline efficient.
  • Ensure data integrity at scale: When streaming petabytes of data, integrity checks matter. Blobfuse2 now includes improved CRC64 validation for data cached on the local disk, on both reads and writes, including when working with distributed AI clusters.
  • Access massive datasets in parallel: Blobfuse2 implements parallel downloads and uploads, cutting the time it takes to access large files stored as blocks. This enhancement enables faster data processing, ensures optimal use of GPU resources, and improves training efficiency.
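As promised above, here is a minimal sketch of mounting Blob Storage through the Blob CSI driver in blobfuse2 mode with the Kubernetes Python client. The class and claim names, the capacity, and the mount options are illustrative assumptions; check the Blob CSI driver documentation for the options that match your driver and Blobfuse2 versions.

```python
# Sketch: mount Azure Blob Storage as a persistent volume through the Blob CSI
# driver in blobfuse2 mode. Names, sizes, and mount options are illustrative.
from kubernetes import client, config

config.load_kube_config()

blobfuse2_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="blobfuse2-training-data"),
    provisioner="blob.csi.azure.com",
    parameters={
        "skuName": "Standard_LRS",
        "protocol": "fuse2",            # request the blobfuse2 mount path
    },
    mount_options=["-o allow_other"],   # let non-root training containers read the mount
    reclaim_policy="Retain",
)
client.StorageV1Api().create_storage_class(body=blobfuse2_sc)

# Claim a volume from the class; pods then read and write it like a local directory.
dataset_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-dataset"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="blobfuse2-training-data",
        resources=client.V1ResourceRequirements(requests={"storage": "10Ti"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", dataset_pvc)
```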

Scale your stateful workloads with Azure Files

Continuous integration and continuous delivery/deployment (CI/CD) is one of the most popular stateful workloads; it needs persistent shared volumes to host artifacts across builds, which makes Azure Premium Files a storage option of choice on Azure. These artifacts tend to be many small files, which drives heavy metadata operations against the file share. To speed up CI/CD, the Azure Files team recently announced the general availability of metadata caching for premium SMB file shares. This new capability reduces metadata latency by up to 50 percent, benefiting metadata-heavy workloads that typically host many small files in a single share. At KubeCon, we showed how metadata caching can speed up repeated build processes; check out the demo on GitHub, fork the repo, and try it yourself.
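To make the setup concrete, the sketch below provisions a premium SMB Azure Files share for build artifacts with the Kubernetes Python client. Metadata caching applies server-side to premium SMB shares, so nothing extra is needed on the client; the names, namespace, and mount options here are illustrative assumptions.

```python
# Sketch: a premium SMB Azure Files share for CI/CD build artifacts.
# Names, namespace, and mount options are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

artifacts_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="ci-artifacts-premium"),
    provisioner="file.csi.azure.com",
    parameters={"skuName": "Premium_LRS"},   # premium file share served over SMB
    mount_options=[
        "mfsymlinks",     # emulate the symlinks that build tools often create
        "actimeo=30",     # briefly cache attributes to cut repeated metadata calls
        "nosharesock",    # use a separate connection per mount for throughput
    ],
    allow_volume_expansion=True,
)
client.StorageV1Api().create_storage_class(body=artifacts_sc)

artifacts_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="build-artifacts"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],     # shared across build agents
        storage_class_name="ci-artifacts-premium",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("ci", artifacts_pvc)
```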

For less performance-demanding stateful workloads, standard file shares with the new provisioned v2 billing model provide better predictability and cost control for persistent shared volumes. Provisioned v2 shifts from usage-based billing to provisioned billing, letting you specify the storage, IOPS, and throughput you need, at greater scale. You can now scale a file share from 32 GiB up to 256 TiB, 50,000 IOPS, and 5 GiB/sec of throughput.
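Because shares can grow to that scale in place, online volume expansion from Kubernetes is often all you need; the snippet below is a sketch of patching an existing claim to a larger size with the Kubernetes Python client. The claim name, namespace, and target size are illustrative assumptions, and the storage class must have allowVolumeExpansion enabled.

```python
# Sketch: grow an existing Azure Files claim in place by raising its requested size.
# The claim name, namespace, and target size are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

client.CoreV1Api().patch_namespaced_persistent_volume_claim(
    name="build-artifacts",
    namespace="ci",
    body={"spec": {"resources": {"requests": {"storage": "1Ti"}}}},
)
```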

KubeCon + CloudNativeCon was a great opportunity to connect directly with developers and learn from our customers. As always, thank you to our customers and partners for contributing to the value and vibrancy of the event, and we look forward to seeing you again at KubeCon North America in November!
