
Intel Pushes Texture Compression to 18x With TSNC, Gaining VRAM and Storage Without Sacrificing Image

Textures up to 18x lighter without breaking the image: enough to shrink installations and free up VRAM, with a contained visual impact.

Intel TSNC: principles and objectives

Intel presents Texture Set Neural Compression (TSNC), a neural encoding/decoding chain applied to BCn texture sets. The AI encoder compresses the data into a latent space; the decoder reconstructs the textures at runtime.
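The latent-space idea can be sketched with a toy linear codec. This is illustrative only: TSNC uses trained neural encoder/decoder networks, not the random linear projection assumed here, and the block and latent sizes are made up for the example.

```python
import numpy as np

# Toy sketch of a latent-space texture codec (illustrative only; TSNC uses
# trained neural networks, not this linear stand-in).
rng = np.random.default_rng(0)

# A "texture set": 256 flattened 4x4 RGB blocks (48 values each).
blocks = rng.random((256, 48)).astype(np.float32)

# Encoder: project each block into an 8-dimensional latent space.
basis, _ = np.linalg.qr(rng.standard_normal((48, 8)))
basis = basis.astype(np.float32)
latents = blocks @ basis          # compress: 48 -> 8 values per block

# Decoder: reconstruct the blocks from the latents at "runtime".
recon = latents @ basis.T         # decompress: 8 -> 48 values per block

ratio = blocks.nbytes / latents.nbytes
print(f"compression ratio: {ratio:.0f}x")                    # 6x in this toy setup
print(f"mean reconstruction error: {np.abs(blocks - recon).mean():.3f}")
```

The trade-off is the same as in TSNC: a smaller latent space means a higher ratio but a larger reconstruction error, which is why Intel ships two variants with different targets.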


Two variants are planned depending on the target. Variant A targets ratios up to 9x with a small visual loss, measured at ~5% with NVIDIA’s FLIP metric. Variant B pushes up to 18x, with degradation measured at up to ~7%.

Expected gains: size, VRAM, throughput

AI models are trained on millions of standardized textures to replace traditional BCn with more compact sets. Targeted result: smaller installations, shorter loads, reduced VRAM pressure and improved throughput on recent GPUs.

Graph of TSNC variant A compression ratios compared to BC1 and BC3.

GPU performance and integration

On a “Panther Lake” system with an Arc B390 iGPU and XMX cores, Intel quotes an average latency of ~0.194 ns to produce the first texture pixel through the model. At that level, the overhead should remain imperceptible in-game.

The pipeline targets smooth GPU-side decompression, conditional on XMX acceleration. The first deliverables are expected in alpha this year, followed by a beta and then a stable release, without a firm timetable.

Graph of TSNC variant B compression ratios compared to BC3.

For the evaluation, Intel relies on NVIDIA’s FLIP to quantify perceptual deltas. The visuals provided show slight differences with variant A, more visible ones with variant B at maximum compression, but consistent with the announced ratios.
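For intuition, a perceptual metric boils the difference between an original and a reconstructed image down to one scalar. The sketch below is a hedged stand-in, not FLIP: FLIP models human vision (viewing distance, contrast sensitivity), while this uses a plain per-pixel mean absolute error just to illustrate reporting a ~5%/~7% figure; the test images are synthetic.

```python
import numpy as np

# Hedged stand-in for a perceptual delta. NVIDIA's FLIP models human vision;
# this is only a plain mean absolute per-channel error, expressed as a
# percentage of full scale, to show the shape of the reported numbers.
def mean_abs_delta(original, reconstructed):
    """Mean absolute per-channel difference between two 8-bit images, in %."""
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    return 100.0 * np.abs(o - r).mean() / 255.0

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
# Simulate reconstruction error with small uniform noise (hypothetical data).
noise = rng.integers(-13, 14, size=img.shape)
recon = np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(f"delta: {mean_abs_delta(img, recon):.1f}%")
```

A real evaluation would run the actual FLIP tool on decoded textures; the point here is only that "~5% degradation" is a single aggregated error score, not a visible 5% of the image.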

If the implementation becomes widespread, the operational benefit for PC gaming is clear: significantly smaller installations, lighter disk caches and texture streaming, lower memory pressure, and larger performance margins on iGPUs/GPUs equipped with AI units. What remains is adoption by tools and engines, and hardware coverage outside XMX, which will determine the real impact in production.

Source: TechPowerUp