Just a month after announcing the mass production of HBM4 chips for AI accelerators, Samsung has unveiled its seventh-generation high-bandwidth memory, HBM4E. It offers even faster performance than HBM4 and is expected to be used in Nvidia's next-generation AI accelerator platform.

Samsung's HBM4E chips will be used in Nvidia's Vera Rubin Ultra AI platform

At Nvidia GTC 2026 in San Jose, California, Samsung Electronics became the first company in the world to showcase an HBM4E chip. The chip delivers data transfer speeds of up to 16Gbps per pin and bandwidth of up to 4TB/s. It is expected to be used in Nvidia's Vera Rubin Ultra platform, which could launch in the second half of 2027, so mass production of HBM4E could begin sometime in the first half of next year.

Samsung's HBM4 chips, which are being used in Nvidia's Vera Rubin AI platform, exceed the industry standard of 8Gbps, offering data transfer speeds of up to 11.7Gbps. They can be further enhanced to reach up to 13Gbps.

HBM5 and HBM5E chips have also been announced

Samsung also announced plans for future HBM technologies. The company intends to use its 1c DRAM process along with a 2nm foundry process to develop eighth-generation HBM5 chips. For ninth-generation HBM5E chips, Samsung plans to use its 1d DRAM process alongside a 2nm foundry process.

The company also showcased its Hybrid Copper Bonding (HCB) technology, which is expected to serve as a key differentiator for its future HBM chips compared to competitors. It improves heat resistance by more than 20 percent compared to the existing Thermal Compression Bonding (TCB) technology.

At the Nvidia Gallery section of Samsung's booth at GTC 2026, the South Korean firm displayed HBM4 chips, the PM1763 SSD, and the SOCAMM2 memory module, all designed for Nvidia's AI infrastructure.
Nvidia CEO Jensen Huang visited the booth and signed "Amazing HBM4" on an HBM4 wafer.

SOCAMM2, a server-optimized memory module built from low-power DRAM chips, is currently in mass production, and Samsung is the world's first company to mass-produce it.

Samsung is also showcasing the PM1763 SSD, featuring the latest PCIe 6.0 interface, as its next-generation AI storage solution. It will be demonstrated on servers using the Nvidia SCADA programming model. The company is additionally showcasing the PM1753 SSD's enhanced energy efficiency and system performance using the NVIDIA BlueField-4 STX reference architecture.