What amazes me about the new Xeons, though, is how much more there is to them than one might have expected. Intel's architects and designers have crammed formidable new technologies into these chips in order to allow them to scale up to large core counts and multiple sockets. The result may be the most impressive set of CPUs Intel has produced to date, with numbers for core count and throughput that pretty much boggle the mind.
The Haswell-EP family
The first thing one needs to know about Haswell-EP is that it's not just a single chip, but a trio of chips. Intel has moved in recent years toward right-sizing its Xeon silicon for different products, and Haswell-EP takes that trend into new territory. Here are the three members of the family.

| Code name | Cores/modules | Threads | Last-level cache size | Process node | Est. transistors (millions) | Die area (mm²) |
| --- | --- | --- | --- | --- | --- | --- |
| Haswell-EP | 8 | 16 | 20 MB | 22 nm | 2,601 | 354 |
| Haswell-EP | 12 | 24 | 30 MB | 22 nm | 3,839 | 484 |
| Haswell-EP | 18 | 36 | 45 MB | 22 nm | 5,569 | 662 |
All three chips are built on Intel's 22-nm process tech with tri-gate transistors, and they all share the same basic technological DNA. Intel has simply scaled them differently, with quite a bit of separation in size and transistor count between the three options. The biggest of the bunch has a staggering 18 cores, 36 threads, and 45 MB of L3 cache. To give you some perspective on this CPU's size, at 662 mm², it's substantially larger than even the biggest GPUs in the world. Nvidia's GK110 is 555 mm², and AMD's Hawaii GPU is 438 mm².
Power to drive your datacenter further
The prior generation of Xeons, code-named Ivy Bridge-EP, topped out at 12 cores, so Haswell-EP offers a 50% increase on that front. Haswell-EP is a "tock" in Intel's so-called "tick-tock" development model, which means it brings a new CPU architecture to a familiar chip fabrication process. There's quite a bit more to this new family than just a revised CPU microarchitecture, though. The entire platform has been reworked, as the diagram below summarizes.

The changes really do begin with the transition to Haswell-class CPU cores. These are indeed the same basic cores used across Intel's product portfolio, and by now, their virtues are well known. Through a combination of larger on-chip structures, more execution units, and smarter logic, the Haswell core increases its instruction throughput per clock by about 10% compared to Ivy Bridge before it. That number can go much higher with the use of the new AVX2 instruction set extensions, which have the potential to double vector throughput for both integer and floating-point data types.
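To give a rough sense of where that extra throughput comes from, here's a minimal sketch of AVX2 integer code. AVX2 widens integer SIMD from 128 to 256 bits, so a loop like the one below processes eight 32-bit values per instruction instead of four. The function and array names are our own illustration, not anything from Intel's documentation, and real code would add runtime feature detection plus a scalar fallback.

```c
#include <immintrin.h>  /* AVX2 intrinsics; compile with -mavx2 */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical example: add two int32 arrays eight lanes at a time.
 * Assumes n is a multiple of 8 for brevity. */
void add_arrays_avx2(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        __m256i vc = _mm256_add_epi32(va, vb);   /* eight 32-bit adds per instruction */
        _mm256_storeu_si256((__m256i *)(out + i), vc);
    }
}
```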
For servers in particular, the Haswell core has the potential to boost performance even further via the TSX instruction set extensions, which enable hardware lock elision and restricted transactional memory. The TSX instructions allow the hardware to shoulder much of the burden of making sure concurrent threads don't cause problems for one another. Unfortunately, Intel discovered an erratum in its TSX implementation just prior to the release of Haswell-EP. As a result, the first systems based on this silicon have shipped with TSX disabled via microcode. Users may have the option to enable TSX in a system's BIOS for development purposes, but doing so risks system instability. I'd expect Intel to produce a new stepping of Haswell-EP with the TSX erratum corrected, but we don't yet have a clear timetable for such a move. The firm has hinted that TSX should be production-ready once the larger, multi-socket Haswell-EX parts arrive.
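For a flavor of what TSX's restricted transactional memory looks like from the software side, here is a minimal sketch using the RTM intrinsics supported by Haswell-era compilers. The shared counter and fallback lock are placeholders of our own; production code would also check CPUID for RTM support before taking this path (doubly important given the microcode disable described above) and would use a proper mutex rather than the spinlock stub shown here.

```c
#include <immintrin.h>   /* _xbegin/_xend/_xabort; compile with -mrtm */

/* Hypothetical shared state for illustration only. */
static long shared_counter;
static volatile int fallback_lock;   /* stand-in for a real mutex */

/* Sketch of restricted transactional memory (RTM): try to update the
 * counter inside a hardware transaction; if the transaction aborts
 * (conflict, capacity limits, or TSX unavailable), fall back to a
 * conventional lock. */
void increment_counter(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Transactional path: if the lock is held, abort so we don't
         * race the lock-based path. */
        if (fallback_lock)
            _xabort(0xff);
        shared_counter++;
        _xend();                      /* commit the transaction */
    } else {
        /* Fallback path: take the lock the old-fashioned way. */
        while (__sync_lock_test_and_set(&fallback_lock, 1))
            ;                         /* spin */
        shared_counter++;
        __sync_lock_release(&fallback_lock);
    }
}
```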