16:43 uur 09-07-2015

Mellanox's 100 Gb/s EDR InfiniBand solution increasingly popular for scalable, high-performance computing systems

SUNNYVALE, Calif. & YOKNEAM, Israel–(BUSINESS WIRE)– Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that its end-to-end EDR 100 Gb/s InfiniBand solutions are seeing growing adoption across the industry.

Mellanox's adapters, switches, cables and software are the most efficient interconnect solutions for connecting servers and storage, delivering high throughput, low latency and world-class performance for HPC, Web 2.0, database and cloud data center applications.

In addition, Mellanox has expanded its line of EDR 100Gb/s InfiniBand switch systems. The new non-blocking modular switches, the 216-port CS7520 and the 324-port CS7510, are part of Mellanox's Switch-IB-based EDR 100Gb/s InfiniBand switch family, which also includes the 1U switches SB7700 and SB7790.

Mellanox EDR 100Gb/s InfiniBand Solutions Gain Momentum With Industry-Wide Adoption to Further Enable Scalable High-Performance Computing Systems

SUNNYVALE, Calif. & YOKNEAM, Israel–(BUSINESS WIRE)– Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced the growing industry-wide adoption of its end-to-end EDR 100Gb/s InfiniBand solutions.

Mellanox EDR 100Gb/s InfiniBand adapters, switches, cables and software are the most efficient interconnect solutions for connecting servers and storage, delivering high throughput, low latency and world leading application performance for HPC, Web 2.0, database and cloud data centers.

In addition, Mellanox today expanded its line of EDR 100Gb/s InfiniBand switch systems. The new CS7520 216-port and CS7510 324-port non-blocking modular switches are part of the Mellanox Switch-IB-based EDR 100Gb/s InfiniBand switch family which also includes the SB7700 and SB7790 36-port 1U switches and the CS7500 648-port modular switch; all with seamless fabric management capabilities to ensure the highest fabric performance. The new switches give users the flexibility to optimize their data center connectivity for highest application performance and overall data center return on investment.

Key Features

  • CS7520: 216 EDR 100Gb/s InfiniBand QSFP28 ports in a 12U switch; 43Tb/s switching capacity; sub-0.5µs port latency.
  • CS7510: 324 EDR 100Gb/s InfiniBand QSFP28 ports in a 16U switch; 64Tb/s switching capacity; sub-0.5µs port latency.
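As a quick sanity check, the quoted switching capacities line up with a common rule of thumb (an assumption on our part, not stated in the release): aggregate non-blocking capacity is roughly port count × line rate × 2, counting both directions of each full-duplex port. A minimal sketch:

```python
# Sanity check of the quoted switching capacities.
# Assumption (not from the release): non-blocking capacity
# ≈ ports × line rate × 2 (full duplex, both directions counted).
def switching_capacity_tbps(ports, line_rate_gbps=100):
    """Aggregate full-duplex switching capacity in Tb/s."""
    return ports * line_rate_gbps * 2 / 1000

print(switching_capacity_tbps(216))  # 43.2 -> quoted as 43Tb/s
print(switching_capacity_tbps(324))  # 64.8 -> quoted as 64Tb/s
```

Both results round to the figures quoted above, so the spec sheet numbers are internally consistent with 100Gb/s per port.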

“Mellanox’s EDR 100Gb/s InfiniBand solutions provide world-leading, scalable performance to address the bandwidth requirements of the next generation, scalable high-performance compute and storage platforms,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox’s Switch-IB-based modular switches are the industry’s highest performing solutions for high-performance computing, Web 2.0, database and cloud data centers. Mellanox’s EDR 100Gb/s InfiniBand enables these centers to deliver high application performance, to become more efficient and to reduce their operating expenses.”

“As the leading supplier of InfiniBand Storage to Data Intensive Global Enterprises, DDN has leveraged Mellanox componentry and switching for years to deliver the industry’s fastest storage solutions,” said Bret Weber, chief technology officer and vice president of engineering, DDN. “At ISC, we will be demonstrating the world’s fastest Lustre appliance in conjunction with Mellanox’s EDR 100Gb/s InfiniBand interconnect.”

“High-performance computing infrastructures require highly efficient systems and scalable, performance-based solutions to solve the world’s most difficult scientific, engineering and data analysis problems,” said Ingolf Staerk, sr. director HPC at Fujitsu. “Fujitsu’s PRIMERGY HPC solutions, combined with Mellanox’s EDR 100Gb/s InfiniBand solutions, will provide our customers with outstanding application runtime performance, scalability, and return on investment.”

“To solve the world’s most challenging problems, scientists, engineers and analysts push the boundaries of performance,” said Bill Mannel, vice president and general manager, HPC and Big Data, HP. “The combination of HP’s Apollo systems and Mellanox’s EDR 100Gb/s InfiniBand solutions provides breakthrough, energy-efficient performance for our customers’ HPC workloads.”

“Interconnect is one of the key elements that enables system performance and efficiency. Mellanox EDR 100Gb/s InfiniBand is the highest-performance interconnect by far,” said Mr. Qiu Long, General Manager of Servers Domain, IT Product Line, Huawei. “As one of the earliest EDR adopters, Huawei FusionServer can deliver unmatched flexibility and scalability to run customer applications with the Mellanox EDR 100Gb/s InfiniBand interconnect.”

“Systems based on POWER CPUs dramatically accelerate technical computing and data analytics workloads,” said Sumit Gupta, vice president and business line executive, IBM High Performance Computing and OpenPOWER operations. “When a cluster of POWER-based systems operates on these compute- and data-intensive workloads, the ability of Mellanox’s EDR InfiniBand ConnectX-4 adapters and Switch-IB switches to run at 100Gb/s enables much better cluster scaling and reduces network latency to achieve real-time results.”

“Mellanox’s EDR 100Gb/s InfiniBand will make an immediate impact for our customers and for our business with its 100Gb/s interconnect performance,” said Jun Liu, general manager of HPC at Inspur. “Integrating Mellanox’s EDR 100Gb/s InfiniBand solutions into our next-generation products will give us flexibility and scalability we didn’t have before, making our customers’ infrastructure run as efficiently and productively as possible.”

“High-performance computing applications require the highest computing and interconnect performance,” said Brian Connors, VP Strategic Alliances & GM HPC business line, Lenovo. “The combination of Lenovo servers and Mellanox EDR 100Gb/s InfiniBand interconnects delivers superior performance capabilities to deliver faster results for our customers.”

“SGI considers Mellanox a strategic partner because of their ability to meet our customer demands with industry standard InfiniBand networking that addresses application performance, system size, and budget needs. The launch of Mellanox EDR 100Gb/s InfiniBand is continued evidence of their innovation,” said Gabriel Broner, vice president and general manager, high performance computing business unit, SGI. “SGI will deploy EDR 100Gb/s InfiniBand to provide our customers with unmatched flexibility and performance. SGI looks forward to continuing the partnership as we scale our technology to meet the rising data demands of today’s leading high performance computing and enterprise businesses.”

“A huge need for today’s HPC compute-intensive applications is being able to scale with the highest performance and efficiency,” said Mr. Zhennan Cao, general manager of HPC business unit at Sugon. “Mellanox’s EDR 100Gb/s InfiniBand solutions provide our customers the power to scale at the speed of 100Gb/s and reduce their application runtime with our Sugon 6000 and Silicon Cube series High Performance Computing solutions.”

“AppliedMicro is pleased to work with Mellanox to enable highly efficient, scale-out solutions for a variety of HPC workloads with our family of X-Gene 64-bit ARM server processors,” said John Williams, vice president of marketing at AppliedMicro. “The Mellanox EDR 100Gb/s InfiniBand interconnect solutions are an ideal complement to X-Gene processors, maximizing workload performance and power efficiency to optimize data center TCO.”

“As processor performance increases, high-bandwidth interconnects are critical to avoid bottlenecks in high-performance computing environments,” said Gopal Hegde, VP/GM Data Processor Group, Cavium. “Cavium 48-core ThunderX ARMv8 SOCs are optimized to deliver best-in-class performance for these workloads, and Mellanox’s industry-leading EDR 100Gb/s InfiniBand will help ensure these high-performance computing clusters scale and perform at the highest levels.”

See EDR 100Gb/s InfiniBand at ISC High Performance 2015 (July 12-16, 2015):

Multiple Mellanox EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following company booths: Applied Micro (booth #1431), Bull (booth #1230), Cavium (booth #1105), DDN (booth #1010), Dell (booth #731), HP (booth #732), Huawei (booth #1126), IBM (booth #928), Inspur (booth #840), Lenovo (booth #1020), Megware (booth #1330), Mellanox (booth #905), RSC Group (booth #912), SGI (booth #910), Sugon (booth #800), Supermicro (booth #1130) and T-Platforms (booth #1240). For more information on Mellanox’s event and speaking activities at ISC’15, please visit

Supporting Resources:

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at

Mellanox, ConnectX and Connect-IB are registered trademarks of Mellanox Technologies, Ltd. Switch-IB is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.


Mellanox Technologies, Ltd.

Public Relations and Communications
Allyson Scott,
Investor Contact

Mellanox Technologies
Gwyn Lauber,
PR Contact

Gelbart Kahana Investor Relations
Sharon Levin,

Check out our Twitter: @NewsNovumpr