Xilinx Plans To Acquire Israeli Company Mellanox

Author: CGOCMALL | Date: 2018/11/7 | Source: CGOCMALL
According to CNBC, FPGA giant Xilinx has hired Barclays to advise on a potential acquisition of Mellanox. Sources say a deal is not imminent and may fall through entirely. Two of the sources said that if the transaction is ultimately reached, it is expected to be announced in December.

Mellanox is an Israeli company founded in 1999. According to the company's official website, they are a global leader in end-to-end InfiniBand and Ethernet connectivity solutions for servers and storage. Their interconnect solutions deliver high data center efficiency and fast data transfer between applications and systems, increasing system availability with low latency and high throughput. Mellanox provides the industry with technologies and products that accelerate interconnection between devices, including network adapters, switches, software, and chips that speed up application execution and maximize efficiency in HPC, enterprise data centers, Web 2.0, cloud computing, storage, and financial services.

According to media reports, acquiring Mellanox would give Xilinx a broader portfolio of products to sell into the data center market.

We know that as servers, storage, and embedded systems adopt multiple multicore processors, I/O bandwidth has not kept pace with processor advances, creating performance bottlenecks. Fast data access has become a key requirement for leveraging the increased computing power of microprocessors. In addition, interconnect latency has become a limiting factor in overall cluster performance.

In addition, the growing use of clustered servers and storage systems as key IT tools has increased the complexity of interconnect configurations. The number of configurations and connections in the enterprise data center (EDC) has also increased, making system management more complex and operation more expensive. Managing multiple software applications across different interconnect infrastructures has likewise become increasingly complex.

Furthermore, as additional computing and storage systems or nodes are added to a cluster, the interconnect must be able to scale to deliver the expected increase in cluster performance. A growing focus on data center power consumption is also pushing IT managers toward more energy-efficient interconnects.

At the same time, most interconnect solutions are not designed to provide reliable connections in large cluster environments, resulting in data transfer disruptions. Because more applications in the EDC share the same interconnect, advanced traffic management and application partitioning are required to maintain stability and reduce system downtime. Most interconnect solutions do not offer this type of functionality.

According to Wikipedia, Mellanox is deeply invested in InfiniBand, a computer networking standard used in high-performance computing that offers very high throughput and very low latency for data interconnection between computers. InfiniBand is also used as a direct or switched interconnect between servers and storage systems, as well as between storage systems. Overall, the technology has the following advantages:

First, InfiniBand is designed to be implemented in ICs that offload communication processing from the CPU, whereas other interconnect technologies rely heavily on the CPU for that processing. InfiniBand delivers superior bandwidth and latency compared with other existing interconnect technologies, and each successive generation of products has maintained this advantage. For example, current InfiniBand adapters and switches offer up to 100 Gb/s of bandwidth with end-to-end latency below one microsecond. In addition, InfiniBand leverages PCI Express, the high-speed system bus interface standard, for host I/O.

Second, according to an independent benchmark report, the latency of an InfiniBand solution is less than half that of a tested Ethernet solution. Fibre Channel, which is used only as a storage interconnect, is usually not benchmarked for latency. HPC typically requires a low-latency interconnect. In addition, latency-sensitive applications are growing in the cloud, Web 2.0, storage, machine learning, and embedded markets, driving a trend toward 10 Gb/s and faster industry-standard InfiniBand and Ethernet solutions, which offer lower latency than 1 Gb/s Ethernet.

Third, while other interconnects require separate cables to connect servers, storage, and communications infrastructure equipment, InfiniBand allows multiple I/O traffic types to be consolidated over a single cable or backplane interconnect, which is important for blade servers and embedded systems. InfiniBand also carries clustering, communication, storage, and management data over a single connection.

Fourth, InfiniBand was developed to scale across many systems. InfiniBand performs communication processing in hardware, offloading this task from the CPU and enabling full utilization of each node added to the cluster (a minimal code sketch of this offload model follows this list).

Fifth, InfiniBand is one of the few industry-standard, high-performance interconnects that provides reliable end-to-end data delivery in silicon. InfiniBand also facilitates virtualization, allowing multiple applications to run over the same interconnect in dedicated application partitions. As a result, multiple applications can run simultaneously over a stable connection, minimizing downtime.
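To make the idea of communication processing in hardware concrete, here is a minimal, illustrative sketch using the standard RDMA verbs API (libibverbs), the programming interface commonly used with Mellanox InfiniBand adapters. It shows only the setup phase: opening an adapter and registering a memory buffer so the adapter can move data in and out of it directly, without per-packet CPU involvement. The buffer size and error handling are simplified assumptions, not Mellanox sample code.

    /* Minimal RDMA verbs setup sketch (illustrative only).
       Build with: gcc rdma_setup.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first adapter; transport processing runs on the HCA, not the CPU. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { perror("ibv_open_device"); return 1; }

        /* Register a buffer so the adapter can read and write it directly (zero copy). */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("using %s, buffer registered, rkey=0x%x\n",
               ibv_get_device_name(devs[0]), mr->rkey);

        /* A real application would now create completion queues and queue pairs,
           exchange the buffer address and rkey with a peer, and post work requests. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

The key point is that once memory is registered, data movement is handled by the adapter hardware, which is what keeps per-node CPU overhead roughly constant as nodes are added to the cluster.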

Mellanox has a full range of 200G products, from optical modules, network cards, switches, and servers to active optical cables. They also offer RDMA (Remote Direct Memory Access), GPUDirect RDMA, SHARP (Scalable Hierarchical Aggregation and Reduction Protocol), NVMe over Fabrics target offload, SHIELD self-healing interconnect technology, and Socket Direct and Multi-Host technologies, which are believed to be among the reasons Xilinx is interested in the company.
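As a rough illustration of what remote direct memory access means in practice, the hedged fragment below posts a one-sided RDMA WRITE using the same verbs API: the local adapter writes directly into the remote machine's registered memory without involving the remote CPU. It assumes a connected queue pair (qp), a registered local buffer (mr), and that the peer's buffer address and rkey were exchanged out of band during connection setup; those names are placeholders for that earlier setup, not a specific Mellanox API.

    /* Post a one-sided RDMA WRITE (sketch; qp, mr, remote_addr, and remote_rkey
       are assumed to come from earlier connection setup). */
    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        uint64_t remote_addr, uint32_t remote_rkey, uint32_t len)
    {
        struct ibv_sge sge;
        memset(&sge, 0, sizeof(sge));
        sge.addr   = (uintptr_t)mr->addr;      /* local registered buffer */
        sge.length = len;
        sge.lkey   = mr->lkey;

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id      = 1;
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.opcode     = IBV_WR_RDMA_WRITE;     /* one-sided: remote CPU is not involved */
        wr.send_flags = IBV_SEND_SIGNALED;     /* generate a completion we can poll for */
        wr.wr.rdma.remote_addr = remote_addr;  /* peer's registered buffer address */
        wr.wr.rdma.rkey        = remote_rkey;  /* peer's memory key, exchanged out of band */

        return ibv_post_send(qp, &wr, &bad_wr); /* 0 on success */
    }

GPUDirect RDMA, SHARP, and NVMe over Fabrics target offload extend this same model, moving more of the data path, and in SHARP's case even collective reduction operations, off the host CPU and into the adapters and switches.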

Still, we should note that as an interconnect technology, InfiniBand competes strongly with Ethernet, Fibre Channel, and proprietary technologies such as Cray's SeaStar. For Mellanox, which lacks a server CPU of its own, being acquired by Xilinx may therefore be a good outcome.
