About: http://data.cimple.eu/news-article/f0c40e1afd807b75f43aa0d2e4a37584b5c89d6a80b833f6f9669a7c

An Entity of Type: schema:NewsArticle, within Data Space: data.cimple.eu, associated with source document(s)

Attributes / Values
rdf:type
schema:articleBody
  • Inspur, a leading data center and AI full-stack solutions provider, released the NF5468M6 and NF5468A5 AI servers supporting the latest NVIDIA A100 PCIe Gen 4 GPU at ISC High Performance 2020, offering AI users around the world a computing platform that combines high performance with flexibility.

    Thanks to its agile product design and development capabilities, Inspur is one of the first in the industry to support the NVIDIA A100 Tensor Core GPU and to build a comprehensive next-generation AI computing platform. The A100 GPU brings unprecedented versatility by accelerating a full range of precisions, from FP32 to FP16 to INT8 and all the way down to INT4. This includes the new TF32 precision, which works like FP32 while providing 20X higher FLOPS for AI without requiring any code change. In addition, the NVIDIA A100 offers Multi-Instance GPU technology, which enables a single GPU to be partitioned into seven hardware-isolated instances that work on multiple networks simultaneously. Inspur's two earlier products with the NVIDIA A100, the NF5488M5-D and NF5488A5, are already in mass production.

    The newly released NF5468M6 and NF5468A5 introduce many innovative designs and strike a balance between superior performance and flexibility, meeting increasingly complex and diverse AI computing needs. Both servers offer superb computing performance for high-performance computing and cloud application scenarios, accommodating eight double-width A100 PCIe cards in a 4U chassis. Both support the latest PCIe Gen4 with 64 GB/s of bi-directional bandwidth, a 100% increase over PCIe Gen3 at the same power consumption. This performance meets the requirements of the most complex challenges in data science, high-performance computing, and artificial intelligence.

    In addition, 40 GB of HBM2 memory increases memory bandwidth by 70% to 1.6 TB/s, allowing users to train larger deep learning models. The unique NVIDIA NVLink bridge design provides P2P performance of up to 600 GB/s between two GPUs, significantly increasing training efficiency. Furthermore, two other leading Inspur AI servers, the NF5468M5 and NF5280M5, also support the NVIDIA A100 PCIe Gen 4. As the world's leading AI server manufacturer, Inspur offers an extensive range of AI products and works closely with AI customers to improve AI application performance in scenarios such as voice, semantics, image, video, and search.

    About Inspur
    Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.

    View source version on businesswire.com.
    Contact: Fiona Liu, Liuxuan01@inspur.com
    © 2020 Business Wire, Inc.
    Disclaimer: This material is not an AFP editorial material, and AFP shall not bear responsibility for the accuracy of its content. In case you have any questions about the content, kindly refer to the contact person/entity mentioned in the text of the release.
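The bandwidth figures quoted in the release can be sanity-checked with back-of-the-envelope arithmetic. The sketch below (not from the release) assumes a x16 link at the standard PCIe data rates, 8 GT/s per lane for Gen3 and 16 GT/s for Gen4, both with 128b/130b line coding; the numbers it derives line up with the release's "64GB/s bi-directional" and "100% increase" claims.

```python
# Back-of-the-envelope check of the PCIe bandwidth figures quoted above.
# Assumptions: x16 link, 128b/130b encoding for both Gen3 and Gen4.

GEN3_GTS = 8.0    # transfers per second per lane (GT/s), PCIe Gen3
GEN4_GTS = 16.0   # PCIe Gen4 doubles the per-lane rate
LANES = 16
ENCODING = 128 / 130   # 128b/130b line coding overhead

def pcie_bw_gbs(gts: float) -> float:
    """One-directional bandwidth in GB/s for a x16 link."""
    return gts * LANES * ENCODING / 8   # divide by 8: bits -> bytes

gen3_bidir = 2 * pcie_bw_gbs(GEN3_GTS)  # roughly 31.5 GB/s
gen4_bidir = 2 * pcie_bw_gbs(GEN4_GTS)  # roughly 63 GB/s, marketed as 64 GB/s

print(f"Gen3 bidirectional: {gen3_bidir:.1f} GB/s")
print(f"Gen4 bidirectional: {gen4_bidir:.1f} GB/s")
print(f"Increase: {gen4_bidir / gen3_bidir - 1:.0%}")

# The HBM2 claim is consistent too: a 70% increase to 1.6 TB/s implies
# a baseline of about 1.6 / 1.7 = 0.94 TB/s, i.e. a V100-class part.
print(f"Implied HBM2 baseline: {1.6 / 1.7:.2f} TB/s")
```

Since the per-lane rate doubles and the encoding is unchanged, the 100% bandwidth increase follows directly.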
schema:headline
  • Press Release from Business Wire: Inspur
schema:mentions
schema:author
schema:datePublished
http://data.cimple...sPoliticalLeaning
http://data.cimple...logy#hasSentiment
http://data.cimple...readability_score
http://data.cimple...tology#hasEmotion