
MSI launches scalable AI server solutions with NVIDIA technology
MSI has introduced new AI server solutions built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, designed to support the growing requirements of enterprise, HPC, and accelerated computing workloads.
The company's new server platforms feature modular and scalable building blocks aimed at addressing increasing AI demands in both enterprise and cloud data centre environments. Danny Hsu, General Manager of Enterprise Platform Solutions at MSI, said, "AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities. With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."
One of the main highlights is a rack solution based on the NVIDIA Enterprise Reference Architecture, built around a four-node scalable unit of MSI AI servers using NVIDIA MGX. Each server in the solution houses eight NVIDIA H200 NVL GPUs and is paired with the NVIDIA Spectrum-X networking platform to support scalable AI workloads. The modular setup can expand to a maximum of 32 servers, allowing up to 256 NVIDIA H200 NVL GPUs within a single deployment.
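For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python using the figures quoted above (eight GPUs per server, up to 32 servers); the per-GPU memory figure of 141GB HBM3e for the H200 NVL is taken from NVIDIA's public specifications and is an assumption of this sketch, not an MSI-stated number.

```python
# Back-of-the-envelope scale for the MGX-based rack solution described above.
# Assumptions: 8x NVIDIA H200 NVL per server, up to 32 servers per deployment,
# and 141 GB of HBM3e per H200 NVL (per NVIDIA's published spec sheet).

GPUS_PER_SERVER = 8
MAX_SERVERS = 32
HBM_PER_GPU_GB = 141  # assumed from NVIDIA's public H200 NVL specification

max_gpus = GPUS_PER_SERVER * MAX_SERVERS
total_hbm_tb = max_gpus * HBM_PER_GPU_GB / 1000

print(f"Maximum GPUs per deployment: {max_gpus}")       # 256
print(f"Aggregate GPU memory: ~{total_hbm_tb:.1f} TB")  # ~36.1 TB
```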
MSI states that this architecture is optimised for multi-node AI and hybrid applications and is designed to handle the complex computational tasks expected of modern data centre operations. It is built to accommodate a range of use cases, including large language models and other demanding AI workloads.
The AI server platforms have been constructed using the NVIDIA MGX modular architecture, establishing a foundation for accelerated computing in AI, HPC, and NVIDIA Omniverse contexts. The MSI 4U AI server can be configured with either Intel or AMD CPUs and is aimed at large-scale AI projects such as deep learning training and model fine-tuning. The CG480-S5063 platform features dual Intel Xeon 6 processors and eight full-height, full-length, dual-width GPU slots supporting the NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition at up to 600W each. It offers 32 DDR5 DIMM slots and 20 PCIe 5.0 E1.S NVMe bays for high memory bandwidth and rapid data access, with a modular design that supports both storage needs and scalability.
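As an illustration of how a node like this might be sanity-checked after provisioning, the short Python sketch below uses the NVIDIA Management Library bindings (the `pynvml` module from the `nvidia-ml-py` package) to enumerate visible GPUs and report their memory; the expectation of eight GPUs reflects the CG480-S5063 configuration described above and is an assumption of this sketch, not an MSI-supplied tool.

```python
# Minimal GPU inventory check using NVML bindings (pip install nvidia-ml-py).
# Assumes a Linux host with the NVIDIA driver installed; 8 GPUs expected on a
# fully populated CG480-S5063-style node.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName, nvmlDeviceGetMemoryInfo,
)

EXPECTED_GPUS = 8  # assumption for a fully populated node

nvmlInit()
try:
    count = nvmlDeviceGetCount()
    print(f"Visible GPUs: {count} (expected {EXPECTED_GPUS})")
    for i in range(count):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = nvmlDeviceGetMemoryInfo(handle)
        print(f"  GPU {i}: {name}, {mem.total / 1e9:.0f} GB total memory")
finally:
    nvmlShutdown()
```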
Another server, the CG290-S3063, is a 2U AI platform also built on the NVIDIA MGX architecture. It pairs a single-socket Intel Xeon 6 processor with 16 DDR5 DIMM slots and four GPU slots supporting up to 600W each. The CG290-S3063 adds PCIe 5.0 expansion, four rear 2.5-inch NVMe bays, and two M.2 NVMe slots, supporting AI tasks from smaller-scale inference to extensive training workloads.
MSI's server platforms have been designed for deployment in enterprise-grade AI environments and support the NVIDIA Enterprise AI Factory validated design. The validated design gives enterprises guidance on developing, deploying, and managing AI (including agentic AI and physical AI) and high-performance computing workloads on the NVIDIA Blackwell platform using their own infrastructure. It combines accelerated computing, networking, storage, and software components for faster deployment and reduced risk in AI factory roll-outs.
MSI is also presenting the AI Station CT60-S8060, a workstation built on the NVIDIA DGX Station reference architecture and designed to deliver data centre-grade AI performance from the desktop. It features the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of coherent memory, intended to accelerate large-scale training and inference. The solution is targeted at teams that need a high-performance desktop AI development environment and integrates the NVIDIA AI Enterprise software stack for managing system capabilities.