Super X AI Technology Limited (NASDAQ: SUPX) (“the Company” or “SuperX”) today announced the official launch of its All-in-One Multi-Model Server (“MMS”). As SuperX’s first enterprise-grade AI infrastructure product to support dynamic collaboration among multiple models, the MMS is designed to be ready out of the box, to fuse multiple models, and to integrate deeply with real application scenarios. It provides enterprises of all sizes with secure, efficient, full-stack AI solutions in customized configurations, marking a new era in which large-model applications evolve toward collaboration among multi-model intelligent agents. The MMS is an integrated solution for large enterprise AI clients and follows the July 30, 2025 debut of SuperX’s XN9160-B200 AI Server, expanding the Company’s portfolio of enterprise AI infrastructure products.
To help enterprise clients secure a decisive competitive edge in the new wave of generative AI, the All-in-One Multi-Model Server comes pre-configured with high-performance large language models (LLMs), including OpenAI’s newly released top-tier open-source models GPT-OSS-120B and GPT-OSS-20B. According to OpenAI’s press release of August 5, 2025, benchmark results show that GPT-OSS-120B matches the performance of several leading closed-source models and even surpasses them on some key tests, such as Massive Multitask Language Understanding (MMLU) and the American Invitational Mathematics Examination (AIME). This means SuperX’s customers will be able to gain world-class AI inference and knowledge-processing capabilities at superior cost efficiency.




































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































