Huawei’s CloudMatrix 384 AI cluster is gaining serious traction with major Chinese tech firms, which are reportedly impressed by how the machine stacks up against NVIDIA’s offerings.
Huawei shows no sign of slowing down. The company is pushing its AI hardware ambitions aggressively, especially now that NVIDIA’s position in China has become shaky. We have covered the CloudMatrix 384 before: it is built entirely on domestic technology, and it is giving NVIDIA real competition, at least in the Chinese market. Now, according to the Financial Times, ten companies have already adopted the system, a significant win for Huawei.
These firms have reportedly already deployed the AI cluster in their data centers. None of them have been named, but they are said to be among Huawei’s top-tier customers. We have dug into the CloudMatrix 384 in depth before; the short version is that it holds its own against NVIDIA’s most powerful rack-scale system, the GB200 NVL72, showing that China can field high-end computing without relying on anyone else.
On the spec sheet, the CloudMatrix 384 (CM384 for short) runs on 384 Ascend 910C chips arranged in an all-to-all topology, meaning every chip has a direct communication path to every other chip. That is roughly five times the number of chips NVIDIA packs into the GB200 NVL72. The cluster delivers 300 PetaFLOPS of BF16 compute, nearly double what the GB200 manages, but the trade-off is steep: it draws about 3.9 times as much power as NVIDIA’s system.
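To put that efficiency trade-off in perspective, here is a minimal back-of-the-envelope sketch in Python. The CM384’s 300 PFLOPS and the 3.9x power ratio come from the figures above; the GB200 NVL72’s roughly 180 PFLOPS of dense BF16 compute and roughly 145 kW rack power are assumed reference values, so treat the absolute numbers as illustrative rather than official.

```python
# Back-of-the-envelope efficiency comparison.
# CM384 figures (300 PFLOPS BF16, ~3.9x the GB200's power draw) come from the
# report above; the GB200 NVL72 numbers (~180 PFLOPS dense BF16, ~145 kW) are
# assumed reference values used only for illustration.

GB200_NVL72_PFLOPS = 180.0    # assumed dense BF16 throughput, PFLOPS
GB200_NVL72_POWER_KW = 145.0  # assumed rack power draw, kW

CM384_PFLOPS = 300.0                               # reported BF16 throughput
CM384_POWER_KW = GB200_NVL72_POWER_KW * 3.9        # reported ~3.9x power draw


def pflops_per_kw(pflops: float, power_kw: float) -> float:
    """Throughput per kilowatt, a rough efficiency metric."""
    return pflops / power_kw


cm384_eff = pflops_per_kw(CM384_PFLOPS, CM384_POWER_KW)
gb200_eff = pflops_per_kw(GB200_NVL72_PFLOPS, GB200_NVL72_POWER_KW)

print(f"CM384:       {cm384_eff:.2f} PFLOPS/kW")
print(f"GB200 NVL72: {gb200_eff:.2f} PFLOPS/kW")
print(f"NVIDIA efficiency edge: {gb200_eff / cm384_eff:.1f}x")
```

Under these assumptions, NVIDIA keeps roughly a 2.3x advantage in compute per watt, which matches the framing above: the CM384 wins on raw throughput by brute force, not on efficiency.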
Then there is the price tag: a CloudMatrix 384 will set you back around $8 million, almost triple what you would pay for NVIDIA’s setup. Huawei is clearly not competing on price; the pitch is a fully in-house, domestically developed system. It is a power move, not a bargain.
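As a rough illustration of what that premium means per unit of compute, the short sketch below extends the same assumptions to cost. The $8 million figure is from the report; the roughly $3 million NVIDIA price implied by “almost triple” and the ~180 PFLOPS BF16 figure are assumptions.

```python
# Rough cost-per-compute comparison. The $8M CM384 price is reported; the ~$3M
# GB200 NVL72 price is inferred from the "almost triple" comparison, and the
# ~180 PFLOPS BF16 figure is an assumed reference value.

CM384_PRICE_USD = 8_000_000
CM384_PFLOPS = 300.0

GB200_NVL72_PRICE_USD = 3_000_000  # assumed, implied by "almost triple"
GB200_NVL72_PFLOPS = 180.0         # assumed dense BF16 throughput

cm384_cost = CM384_PRICE_USD / CM384_PFLOPS
gb200_cost = GB200_NVL72_PRICE_USD / GB200_NVL72_PFLOPS

print(f"CM384:       ${cm384_cost:,.0f} per PFLOPS")
print(f"GB200 NVL72: ${gb200_cost:,.0f} per PFLOPS")
```

On these numbers, the CM384 works out to roughly $27,000 per PFLOPS versus roughly $17,000 for the GB200 NVL72, so buyers are paying for independence from NVIDIA as much as for raw performance.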
The AI hardware war between the two camps shows no sign of cooling down.