• DominusOfMegadeus@sh.itjust.works
    22 hours ago

    Let me rephrase my question: Please fall over yourself to explain to me why these devices are no match for Nvidia’s top cards like the H100/B100. I wish to understand.

    Cheers

    • vzqq@lemmy.blahaj.zone
      20 hours ago

      When making high-performance chips, the main figure of merit is how small you can make the individual switching elements. Smaller means faster switching, but also less energy needed per switch, which in turn means less heat generation, and so on.
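      A rough way to see the energy point is the classic CMOS dynamic-power estimate, P ≈ α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). Shrinking transistors lowers C and lets you lower V, and power falls with the square of V. A minimal sketch, with made-up illustrative numbers rather than real process-node data:

```python
def dynamic_power(alpha, capacitance, voltage, frequency):
    """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
    return alpha * capacitance * voltage ** 2 * frequency

# Illustrative only: cutting switched capacitance by 30% and supply
# voltage by 10% at the same clock cuts switching power to about
# 0.7 * 0.9^2 ≈ 57% of the original.
old = dynamic_power(0.2, 1.0, 1.0, 3.0e9)
new = dynamic_power(0.2, 0.7, 0.9, 3.0e9)
print(new / old)  # ≈ 0.567
```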

      The smallest transistors can only be made by a specific company in Taiwan, and companies like Nvidia and Apple compete for every single wafer (unassembled chips) that comes out of its fabs. That company sits at the end of a global supply chain: these chips can only be made if a bunch of countries all cooperate. One of the main policy goals of the Western allies over the last decade or so has been to shut China out of this industry to keep it from developing the same capability.

      If you don’t have access to the smallest transistors, you have to make some pretty dire trade-offs: slower chips, fewer cores per chip, that kind of thing. That’s the problem Huawei faces: no matter how good a chip they design, it will always be at a disadvantage unless they can access the technology to make smaller transistors.

      The catch is that the factory is already running at capacity, and the big firms snap up most of the supply before it even hits the market. And that’s before we take the various sanctions into account. So for many users, a slower chip you can actually get will always beat a faster one you can’t.

        • brucethemoose@lemmy.world
          18 hours ago

          Just to add to this, the biggest moat Nvidia has is not transistor density, but their software ecosystem.

          Ever since around the GTX 200 series in 2008, Nvidia’s CUDA stack has been the standard for academic research, and it basically only works on their GPUs. Anything new gets researched on Nvidia GPUs, then tweaked for enterprise deployment on Nvidia GPUs… if you want it on something else, you basically have to start from scratch and dump a tremendous amount of brainpower into optimization.
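          The shape of that lock-in is easy to sketch: kernels get written and hand-tuned for one backend, and every other backend is an unimplemented stub until someone does the porting work. A toy illustration (all names here are made up, not a real library):

```python
# Toy illustration of backend lock-in: only the "cuda" entry has ever
# been implemented and tuned, so every other target fails until
# someone ports (and re-optimizes) the kernel by hand.
KERNELS = {
    "cuda": lambda a, b: [x + y for x, y in zip(a, b)],  # years of tuning live here
}

def vector_add(a, b, backend="cuda"):
    try:
        return KERNELS[backend](a, b)
    except KeyError:
        raise NotImplementedError(
            f"no {backend!r} kernel yet -- port it from the CUDA version"
        )

print(vector_add([1, 2], [3, 4]))             # works: [4, 6]
# vector_add([1, 2], [3, 4], backend="rocm")  # raises NotImplementedError
```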

          AMD’s in an interesting position here because they’ve been making Nvidia GPU competitors for literally decades. Their architectures are actually quite similar, which is why it’s easier to ‘emulate’ Nvidia on AMD than on pretty much anything else.
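          That similarity is why AMD’s ROCm stack ships translation tools (hipify-perl, hipify-clang) that mechanically rename CUDA runtime calls to their HIP equivalents: the two APIs map almost one-to-one. A drastically simplified sketch of the idea (the real tools parse the source properly; this is just a toy string substitution over a few real API names):

```python
# CUDA's runtime API maps nearly one-to-one onto AMD's HIP API, so a
# large part of porting is mechanical renaming. These mappings are the
# real ones; the string-replace approach is the toy part.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Toy hipify: rewrite known CUDA runtime calls as HIP calls."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(hipify("cudaMalloc(&ptr, n); cudaFree(ptr);"))
# -> hipMalloc(&ptr, n); hipFree(ptr);
```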

          …That being said, the Chinese have made tremendous progress busting out of the Nvidia software ecosystem, hence these chips are actually being used for real work.

    • Truscape@lemmy.blahaj.zone
      20 hours ago

      In his comment he’s arguing that comparing raw performance is moot, since an affordable, available supply that can undercut Nvidia in the same role would be quite a blow to Nvidia’s market dominance (especially outside the US).