• vzqq@lemmy.blahaj.zone · 23 hours ago

    When making high performance chips, the main figure of merit is how small you can make the individual switching elements. Smaller means faster switching, but also less energy needed per switch, which in turn means less heat generation and so on.
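    As a rough first-order sketch (these are textbook CMOS scaling relations, not numbers from this thread), the energy burned per switching event and the resulting dynamic power look like:

    ```latex
    % First-order CMOS switching relations (standard textbook form):
    % one switching event dissipates roughly the energy stored on the node
    % capacitance, and dynamic power is that energy times the switching rate.
    E_{\text{switch}} \approx C_{\text{load}} \, V_{dd}^{2}
    \qquad
    P_{\text{dyn}} \approx \alpha \, C_{\text{load}} \, V_{dd}^{2} \, f
    ```

    Shrinking the transistor shrinks the load capacitance (and usually lets you lower the supply voltage too), so each switch costs less energy, and that headroom is exactly what you spend on higher clocks or more cores within the same power budget.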

    The smallest transistors can only be made by one specific company in Taiwan, and companies like Nvidia and Apple compete for every single wafer (the silicon discs from which chips are cut) that comes out of that factory. This company sits at the end of a global supply chain: these chips can only be made if a bunch of countries all work together. One of the main policy goals of the Western allies over the last decade or so has been to shut China out of this industry to prevent them from developing this capability.

    If you don’t have access to the smallest transistors, you have to make some pretty dire trade-offs: slower chips, fewer cores per chip, that kind of stuff. That’s the problem Huawei is facing: no matter how good a chip they design, it will always be at a disadvantage unless they can access the technology to make smaller transistors.

    The catch is that that factory is already operating at capacity, and the big firms snap up most of the supply as soon as (or before) it hits the market. And that’s before we take the various sanctions into account. So for many users, the slower chip that you can actually get will always beat the faster one that you can’t.

      • brucethemoose@lemmy.world · 21 hours ago

        Just to add to this, the biggest moat Nvidia has is not transistor density, but their software ecosystem.

        Ever since around the GTX 200 series in 2008, Nvidia’s CUDA software stack has been the standard for academic research, and it basically only works on their GPUs. Research code gets written for Nvidia GPUs, then tweaked for enterprise deployment on more Nvidia GPUs… if you want it on something else, you basically have to start from scratch and pour a tremendous amount of brainpower into optimization.
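        To make the lock-in concrete, here’s a minimal toy sketch (my own illustration, not from any real research codebase): even a trivial GPU kernel is written against CUDA-specific syntax and the CUDA runtime API, so moving it to another vendor means rewriting every line and then redoing the performance tuning.

        ```cuda
        // Toy SAXPY kernel: everything here -- __global__, the <<<...>>> launch,
        // cudaMallocManaged, cudaDeviceSynchronize -- is CUDA-only and has no
        // drop-in equivalent outside Nvidia's toolchain.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            float *x, *y;
            cudaMallocManaged(&x, n * sizeof(float));  // CUDA unified memory
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // CUDA launch syntax
            cudaDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);  // expect 4.0
            cudaFree(x);
            cudaFree(y);
            return 0;
        }
        ```

        Multiply that by millions of lines of research and library code, plus years of hand-tuned kernels, and you get the moat: the porting cost, not the silicon, is what keeps most of the ecosystem on Nvidia.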

        AMD’s in an interesting position here because they’ve been making Nvidia GPU competitors for literally decades. Their architectures are actually quite similar, hence it’s easier to ‘emulate’ Nvidia’s stack on AMD than on pretty much anything else.

        …That being said, the Chinese have made tremendous progress busting out of the Nvidia software ecosystem, hence these chips are actually being used for real work.