Sorry if I’m not the first to bring this up. It seems like a simple enough solution.

  • JackGreenEarth@lemm.ee

    What other company besides AMD makes GPUs, and what other company makes GPUs that are supported by machine learning programs?

      • jon@lemmy.tf

        AMD has ROCm, which tries to get close. I’ve been able to get some CUDA applications running on a 6700 XT, although they’re noticeably slower than on a comparable Nvidia card. Maybe we’ll see more projects add native ROCm support now that AMD is trying to cater to the enterprise market.
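
        For what it’s worth, one common hurdle with that card: the 6700 XT’s gfx1031 ISA isn’t on ROCm’s official support list, so a widely shared workaround is to advertise the supported gfx1030 ISA before the runtime loads. A minimal sketch (the environment variable is real; whether it works on any given stack is not guaranteed):

        ```python
        # Hedged workaround sketch: ROCm doesn't officially support the
        # RX 6700 XT (gfx1031), so many people spoof the supported RDNA2
        # ISA (gfx1030). Must be set BEFORE importing a ROCm build of a
        # framework such as PyTorch, or the override has no effect.
        import os

        os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"
        # import torch  # a ROCm build would now treat the card as gfx1030

        print(os.environ["HSA_OVERRIDE_GFX_VERSION"])  # 10.3.0
        ```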

        • Turun@feddit.de

          They kinda have that, yes. But it wasn’t supported on Windows until this year, and in general it’s still not officially supported on consumer graphics cards.

          Still hoping it will improve, because AMD ships more VRAM at the same price point, but ROCm feels half-assed when you look at how little official support AMD has actually invested in it.

        • meteokr@community.adiquaints.moe

          I don’t own any Nvidia hardware on principle, but ROCm is nowhere near CUDA as far as mindshare goes. At this point I’d rather just have a CUDA→ROCm shim I can use, the same way Proton translates DirectX to Vulkan. Fighting for mindshare sucks, so trying to get every dev to support it feels like a massive uphill battle.
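
          Something in that spirit already exists at the source level: AMD’s hipify tools rewrite CUDA runtime calls to their HIP equivalents, rather than intercepting them at run time the way Proton does. A toy illustration of the idea (the name mapping below is a tiny hand-picked subset, not the real tool’s table):

          ```python
          # Toy sketch of source-level CUDA -> HIP translation, in the
          # spirit of AMD's hipify-perl. The mapping covers only four
          # runtime calls for illustration; the real tool knows hundreds.
          import re

          CUDA_TO_HIP = {
              "cudaMalloc": "hipMalloc",
              "cudaMemcpy": "hipMemcpy",
              "cudaFree": "hipFree",
              "cudaDeviceSynchronize": "hipDeviceSynchronize",
          }

          def hipify(source: str) -> str:
              # Replace whole-word CUDA runtime calls with HIP equivalents.
              pattern = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")
              return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

          print(hipify("cudaMalloc(&ptr, n); cudaFree(ptr);"))
          # hipMalloc(&ptr, n); hipFree(ptr);
          ```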

    • Dudewitbow@lemmy.ml

      AMD supports ML; it’s just that a lot of smaller projects are built with CUDA backends and don’t have the developers to port from CUDA to OpenCL or similar.

      Some of the major ML libraries that used to be built around CUDA, like TensorFlow, have already added non-CUDA backends, but that’s only because TensorFlow is open source, ubiquitous in the scene, and literally has Google behind it.

      ML for more niche uses is basically in a chicken-and-egg situation. People won’t use other GPUs for ML because there are no devs working on non-CUDA backends, and no one works on non-CUDA backends because the devs end up buying Nvidia, which is exactly what Nvidia wants.
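
      On the code side, one way out of that loop is to stop hard-coding the backend at all and probe for whatever is present, in preference order. A toy sketch (the availability lambdas are stand-ins for real runtime probes such as importing a CUDA or ROCm runtime):

      ```python
      # Toy sketch of backend-agnostic dispatch: register backends in
      # preference order and pick the first one whose probe succeeds.
      # The lambdas below are hard-coded stand-ins for real checks.
      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Backend:
          name: str
          available: Callable[[], bool]

      BACKENDS = [
          Backend("cuda", lambda: False),  # pretend: no Nvidia card
          Backend("rocm", lambda: True),   # pretend: AMD card present
          Backend("cpu", lambda: True),    # always available fallback
      ]

      def select_backend(backends: List[Backend] = BACKENDS) -> str:
          # First available backend in preference order wins.
          return next(b.name for b in backends if b.available())

      print(select_backend())  # rocm
      ```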

      There are a bunch of followers but a lack of leaders to move things toward a more open compute environment.

      • PlatinumSf@pawb.social

        Huh, my bad. I was operating off of old information. They’ve actually already released the SDK and APIs I was referring to.

    • coffeetest@kbin.social

      My Intel Arc 750 works quite well at 1080p and is perfectly sufficient for me. If people need hyper refresh rates and resolutions and all the bells and whistles, then have fun paying for it. But if you need functional, competent gaming at US$200, Arc is nice.

    • PlatinumSf@pawb.social

      No joke, probably Intel. The cards won’t hold a candle to a 4090, but they’re actually pretty decent for both gaming and ML tasks. AMD definitely needs to speed up the timeline on their new ML API, though.

      • JoeCoT@kbin.social

        Problem with Intel cards is that they’re a relatively recent release, and not very popular yet. It’s going to be a while before games optimize for them.

        For example, the Arc cards aren’t supported in Starfield. They might run, but not as well as they could if Starfield had been optimized for them too. But the card has only been out a year.

        • Luna@lemmy.catgirl.biz

          The more people use Arc, the quicker it becomes mainstream and gets optimized for, but Arc is still considered “beta” and slow in people’s minds, even though there have been huge improvements and the old benchmarks don’t hold any value anymore. Chicken-and-egg problem. :/

          Disclaimer: I have an Arc 770 16GB because every other sensible upgrade path would have cost 3x-4x more for the same performance uplift (and I’m not buying an 8GB card in 2023+). But now I’m starting to get really angry at people blaming Intel for “not supporting this new game.” All that GPUs should support is the graphics API, to the letter of the specification; all this day-1 patching and driver hotfixing to make games run decently is BS. Games need to feed the API, and GPUs need to process what the API tells them to, nothing more and nothing less. It’s a complex issue, and I think Nvidia held the monopoly for too long: everything is optimized for Nvidia at the cost of making it worse for everyone else.

          • danA

            Isn’t the entire point of DirectX and OpenGL that they abstract away the GPU-specific details? You write code once and it works on any graphics card that supports the standard. It sounds like games are moving back toward what we had in the old days, with specific code paths per graphics card?

            • Luna@lemmy.catgirl.biz

              I think the issue started with GPU-architecture-tailored technologies like PhysX or GameWorks, but I’m probably wrong. For example, I have nothing against PhysX as such, but it only runs natively (fast) on Nvidia cores. I do have an issue when there’s a monetary incentive or an exclusive partnership between Nvidia and game studios: if you want to play a game with all the features, bells, and whistles it was designed with, you’d need to buy their overpriced (and, this generation, underperforming) GPUs, because you’d be missing out on features or performance on any other GPU architecture.

              If this trend continues, everybody will need a €1k+ GPU from Nvidia and a €1k+ GPU from AMD, and will have to hot-swap between them depending on what game they wish to play.

    • Erdrick@beehaw.org

      I jumped to team red this build.
      I have been very happy with my 7900XTX.
      4K max settings / FPS on every game I’ve thrown at it.
      I don’t play the latest games, so I guess I could hit a wall if I play the recent AAA releases, but many times they simply don’t interest me.