A well-backed piece by Benn Jordan, as usual, on the basics of how misinformation farms work according to their own internal documentation, the goal of creating a post-truth world, and why a sizable percentage of Twitter users start talking about OpenAI’s terms of service every time OpenAI updates them.

  • danA · 3 hours ago

    the fediverse seems to be far more resilient against bots, since we can defederate from an instance that gets taken over,

    It’s very easy to spin up a new instance, though, so I’m surprised there isn’t more spam. AFAIK most servers still federate with any new server by default, as soon as a user on the new server subscribes to a person/community on an existing server. That’s important for treating new servers equally rather than putting them at a disadvantage, but it also means a freshly created server is federated with before anyone has vetted it.
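    Not any server’s actual code, but a minimal sketch of the policy difference being described here (the function and policy names are made up for illustration): open federation accepts an unknown instance by default, while an allowlist only accepts instances that were explicitly vetted.

    ```python
    # Minimal illustrative sketch -- hypothetical names, not Lemmy's or
    # Mastodon's real federation code.

    def should_federate(new_instance: str, allowlist: set[str] | None, blocklist: set[str]) -> bool:
        """Decide whether to accept activity from a newly seen instance.

        With no allowlist (the common default), any instance that isn't
        explicitly blocked is accepted as soon as a local user subscribes
        to something hosted there. With an allowlist, only listed
        instances are accepted.
        """
        if new_instance in blocklist:
            return False
        if allowlist is not None:
            return new_instance in allowlist
        return True  # open federation: unknown servers are trusted by default


    # Default open federation: an unknown spam server gets in immediately.
    print(should_federate("spam.example", allowlist=None, blocklist=set()))            # True
    # Allowlist mode: the same server is rejected until explicitly added.
    print(should_federate("spam.example", allowlist={"slrpnk.net"}, blocklist=set()))  # False
    ```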

    • ProdigalFrog@slrpnk.net · 3 hours ago

      The Fediseer project from @db0@lemmy.dbzer0.com helps prevent bot farms from proliferating, as new servers require an endorsement from an already trusted instance to become ‘legit’. They can also be marked as untrustworthy, which gets them defederated fairly quickly and limits their reach.
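      Fediseer also exposes a public API that anyone can query to check an instance’s standing. Here’s a rough sketch; the endpoint paths are my assumption from memory, so check the API docs at https://fediseer.com/api before relying on them.

      ```python
      # Rough sketch: look up what Fediseer records about an instance.
      # NOTE: the endpoint paths below are assumptions -- verify against
      # https://fediseer.com/api before using this for anything real.
      import requests

      FEDISEER = "https://fediseer.com/api/v1"

      def instance_standing(domain: str) -> dict:
          """Return the endorsements and censures Fediseer lists for a domain."""
          endorsements = requests.get(f"{FEDISEER}/endorsements/{domain}", timeout=10).json()
          censures = requests.get(f"{FEDISEER}/censures/{domain}", timeout=10).json()
          return {"endorsements": endorsements, "censures": censures}

      if __name__ == "__main__":
          print(instance_standing("slrpnk.net"))
      ```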

      We also have a MUCH higher moderator-to-user ratio than corporate sites: somewhere between 100 and 2,500 users per mod depending on the instance, vs. 250,000 users per mod on sites like Twitter, so we can spot and deal with spam on the network much more effectively.

      • danA · 3 hours ago

        Thanks for the info! I didn’t know about the Fediseer project; I don’t think it existed when I created my Mastodon and Lemmy servers, or I just wasn’t aware of it.