• AnyOldName3
    7 points · 4 months ago

    The archive sites used to use the API, which is another reason Reddit wanted to get rid of it. I always found them a great moderation tool: users would edit their posts so they no longer broke the rules, then claim a rogue moderator had banned them for no reason, and there was no way within Reddit itself to prove them wrong.

    • @danA
      1 point · edited · 4 months ago

      What about archive sites like web.archive.org and archive.today? Both still work fine for Reddit posts, and neither is blocked in www.reddit.com/robots.txt, so, so far, Reddit hasn't shown an intent to block them.
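      The allow/deny rules a robots.txt expresses can be checked mechanically. Here's a minimal sketch using Python's stdlib `urllib.robotparser` against a made-up rules file — the agent names and rules below are assumptions for illustration, not Reddit's actual file, which lives at https://www.reddit.com/robots.txt and can change at any time:

```python
from urllib.robotparser import RobotFileParser

# Made-up sample rules, NOT Reddit's real robots.txt: one named crawler
# is allowed everywhere (empty Disallow), everyone else is blocked.
SAMPLE_ROBOTS_TXT = """\
User-agent: ia_archiver
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# An empty "Disallow:" line means "allow everything" for that agent.
print(parser.can_fetch("ia_archiver", "https://www.reddit.com/r/all/"))   # True
print(parser.can_fetch("RandomScraper", "https://www.reddit.com/r/all/")) # False
```

      In practice you'd point `RobotFileParser` at the live file with `set_url()` and `read()` instead of parsing a hard-coded string; the offline form just makes the matching logic visible.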

      • AnyOldName3
        2 points · 4 months ago

        Yeah, the Wayback Machine doesn’t use Reddit’s API, but on the other hand, I’m pretty sure they don’t automatically archive literally everything that makes it onto Reddit - doing that would require the API to tell you about every new post, as just sorting /r/all by new and collecting every link misses stuff.
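        The "sorting /r/all by new misses stuff" point comes down to a polling cap: a listing shows only the newest N items at the moment you fetch it, so anything that scrolls past between polls is never observed. A toy simulation of that effect — the page size and posting rate are made-up numbers, and this is not Reddit's real API:

```python
# Toy model of why polling a "new" listing can miss posts: each poll
# returns only the most recent PAGE_SIZE items, so when more than
# PAGE_SIZE posts arrive between polls, the overflow is never seen.
PAGE_SIZE = 100            # items one poll returns (assumed cap)
POSTS_PER_POLL_GAP = 250   # posts arriving between two polls (assumed rate)

post_ids = list(range(1000))   # post IDs in arrival order
seen = set()
for end in range(POSTS_PER_POLL_GAP, len(post_ids) + 1, POSTS_PER_POLL_GAP):
    # At poll time, the listing exposes only the newest PAGE_SIZE posts.
    seen.update(post_ids[end - PAGE_SIZE:end])

print(f"saw {len(seen)} of {len(post_ids)} posts")  # saw 400 of 1000 posts
```

        A push-style feed (the API telling you about every new post) doesn't have this gap, which is the distinction being drawn above.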

        • @danA
          2 points · 4 months ago

          You don’t need every post, just a collection big enough to train an AI on. I imagine it’s a lot easier to get data from the Internet Archive (whose entire mission is historical preservation) than from Reddit.

          The thing I’m not sure about is licensing, but it seems like that’d be the case for the whole AI industry at the moment.