Also known as @VeeSilverball

  • 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • I have no plans to support p92 precisely because it’s going to “push” users together as a commodity. What Meta has jurisdiction over is not its communities but rows of data - in the same way that Reddit’s admins have conflicted with its mods, it is inherently not organized in a way that can properly represent any specific community or its actions.

    So the cost-benefit from the side of the extant fedi is very poor: it won’t operate in a standard way, because it can’t, and the quality of each additional user won’t be worth the pain - most of them will just be confused at being presented with a new space, and if the nature of that space is hidden from them, it will become an endless source of misunderstanding.

    If a community using a siloed platform wants to federate, that should be a self-determined thing, and they should front the effort to remain on a similar footing to other federated communities. The idea that either side here inherently wants to connect and merely “needs a helping hand” is simply wrong.


  • I believe there is a healthy relationship between instances and magazines, actually: the way in which topical forums tend to be “hive-mindy” fits well with Fediverse instance culture. At Reddit scale, because all users are homogeneous - just a name, a score, and a post history - topical discussion tends to get “locked down” into a bureaucratic game of dancing around every rule. Here, instead, you can have “this board is primarily about this” but then allow in a dose of chaos: afford some privilege to the instance users who already have a set of norms and values in mind, and push federated comments out of view as needed, where you know the userbases are destined to get into unproductive fights.

    This also combats common influencer strategies that rely on bots and sockpuppets, because you’ve already built in the premise of an elite space.

    There’s work needed on the moderation technology of #threadiverse software to achieve this kind of vision - something like the ranking sketch below - but it’s something that will definitely be learned as we go along.

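    A minimal sketch of what such a ranking rule could look like, assuming a hypothetical comment structure and purely illustrative weights (this is not real kbin or Lemmy code):

      # Illustrative only: privilege local-instance comments and demote
      # federated ones from instances known to clash with local norms.
      from dataclasses import dataclass

      @dataclass
      class Comment:
          author: str    # e.g. "alice@kbin.social" (hypothetical)
          score: int     # net votes
          instance: str  # host the author belongs to

      LOCAL_INSTANCE = "kbin.social"
      DEMOTED = {"hostile.example"}  # destined for unproductive fights

      def rank_key(c: Comment) -> float:
          weight = 1.0
          if c.instance == LOCAL_INSTANCE:
              weight = 1.5   # boost users who already share local norms
          elif c.instance in DEMOTED:
              weight = 0.25  # push out of view rather than ban outright
          return c.score * weight

      comments = [
          Comment("alice@kbin.social", 4, "kbin.social"),
          Comment("bob@hostile.example", 9, "hostile.example"),
      ]
      for c in sorted(comments, key=rank_key, reverse=True):
          print(c.author, rank_key(c))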

  • Mastodon’s export portability mostly covers the local social-graph aspects (follows, blocks, etc.), and while it has an archive function, people frequently lament losing their old posts and those graph relationships when they move.

    Identity attestation is solvable in a legible fashion with any external mechanism that links back to report “yes, the account at xyz.social is real”, and some Mastodon users already do this - it could be through a corporate website, a self-hosted server, or something going across a distributed system (IPFS, Tor, blockchains…). There are many ways to describe identity beyond that, though: for example, a landing-page service like Linktree could ease browsing the different facets of an identity, or “following” could be described in more than local terms. A sketch of the link-back check is below.

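    A minimal sketch of that link-back check, in the spirit of Mastodon’s rel="me" verification; the URLs are placeholders, and a real verifier would also handle redirects, timeouts, and TLS errors:

      # Fetch a page the person controls and check whether it links back
      # to their fediverse profile with rel="me".
      from html.parser import HTMLParser
      from urllib.request import urlopen

      PROFILE_URL = "https://xyz.social/@someone"  # account being attested
      CLAIMED_SITE = "https://example.com/about"   # externally controlled page

      class RelMeFinder(HTMLParser):
          def __init__(self):
              super().__init__()
              self.verified = False

          def handle_starttag(self, tag, attrs):
              a = dict(attrs)
              rel = (a.get("rel") or "").split()
              if tag in ("a", "link") and "me" in rel and a.get("href") == PROFILE_URL:
                  self.verified = True

      finder = RelMeFinder()
      with urlopen(CLAIMED_SITE) as resp:
          finder.feed(resp.read().decode("utf-8", errors="replace"))
      print("attested" if finder.verified else "no link-back found")
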
    I would consider these all high-effort problems to work on, since a lot of it has to do with interfaces, UX, and privacy tradeoffs. If we aim to archive everything, then we have to make an omniscient distributed system, which, besides presenting a scaling issue, conflicts with privacy and control over one’s data - so that is probably not the goal. But asking everyone to just make a lot of backups, republish stuff by hand, and set up their own identity service is not right either.



  • If you look at the links on each post, you’ll notice that some reference a URL that goes off of your local instance. In Lemmy these appear as icons; in kbin, under the “more” link. Sometimes it’s unclear who or where I’m interacting with, and examining the URL - the host part names the instance, as in the sketch below - helps me get some idea of it. In federated social media, different instances often develop different subcultures, but since they can access each other, you have more dimensions of interaction and more choices about how to behave.

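    A quick sketch of that URL check (the links are placeholders):

      # The host part of a federated link tells you which instance
      # a post or community actually lives on.
      from urllib.parse import urlparse

      links = [
          "https://kbin.social/m/fediverse",   # local, if kbin.social is home
          "https://lemmy.world/c/technology",  # federated: another instance
      ]
      for link in links:
          print(urlparse(link).hostname, "->", link)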

  • Not dead, just sleeping. It’s a tougher, higher-interest-rate market, which cuts out a lot of the gambling behavior. I remain invested, but my guiding principle has shifted away from financial and trad-economic terms to this:

    Blockchains are valuable where they secure valuable information. Therefore, if a blockchain adds more valuable information, it becomes more valuable.

    And that’s it. You don’t have to introduce markets and trading to make the point, but it positions those elements in a supporting role, and it gets at one of the most pressing issues of today: where should our sources of truth online start? Blockchains can’t solve the problems of false sensation, reasoning, or belief, but they fill in certain technical gaps where we currently rely on handing custody to someone’s database and hoping that nothing happens, or that they’re too big to fail. It’s just a matter of aligning the applications towards the role of a public good, and the air is clear for that right now.





  • The thing about larger-scale architecture is that you can be correct, in any specific sense, that it’s more than you need - but when you actually try to build the thing across a development team, you end up there anyway, because the code reflects the organization, and having it broken up like that lets you more easily rewrite your previous decisions.

    At the small scale this occurs when you notice that the way in which you have to approach a feature is linguistically different - it needs conversion to a substantially different data structure, or an interface that compiles imperative commands from a definition (a toy sketch of the latter is below). The whole idea of the database having a general-purpose structure and its own query language emerges from that - it lets you defer the question of exactly how you want to use the data. The more configuration you add, the more of those layers you need. When you start supporting enterprise-grade flexibility, it gets out of control, and you end up with a configuration language that resembles a general-purpose programming environment, but worse.

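    A toy illustration of compiling imperative commands from a definition - the spec format here is invented for the example:

      # A declarative table spec is "compiled" into SQL DDL, deferring
      # the question of exactly how the data will be used.
      TABLE_SPEC = {
          "name": "posts",
          "columns": {
              "id": "INTEGER PRIMARY KEY",
              "author": "TEXT NOT NULL",
              "body": "TEXT",
          },
      }

      def compile_create(spec: dict) -> str:
          cols = ",\n  ".join(f"{n} {t}" for n, t in spec["columns"].items())
          return f"CREATE TABLE {spec['name']} (\n  {cols}\n);"

      print(compile_create(TABLE_SPEC))
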
    Casey Muratori talks about this kind of thing in some depth.

    In the end, the point of the code is to help you “arrive in the future” in some sense - it’s instrumental; the point of automating it is to improve the quality of your result by some metric (e.g. fewer errors). For a lot of computations, that means you should just use a spreadsheet - it aids the data-entry task, it automates enough of the detail that you can get things done, but it also gets out of the way instead of turning into a professionalized project.


  • What always helped centralized social was an environment of rapid growth. For the majority of people there wasn’t a “before” to compare to whatever they signed up to, so a play like the one Reddit made, which isn’t about the quality of the content but “whatever gets people in the door”, worked - focusing all your energy on hypergrowth was the Web 2.0 strategy. But my own “before” goes back to browsing Usenet over a dial-up shell account (terminal access only). The technology used then was primarily characterized by being efficient to store and process, which led to a federated model that shared text threads.

    The reason people switched from Usenet to early web forums was also a combination of not having a “before”, plus some new conveniences. Usenet moderation tools were very limited, so spam and derangement were common. Because the design was made just for text, you didn’t have image-focused content, but you also didn’t experience the things images get moderated for now - you could post a UU-encoded file that contained an image, or a link to an image, but you couldn’t shove it in people’s faces. And quoting replies in a tree was normalized, if rather disorganized - long-running threads often got “forked”.

    The model of web forums that became most popular - flat topic threads, more images, centralized moderation - caused as many issues as it solved. Flat threading with no post ranking makes people reply “first” at the top of the thread, images create a whole attack surface, and centralized mods have more power to trip on. But they could provide a better experience along the narrow set of things they wanted the forum to be about, and that made all the difference. That’s how the centralized model works. When I think of places like Something Awful or Newgrounds in their original heyday - it’s really gatekeepy stuff. There were tastemakers and you followed their lead or else.

    Reddit started with a lot of link aggregation, which was also Digg’s thing - that model “pushes” more content than a regular forum, so it helps build broad-audience engagement. But Reddit added more Usenet-like elements, and those gradually took over a lot of the niches as more people started using Reddit to ask questions and make statements addressing a specific community.

    Something that I think defines the federated space is that there is less “push”. The power is more distributed, and there are fewer gates to keep. Reddit represented those values for a while, and now it obviously doesn’t, so the users who were there for that are going to drift this way very quickly.


  • Part of what propelled Digg to stardom was the desire for a central “town square” that didn’t yet exist in the Web-centric internet of the 2000s. (Never mind that Usenet existed - it didn’t have a lot of the conveniences of web forums and had gotten overrun with spam, so it just wasn’t part of the discussion.) There were a few larger, topic-centric sites like Slashdot, Something Awful, Fark, Newgrounds, etc. These older sites had various limits on user submissions and barriers to entry, in part because it was out of their scope to try to do more than that.

    Digg hit on the combination of user-submitted content, a simple voting interface, and a secret algorithm that has defined most of Web 2.0 - but spam, moderation, and power users were always an issue, and the best answer anyone seems to have had is “decentralize more”. Reddit did some of that by splitting things out into topical feeds again, while unifying the login and access to all of them and letting users self-appoint as moderators - in essence, giving power users their own fiefdoms to keep the peace. Twitter likewise absorbed some Digg users because it relied a lot on users self-moderating their own feeds. Other platforms went down the path of having the algorithm do more of the moderation and becoming more TV-like, which is more profitable but volatile, since that makes the platform blameworthy for everything that slips through.

    So what I feel has happened since is mostly an intensification brought on by being for-profit and taking investment capital - unlike some of those older sites, which are still around and kicking. It’s hard to resist changing your business model towards profit maximization when you’ve taken a lot of investment. But then, the useful service that Reddit was providing when it launched is a commodity now, and with federated social media, the power dynamics are even more diffused.

    But every time this happens, there are people who want to stay behind, and that’s because power dynamics aren’t uniformly agreed upon. Some people don’t want it to be objectively challenging to hold power; they just want a game they can win.


  • I’ve had some thoughts on, essentially, doing more of what historically worked: a mix of “archival-quality materials” and “incentives for enthusiasts”. If we only focus on accumulating data, as the Internet Archive does, the result is valuable, but we soak up a lot of spam in the process, and that creates some overwhelming costs.

    The materials aspect generally means pushing for lower-fidelity, uncomplicated formats, but this runs up against what I call the “terrarium problem”: to preserve a precious rare flower exactly as it is, you can’t just take a picture; you have to package up the entire jungle. Like, we have emulators for old computing platforms, and they work, but someone has to maintain them, and if you wanted to write something new for those platforms, you are most likely dealing with a “rest of the software ecosystem” that is decades out of date. So I believe there’s an element of encoding valuable information in such a way that it can be meaningful without requiring the jungle - e.g. viewing text outside of its original presentation. That tracks with humanity’s oldest stories, and how they contain some facts that survived generations of retellings.

    The incentives part is tricky. I am crypto- and NFT-adjacent, and I use this identity to participate in that unabashedly. But my view on what it’s good for has shifted from the market framing towards an examination of historical art markets, curation, and communal memory. Having a story be retold is our primary way of preserving it - and putting information on-chain (like, actually on-chain; the state of the art can secure a few megabytes) creates a long-term incentive for the chain to “retell its stories” as a way of justifying its valuation (a rough cost sketch follows below). It’s the same reason why museums are more than “boring old stuff”.

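    A back-of-envelope sketch of what “a few megabytes” costs to put on-chain, assuming Ethereum-style calldata at 16 gas per byte; the gas and ETH prices are placeholder assumptions, and real figures vary by chain and method:

      # Rough cost of raw on-chain storage via calldata. Large payloads
      # would need chunking across transactions (block gas limits apply).
      GAS_PER_BYTE = 16            # EIP-2028 rate for nonzero calldata bytes
      GAS_PRICE_GWEI = 20          # assumed
      ETH_PRICE_USD = 2000.0       # assumed

      def storage_cost_usd(n_bytes: int) -> float:
          gas = n_bytes * GAS_PER_BYTE
          eth = gas * GAS_PRICE_GWEI * 1e-9  # gwei -> ETH
          return eth * ETH_PRICE_USD

      for size in (1_000, 1_000_000, 3_000_000):  # 1 KB, 1 MB, a few MB
          print(f"{size:>9} bytes ~ ${storage_cost_usd(size):,.2f}")
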
    When you go to a museum you’re experiencing a combination of incentives: the circumstances that built the collection, the business behind exhibiting it to the public, and the careers of the staff and curators. A blockchain’s data is a huge collection - essentially a museum in the making, with the market element as a social construct that incentivizes preservation. So I believe archival is a thing blockchains could be very good at, given the right framing. If you like something and want it to stay around, that’s a medium that will be happy to take payment to do so.