I write about technology at theluddite.org

  • 1 Post
  • 22 Comments
Joined 1 year ago
Cake day: June 7th, 2023


  • Totally agreed. I didn’t mean to say that it’s a failure if it doesn’t properly encapsulate all complexity, but that the inability to do so has implications for design. In this specific case (as in many cases), the error they’re making is that they don’t realize that the root of the problem they’re trying to solve lies in that tension.

    The platform and environment are something you can shape even without an established or physical community.

    Again, couldn’t agree more! The platform is actually extremely powerful and can easily change user behavior in undesirable ways, which is actually the core thesis of that longer write-up that I linked. That’s a big part of where ghosting comes from in the first place. My concern is that thinking you can just bolt a new thing onto the existing model is to repeat the original error.


  • This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you’re not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities have an overlapping structure in which they mutually impact each other. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with it goes the individual and community accountability.

    I’ve written about this tension before: As we use computers more and more to mediate human relationships, we’ll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.

    Companies that try to whack-a-mole patch the problems with that will find that their patches suffer from the same problem: Their accountability structure is a flat, shallow version of genuine human accountability, and will itself result in pathological behavior. The problem is recursive.


  • Journalists actually have very weird and, I would argue, self-serving standards about linking. Let me copy paste from an email that I got from a journalist when I emailed them about relying on my work but not actually citing it:

    I didn’t link directly to your article because I wasn’t able to back up some of the claims made independently, which is pretty standard journalistic practice

    In my opinion, this is a clever way to legitimize passing off someone else’s research as your own, which is definitely what they did, up to and including repeating some very minor errors that I made.

    I feel similarly about the journalistic ethic of not paying sources. That’s a great way to make sure that all your sources are think-tank-funded people who are paid to have opinions that align with their funding, which is exactly what happens. I understand that paying people would introduce challenges, but that’s a normal challenge that the rest of us have to deal with every fucking time we hire someone. Journalists love to act like people coming forward claiming that they can do X or tell them about Y is some unique problem that they face, when in reality it’s just what every single hiring process exists to sort out.


  • Investment giant Goldman Sachs published a research paper

    Goldman Sachs researchers also say that

    It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product but dressed up in a science-lookin’ paper leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM performs compared to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s junk science.


  • Sounds very doable! My friend has an old claw-foot tub that he lights a fire under. If you want something a little less country, you can buy on-demand electric or propane water heaters and hook your hose up, though I’d expect the electric one wouldn’t be able to keep up at 120 V. The hardest part of this project is probably moving the tub. I say go for it!




  • I know that this kind of actually critical perspective isn’t the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre’s analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or just throughout Agre’s work, which really ought to be required reading for anyone in software.

    edit to add some recommendations: If you think of yourself as a tech person, and don’t necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own “critical awakening.”

    As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial – except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own – that it is characteristic of AI in general (and, no doubt, other technical fields as well).





  • I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I’m much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it’s missing a theory of power. The attention economy isn’t an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I’m always quoting lol):

    Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

    In other words, just because algorithmic social media becomes uninteresting doesn’t mean the death of the attention economy as such, because the attention economy is, in some form, innate to humanity. Today it’s algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

    I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he’s by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I’ll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage their resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people who could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.

    This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the few remaining parts that are special, like, say, Venice, become too valuable for their own survival. They become “touristy,” which is itself a sort of ironically homogenized, commodified authenticity.

    edit: oops I got Tom’s name wrong lol fixed



  • I cannot handle the fucking irony of that article being on Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that’s right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It’s all upside. Because they’re the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.



  • I have worked at two different startups where the boss explicitly didn’t want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that he only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don’t even get me started on people that the CEO wouldn’t have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

    It’s very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view labor laws as inconveniences standing in the way of their moral imperative to grow the startup.


  • This has been ramping up for years. The first time that I was asked to do “homework” for an interview was probably in 2014 or so. Since then, it’s gone from “make a quick prototype” to assignments that clearly take several full work days. The last time I job hunted, I’d politely accept the assignment and ask them whether $120/hr was an acceptable rate; if so, I could send over the contract and we could get started ASAP! If not, I’d refer them to my thousands upon thousands of lines of open source code.

    My experience with these interactions is not that they’re looking for the most qualified applicants, but that they’re filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. That’s the kind of employee they need to maintain their internal corporate identity as the good guys while tech goes from being universally beloved to generally reviled.



  • AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

    The media needs to stop falling for this. This is a “pre-print,” aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as it’s attention-grabbing.

    If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.


  • Whenever one of these stories comes up, there’s always a lot of discussion about whether these suits are reasonable or fair or whether it’s really legally the companies’ fault and so on. If that’s your inclination, I propose that you consider it from the other side: Big companies use every tool in their arsenal to get what they want, regardless of whether it’s right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it’s just a fight over who gets to wield the state’s monopoly of violence to coerce other people into doing what they want, and any notions of justice or fairness are window dressing. That’s how power actually works. It doesn’t care about good faith vs bad faith arguments, and we can’t limit ourselves to only using our institutions within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.