• 0 Posts
  • 1.2K Comments
Joined 2 years ago
Cake day: June 16th, 2023



  • Well, even with your observation, Windows could well be losing share to Mac and Linux. Windows users are more likely to jump ship, while Mac and Linux users tend to stick with their platform, mainly because it’s not actively working to piss them off. Even if zero Windows users jump to Mac or Linux, the share could still shift as they drop off the desktop entirely.

    The upside of ‘just a machine to run a browser’ is that it’s easier than ever to live with a Linux desktop, since that nagging application or two that kept you on Windows has likely moved to being browser-hosted anyway. The downside, of course, is that it’s much more likely that the app extracts a monthly fee from you instead of letting you ‘just buy it’.

    Currently for work I’m all Linux, precisely because work was forced to buy Office 365 anyway, and the web versions work almost as well as the desktop versions for my purposes (I did have to boot Windows because I had to work on a presentation and the weird-ass “master slide” needed to be edited, and for whatever reason that is not allowed on the web). VSCode natively supports Linux (well, ‘native’: it’s a browser app disguised as a desktop app), but I would generally prefer Kate anyway (except work is now tracking our GitHub Copilot usage, so I have to let Copilot throw suggestions at me to discard in VSCode or else get punished for failing to meet stupid objectives).


  • “Agentic” is the buzzword to distinguish “LLM will tell you how to do it” versus “LLM will just execute the commands it thinks are right”.

    Particularly if a process is GUI-driven, Agentic is seen as a theoretically more useful approach, since an LLM ‘how-to’ would still be tedious to walk through yourself.

    Given how often LLMs mispredict and don’t do what I want, I’m nowhere near the point where I’d trust “Agentic” approaches. Hypothetically, if it could be constrained to a domain where it can’t do anything that can’t trivially be undone, maybe, but given, for example, a recent VS Code issue where the “jail” placed around Agentic operations turned out to be ineffective, I don’t think much of such claimed mitigations.


  • My career is supporting business Linux users, and to be honest I can see why people might be reluctant to take on the Linux users.

    “Hey, we implemented a standard partition scheme that allocates almost all our space to /usr and /var; your installer using ‘/opt’ doesn’t give us room to work with” versus “Hey, your software went into /usr/local, but clearly the Linux filesystem standard is for such software to go into /opt”. The good news is that Linux is flexible and sometimes you can point out “you can bind mount /opt to wherever you want” (sketched below), but then some of them will counter “that sounds like too much of a hack, change it the way we want”. Now this example by itself is mostly simple enough: make this facet configurable. But rinse and repeat for just an insane number of possible choices.

    Another group at my company supports Linux, but only as a whole virtual machine provided by the company; the user doesn’t get to pick the distribution or even access bash on the thing, because they hate the concept of trying to support Linux users.
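
    For the curious, that bind mount workaround is basically a one-liner. Here’s a minimal sketch, in Python purely for illustration; /srv/bigdisk/opt is a hypothetical path, adjust to whatever filesystem actually has the space:

        import subprocess

        # Bind-mount a directory on a roomy filesystem over /opt, so installers
        # that hardcode /opt still land where the space actually is.
        subprocess.run(["mount", "--bind", "/srv/bigdisk/opt", "/opt"], check=True)

        # To make it survive reboots, the equivalent /etc/fstab entry would be:
        #   /srv/bigdisk/opt  /opt  none  bind  0  0

    Whether the customer accepts that as “not a hack” is, of course, another matter.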

    Extra challenge: supporting an open source project with the Linux community. “I rewrote your database backend to force all reads to be aligned at 16k boundaries because I made a RAID of 4k disks and think 16k alignment would work really well with my storage setup, but I ended up cramming up to 16k of garbage into some results, and I’m going to complain about the data corruption, and you won’t know about my modification until we screen share and you try to trace and see some seeks that don’t make sense”.



  • I think a key difference is that Firefox is an eternally evolving codebase that has to do new stuff frequently. It may have been painful, but it’s worth biting the bullet for the sake of the large volume of ongoing changes.

    For sudo/coreutils, I feel like those projects are more ‘settled’ and unlikely to need a lot of ongoing work, so the risk/benefit analysis cuts a different way.



  • It’s more like saying “why tear down that house and try to build one just like it in the same spot?”

    So the conversation goes:

    “when it was first built, it had asbestos and lead paint and all sorts of things we wouldn’t do today”

    “but all that was already fixed 20 years ago, there’s nothing about its construction that’s really known to be problematic anymore”

    “But maybe one day they’ll decide copper plumbing is bad for you, and boy it’ll be great that it was rebuilt with polybutylene plumbing!”

    Then after the house is rebuilt, it turns out that actually polybutylene was a problem, and copper was just fine.


  • If you had an ancient utility in assembly that did exactly what you wanted with no particular issues, then it would have been a dubious decision to rewrite it in C.

    Of course, the relative likelihood of assembly code actually continuing to function across the evolution of processor instruction sets is lower than that of C, so the scenario is a bit trickier to show with that example.

    However, there are two much more striking examples. COBOL continues to be used in a lot of applications because the COBOL implementations work; while it would be insane to choose COBOL for them now if they were starting today, it’s also insane to rewrite them and incur the risks when they work fine and will continue working.

    Similarly, in scientific computing there’s still a good share of Fortran code. Again, an insane choice for a new project, but if the implementation is good, it’s a stupid idea to rewrite.

    There’s not a lot of reason to criticize the technical merits of Rust here, nor even to criticize people for choosing Rust as the path forward on their project. However, the culture of ‘everything must be rewritten in Rust’ is worthy of criticism.


  • I think the criticism is more about deciding to re-implement a long-standing facility in Rust that has, by all accounts, been ‘finished’ for a long time.

    About the only argument for those sorts of projects is resistance to the sorts of bugs that can become security vulnerabilities, and this example highlights that rewrites in general (Rust or otherwise) carry a risk of introducing all-new security issues of their own, which should be weighed against the presumed risks of not bothering to rewrite in the first place.

    New projects, heavy feature development: OK, fine, use Rust to make that easier. Trying to start over just to get to the same place you already are needs a bit more careful consideration, especially if the codebase in question has been scrutinized to death, even after an earlier reputation for worrisome CVEs that have since all been addressed.



  • I would argue a rewrite of sudo in Rust is not necessarily a good thing.

    Sure, if you are starting from scratch, Rust is likely to prevent the mistakes that C would turn into vulnerabilities.

    But when you rewrite anything, there are all sorts of risks. For sudo and coreutils, I’m skeptical that there are enough unknown, unaddressed problems in the C codebases of such long-lived, extremely scrutinized projects to be worth the risks of a rewrite.

    A Rust rewrite may be indicated for projects that are less well scrutinized, whether because no one has bothered or because they aren’t that old anyway. It’s just that coreutils and sudo are, in my mind, the prime examples of a rewrite purely for the sake of a Rust rewrite being a bad idea.


  • Except he directly said just that.

    Generally I agree that often he’ll make some flub and a bigger deal is made of it than warranted. Like with the ‘Miracle Mile’ vs. ‘Magnificent Mile’ thing: he said the wrong thing, but that’s the least of the problems with that story, and it’s a fairly mundane and understandable mistake to make.

    This time the statement is exactly as said, though real world consequences for it are similarly low.


  • People’s laziness?

    Well yes, that is a huge one. I know people who, when faced with Google’s credible password suggestion, say “hell no, I could never remember that”, then proceed to use leet-speak, thinking computers can’t guess those because of years of ‘use a special character to make your password secure’ advice. People at work give their password to someone else to take care of something because everything else is a pain and the stakes are low to them. People get told their bank is using a new authentication provider and dutifully log into the cited ‘auth provider’, because this is the sort of thing that companies (though generally not banks) actually do to people.

    to an extent

    Exactly, it mitigates, but a gap remains. If they phish for your bank credential, you give them your real bank password. It’s unique, great, but the only thing the attacker wanted was that bank password anyway. If they phish a TOTP, they have to make sure they use it within a minute, but it can be used.
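
    For context on why the window is that short: a TOTP code is just an HMAC over the current 30-second time slot, derived from the shared secret. A rough sketch of the standard RFC 6238 math (standard library only; the secret below is a made-up example, not tied to any real account):

        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
            """Derive the current TOTP code from a base32 shared secret (RFC 6238)."""
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // interval             # which 30-second slot we are in
            digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                         # dynamic truncation per RFC 4226
            code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
            return str(code).zfill(digits)

        # A phished code is only good until the counter rolls over (plus whatever
        # clock-skew slack the server allows), so the attacker has to replay it
        # within a minute or so.
        print(totp("JBSWY3DPEHPK3PXP"))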

    actually destroys any additional security added by 2fa

    From the perspective of a user who knows they are using machine-generated passwords, yes, that setup is redundant. However, from the perspective of a service provider that has no way of enforcing good password hygiene, it at least gives the provider control over generating the secret. Sure, a ‘we pick the password for the user’ policy would get to the same end, but no one accepts that.

    But this proves that if you are fanatical about MFA, TOTP doesn’t guarantee it anyway, since the secret can be stuffed into a password manager. Passkeys have an ecosystem that more affirmatively tries to enforce those MFA principles, even if it is ultimately still within the user’s power to overcome them (you can restrict to certain vendors’ keys, but that’s not practical for most scenarios).

    My perspective is that MFA is overblown and mostly fixes some specific weaknesses:

    • “Thing you know” largely sucks as a factor: if a human can know it, then a machine can guess it, and on the service provider’s side there’s so much risk that such a factor can be guessed at a faster rate than you want, despite mitigations. Especially since you generally let a human select the factor in the first place. In physical security it helps mitigate the risk of a lost/stolen badge on a door by also requiring a paired code, but that’s a context where the building operator can reasonably audit attempts at the secret, which is generally not the case for online services. So broadly speaking, the additional factor is just trying to mitigate the crappy nature of “thing you know”.
    • “Thing you have” used to be easier to lose track of or get cloned. A magstripe badge gets run through a skimmer, and that gets replicated. A single-purpose security card gets lost and you don’t think about it because you don’t need it for anything else. The “thing you have” nowadays is likely to lock itself and require local unlocking, essentially being the ‘second factor’ enforced client-side. Passkey implementations generally require just that: a locally managed ‘second factor’.

    So broadly, ‘2FA is important’ is mostly ‘passwords are bad’, and to the extent it is important, passkeys are more likely to enforce it than other approaches anyway.


  • OK, I’ll concede that Chrome makes Google a relatively more popular password manager than I’d considered, and it tries to steer users toward generated passwords that are credible. Further, by being browser-integrated, it mitigates some phishing by declining to autofill when the DNS or TLS situation is inconsistent. However, I definitely see people discard the suggestions and choose a word, thinking ‘leet-speak’ makes it hard to guess (“I could never remember that, I need to pick something I remember”). Using it for passwords still means the weak point is human behavior (in selecting the password, in choosing whether to reuse it, and in divulging it to a phishing attempt).

    If you accept Google’s password manager as a good solution, note that it also handles passkeys. That removes the ‘human can divulge the fundamental secret that can be reused’ problem while taking full advantage of the password manager’s convenience.
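
    To illustrate why there’s no reusable secret to divulge: the core of a passkey login is a signature over a fresh challenge, with the private key never leaving the device. This is just the cryptographic idea, not the actual WebAuthn API; the sketch uses the Python `cryptography` package and all names here are illustrative:

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # Registration: the device generates a key pair; only the public half
        # is handed to the service.
        device_key = ec.generate_private_key(ec.SECP256R1())
        registered_public_key = device_key.public_key()

        # Login: the service sends a random challenge and the device signs it.
        challenge = os.urandom(32)
        signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

        # The service checks the signature; this raises InvalidSignature on mismatch.
        registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))

        # Even if a phishing page captured this exchange, the signature is only
        # valid for this one challenge; the actual secret never left the device.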


  • Password managers are a workaround, and broadly speaking the general system is still weak because password managers have relatively low adoption and plenty of people are walking around with poorly managed credentials. They also don’t do anything to mitigate a phishing attack; should the user get fooled, they will leak a password they care about.

    2FA is broad, but I’m wagering you specifically mean TOTP: numbers that change based on a shared secret. Problems there are:

    • Transcribing the code is a pain.
    • Password managers mitigate that, but the most common ‘default’ password managers (e.g. the ones built into the browser) do nothing for them.
    • Still susceptible to phishing, albeit on a shorter time scale.

    Pub/priv key based tech is the right approach, but passkeys do wrap it up with some obnoxious stuff.


  • Passkeys are a technology that were surpassed 10 years before their introduction

    Question is, surpassed by what? I could see an argument that it’s an overcomplication of some ill-defined application of X.509 certificates or SSH user keys, but roughly they are all comparable fundamental technologies.

    The biggest gripe to me is that they are too fussy about when they are allowed and how they are stored, rather than leaving it up to the user. You want to use a passkey with a site that you manually trusted? Tough, not allowed. You want to use one against an IP address, even if that IP address has a valid certificate? Tough, not allowed.


  • Again, they should have called the police with jurisdiction if that were the case. They should have, at most, detained him on scene until the cops showed up.

    So far I’ve seen:

    • They drove into a car and then violently arrested the driver because “she rammed their vehicle”, despite footage clearly showing they drove into hers. They didn’t want to get in trouble for causing an accident, so they just made stuff up.
    • Even in the sandwich “attack” they asserted that the sandwich contents covered their vest, but footage showed it stayed in the wrapper the whole time.

    They are clearly cultivating a culture of making stuff up to blame the people they get mad at. They have zero credibility.



  • I heard a report where they went to a charity food pantry in deep Trump territory to get people’s perspective on the whole situation with benefits being stopped.

    One woman talked about how it was a good thing for SNAP and everything like it to go away; people need to take care of themselves. Immediately recognizing that it was an odd thing for her to say, since she was there to get food from the charity, she clarified, “I take care of myself and don’t need a handout, I’m just here because I might like some of the food for myself”.

    This woman didn’t want people thinking of her as in need, and thought it sounded better if she was taking food away from poorer people…

    This is how Trump still has something like 40% of the population approving of him.


  • Yeah, but can they handle the collapse of going back to the company they were before the AI boom? They’ve increased in market cap 5000% and attracted a lot of stakeholders who never would have bothered with Nvidia if not for the LLM boom. If LLM pops, will Nvidia survive with a new set of stakeholders that didn’t sign up for a ‘mere graphics company’?

    They’ve reshaped their entire product strategy to be LLM-focused. Who knows what the demand for their current products is without the LLM bump. Discrete GPUs were becoming increasingly niche anyway, since ‘good enough’ integrated GPUs were denting their market.

    They could survive a pop, but they may not have the right backers to do so anymore…