Federated services have always had privacy issues, and I expected Lemmy to have the fewest, but it’s visibly worse for privacy than even Reddit.
- Deleted comments remain on the server, just hidden from non-admins, and the username stays visible (sketched in the code after this list)
- The usernames of deleted accounts remain visible too
- Everything remains visible on federated servers!
- When you delete your account, media does not get deleted on any server
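
To make the first point concrete, here’s a minimal sketch of what a “soft delete” looks like (hypothetical Python, not Lemmy’s actual code): the row never goes away; a flag just hides it from regular users.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    body: str
    deleted: bool = False

def delete_comment(comment: Comment) -> None:
    """'Delete' a comment: the content and author stay on the server."""
    comment.deleted = True

def render(comment: Comment, viewer_is_admin: bool) -> str:
    # Non-admins get a placeholder (with the username still attached);
    # admins keep seeing the full original text.
    if comment.deleted and not viewer_is_admin:
        return f"{comment.author}: [deleted]"
    return f"{comment.author}: {comment.body}"
```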
I’m at a loss. Are you saying that things you said publicly are private? Or that they become private because you delete your account? Suppose you dox someone; I need to be able to find out that it happened, and as an admin I’d be able to see that.
I would also need to be able to provide this to the authorities if they produced the required legal documentation. Why do you think privacy dictates that you should be able to commit a crime and get away with it by deleting your account?
I don’t think there is a legal requirement that you store that data, just that you make the data you do store available or, in some situations, add logging for valid law enforcement requests.
Apple, for example, does not to my knowledge have access to end-to-end encrypted iCloud data. They wouldn’t necessarily be able to provide the contents of my Notes app to law enforcement, and that is currently legal.
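
For illustration, a minimal sketch of what “end-to-end encrypted” means for a provider (using Python’s third-party cryptography package, not Apple’s actual implementation): the key stays on the client, so the server only ever holds ciphertext and has no plaintext to hand over.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept on the user's device
client = Fernet(key)

ciphertext = client.encrypt(b"my private note")

# This is everything the provider stores, and therefore everything it
# could produce in response to a legal request.
server_copy = ciphertext

# Only the key holder can recover the plaintext.
assert Fernet(key).decrypt(server_copy) == b"my private note"
```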
I’m basing what I have said on work I’ve done with attorneys in similar situations. I don’t know evidentiary law, but I wouldn’t want to be accused of destroying evidence of something. My question stands, though: why should someone who has doxed someone get away with it by deleting their account? How is that ethical?
So the key thing here is, “are you aware that the data is part of a legal proceeding or crime?”
If not, deleting it as part of normal operations is perfectly legal. There are plenty of VPNs that do not log user information and will gladly produce for the authorities all of the logs they retain (i.e. an empty log file).
From an ethical standpoint, keeping people’s data that they want removed, against their wishes, on the hypothetical that at some point someone might do something wrong, is by far the less ethical route.
“You might do something bad, so I’m going to keep all your data whether you like it or not!” <- the bad thing
It’s cute how you think I’m going to take legal advice from you. You do you, have a nice evening.
Apple (and Google, Microsoft, etc.) check the signatures of all files on their services to detect illegal content. They do it for copyrighted material and they do it for CSAM.
Checking against a known-malicious hash is very different from claiming to have access to the plain data. In fact, even for the known-malicious hashes, the companies doing the checks usually don’t have access to the source material, i.e. they don’t even necessarily know what it contains.
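
A minimal sketch of that kind of hash matching (the hash list here is made up, and real systems like PhotoDNA use perceptual hashes rather than MD5, but the principle is the same): the operator compares digests against an opaque blocklist and never needs the source material.

```python
import hashlib

# Digests supplied by a third party; whoever runs this check never sees
# the files these hashes were derived from.
KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # happens to be md5(b"hello")
}

def is_flagged(file_bytes: bytes) -> bool:
    digest = hashlib.md5(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A match only says the file is on the list; it reveals nothing about
# what the listed source material actually contains.
print(is_flagged(b"hello"))          # True
print(is_flagged(b"anything else"))  # False
```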
Wouldn’t Mastodon have the same legal requirements?