Can you expand on how you got blocked? First time I’ve heard of this.
Y tho?
Restic is awesome and has been rock solid for me for a few years now. Good choice.
This is the correct answer.
I run several containers that offer up http/s and they obviously can’t all use 80/443. Just adjust the left side (the host port) of that port mapping and you’re good.
That plus a reverse proxy for offering these services up over the public internet, if you choose to do so, is a killer pair.
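To sketch what I mean by adjusting the left side (service names, images, and host ports here are all made up for illustration):

```yaml
# docker-compose.yml -- ports are "host:container"; only the left side changes
services:
  blog:
    image: nginx
    ports:
      - "8080:80"   # http://host:8080 -> this container's port 80
  wiki:
    image: nginx
    ports:
      - "8081:80"   # same container port, different host port -- no conflict
```

A reverse proxy then sits on the real 80/443 and routes by hostname to those internal ports.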
Of course I can’t speak for him but I think he’d rather Reddit change course.
Apollo was polished. It can be good and look good at the same time.
I’m using PRAW/Python with app credentials I made just for me, and PRAW seems to have some good rate-limit logic built in.
I also tried Power Delete Suite which seemed to work very quickly and that caused me to worry that I was running afoul of rate limits. My own python script utilizing PRAW works much more slowly but IMO that’s a good thing.
I’m hoping that once I have a nice list of comment ids I can hit them all via my script/PRAW, however long it takes.
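The loop I have in mind looks roughly like this — a minimal sketch, assuming script-type app credentials; the function names and the overwrite text are my own placeholders, not anything official:

```python
import time

# Placeholder wording -- whatever you want left behind instead of the comment.
OVERWRITE = "[removed in protest of Reddit's 2023 API pricing changes]"


def overwrite_body(original: str) -> str:
    """Text a comment gets replaced with; ignores the original on purpose."""
    return OVERWRITE


def scrub_recent_comments(reddit, limit=1000):
    """Edit the newest `limit` comments of the authenticated user.

    `reddit` is an authenticated praw.Reddit instance. Reddit's listings
    only reach back ~1000 items, hence the default cap. Returns the
    number of comments edited.
    """
    count = 0
    for comment in reddit.user.me().comments.new(limit=limit):
        comment.edit(overwrite_body(comment.body))
        count += 1
        time.sleep(2)  # extra politeness on top of PRAW's built-in pacing
    return count
```

You’d feed it an authenticated `praw.Reddit(...)` instance built from your own client ID/secret; PRAW also waits on its own when Reddit’s rate-limit headers say to slow down, which is why the slow-and-steady approach works.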
I think people may need to wait. Here’s what I’ve read and seen myself so far:
You can only edit/delete so many comments, as it seems Reddit only indexes your last 1000. After editing/deleting everything you can, you can see you still have unedited/undeleted comments by searching your username like: site:reddit.com "usernameHere". I saw plenty of comments going back years (I have a 14yr old account) that I wasn’t able to touch.
The strategy seems to be to request your data from Reddit and then use the comment IDs contained in that export to target them for edits/deletes via the API, assuming it’s still usable for small scripts like the ones we want to use.
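In code, the targeting step could look something like this — a sketch only, since I’m assuming the export parses to a JSON list of objects with an `id` field (the real export’s shape and filename may differ):

```python
import json


def extract_comment_ids(export_data):
    """Pull base-36 comment IDs out of parsed export data.

    Assumes a list of objects that each carry an 'id' field -- adjust to
    whatever shape your real export actually uses.
    """
    return [item["id"] for item in export_data if "id" in item]


def to_fullnames(ids):
    """The API addresses comments by 'fullname': the 't1_' prefix plus the ID."""
    return ["t1_" + comment_id for comment_id in ids]


# Example usage with an authenticated praw.Reddit instance:
# reddit.info(fullnames=...) hands back Comment objects you can .edit()
# or .delete() even when they no longer show up in your profile listing.
#
#   with open("comments.json") as f:          # hypothetical filename
#       fullnames = to_fullnames(extract_comment_ids(json.load(f)))
#   for comment in reddit.info(fullnames=fullnames):
#       comment.edit("...")
```

The key point is that the export sidesteps the ~1000-item listing limit: you address each comment directly by ID instead of walking your profile.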
We’re tracking our requested/received dates in this thread if you’re interested in adding yours to the list.
Keeping the account around for now for the same reason.
I’ve been wondering if Reddit has been fucking with things ever since the API changes started. This is a data point indicating yes.
Good point.
Huh… I would have thought they’d use the API when available but I honestly know nothing about it. Wouldn’t gathering data via API provide more structured data thereby making it easier to feed into their models?
The ongoing strike, spurred by Huffman’s plan to charge fees to third-party apps that serve up Reddit content, was supposed to last for 48 hours.
Not just charge fees… Exorbitant fees. Outrageous fees.
If Huffman wanted to target these much higher costs at LLMs, Reddit could have instituted an approval process for 3PAs that charged them sane API fees while charging LLM scrapers much more. I’m no dev, but I think they could tell the two apart just by analyzing the API traffic.
But they aren’t doing that. Maybe LLMs were the primary target but they sure aren’t even trying to keep 3PAs around.
Submitted: 2023-06-18
Received: 2023-07-07
(I suggest others adding their data use ISO 8601 formatting)
Will that affect even small-time users like us who hardly ever use the API? That would kill things like the conversionbot, remindme, etc too.
I think I may be running into the index thing. I can easily find old, unedited comments of mine by using site:reddit.com "username".
My next move is to request my data from Reddit which, as I understand it, should contain a list of comments in .json. I then plan on iterating through those and using PRAW to edit all of my comments going back 14yrs. Then I’ll delete my account.
You may be right. Before this, I had no real reason to check that far back. Those fuckers.
Before this API fiasco I could see everything with no time limit.
This came to my attention recently via someone I follow on Mastodon. I haven’t set time aside yet to set it up and try it out but since I heard about ChatGPT, etc, I thought this would be an excellent use of the tech.