Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 145 Posts
  • 1.72K Comments
Joined 3 years ago
Cake day: June 6th, 2023


  • Thanks!

    Broadly, there are three steps involved:

    1. Set up Nepenthes to receive the traffic
    2. Perform bot detection on inbound requests (I use a regex list and one is provided below)
    3. Configure traffic rules in your load balancer / reverse proxy to send the detected bot traffic to Nepenthes instead of the actual backend for the service(s) you run.

    Here’s a rough guide I commented a while back: https://dubvee.org/comment/5198738

    Here’s the post link at lemmy.world which should have that comment visible: https://lemmy.world/post/40374746

    You’ll have to resolve my comment link on your instance since my instance is set to private now, but in case that doesn’t work, here’s the text of it:

    So, I set this up recently and agree with all of your points about the actual integration being glossed over.

    I already had bot detection setup in my Nginx config, so adding Nepenthes was just changing the behavior of that. Previously, I had just returned either 404 or 444 to those requests but now it redirects them to Nepenthes.

    Rather than trying to do rewrites and pretend the Nepenthes content is under my app’s URL namespace, I just do a redirect which the bot crawlers tend to follow just fine.

    There are several parts to this to keep my config sane; each of them lives in its own include file.

    • An include file that looks at the user agent, compares it to a list of bot UA regexes, and sets a variable to either 0 or 1. By itself, that include file doesn’t do anything more than set that variable. This allows me to have it as a global config without having it apply to every virtual host.

    • An include file that performs the action if a variable is set to true. This has to be included in the server portion of each virtual host where I want the bot traffic to go to Nepenthes. If this isn’t included in a virtual host’s server block, then bot traffic is allowed.

    • A virtual host where the Nepenthes content is presented. I run a subdomain (content.mydomain.xyz). You could also do this as a path off of your protected domain, but this works for me and keeps my already complex config from getting any worse. Plus, it was easier to integrate into my existing bot config. Had I not already had that, I would have run it off of a path (and may go back and do that when I have time to mess with it again).

    The map-bot-user-agents.conf is included in the http section of Nginx and applies to all virtual hosts. You can either include this in the main nginx.conf or at the top (above the server section) in your individual virtual host config file(s).

    The deny-disallowed.conf is included individually in each virtual host’s server section. Even though the bot detection is global, if the virtual host’s server section does not include the action file, then nothing is done.
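
    To make that wiring concrete, here’s roughly how the two includes fit together. The file paths below are just examples; adjust them to wherever you keep your config snippets.

    # nginx.conf (http context) - the bot-detection map is defined globally
    http {
        include /etc/nginx/conf.d/map-bot-user-agents.conf;
        # ... rest of your http config ...
    }

    # In each virtual host you want protected (server context)
    server {
        server_name myapp.mydomain.xyz;
        include /etc/nginx/snippets/deny-disallowed.conf;
        # ... locations / proxy_pass for the actual app ...
    }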

    Files

    map-bot-user-agents.conf

    Note that I’m treating Google’s crawler the same as an AI bot because…well, it is. They’re abusing their search position by double-dipping on the crawler so you can’t opt out of being crawled for AI training without also preventing it from crawling you for search engine indexing. Depending on your needs, you may need to comment that out. I’ve also commented out the Python requests user agent. And forgive the mess at the bottom of the file. I inherited the seed list of user agents and haven’t cleaned up that massive regex one-liner.

    # Map bot user agents
    ## Sets the $ua_disallowed variable to 0 or 1 depending on the user agent. Non-bot UAs are 0, bots are 1
    
    map $http_user_agent $ua_disallowed {
        default 		0;
        "~PerplexityBot"	1;
        "~PetalBot"		1;
        "~applebot"		1;
        "~compatible; zot"	1;
        "~Meta"		1;
        "~SurdotlyBot"	1;
        "~zgrab"		1;
        "~OAI-SearchBot"	1;
        "~Protopage"	1;
        "~Google-Test"	1;
        "~BacklinksExtendedBot" 1;
        "~microsoft-for-startups" 1;
        "~CCBot"		1;
        "~ClaudeBot"	1;
        "~VelenPublicWebCrawler"	1;
        "~WellKnownBot"	1;
        #"~python-requests"	1;
        "~bitdiscovery"	1;
        "~bingbot"		1;
        "~SemrushBot" 	1;
        "~Bytespider" 	1;
        "~AhrefsBot" 	1;
        "~AwarioBot"	1;
    #    "~Poduptime" 	1;
        "~GPTBot" 		1;
        "~DotBot"	 	1;
        "~ImagesiftBot"	1;
        "~Amazonbot"	1;
        "~GuzzleHttp" 	1;
        "~DataForSeoBot" 	1;
        "~StractBot"	1;
        "~Googlebot"	1;
        "~Barkrowler"	1;
        "~SeznamBot"	1;
        "~FriendlyCrawler"	1;
        "~facebookexternalhit" 1;
        "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|
    ^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfl
    y|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTT
    rack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|
    ^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErs
    PRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^Qu
    eryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanne
    r|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^Void
    EYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^
    WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;
    
    }
    
    
    deny-disallowed.conf

    # Deny disallowed user agents
    if ($ua_disallowed) { 
        # This redirects them to the Nepenthes domain. So far, pretty much all the bot crawlers have been happy to accept the redirect and crawl the tarpit continuously 
    	return 301 https://content.mydomain.xyz/;
    }
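
    For completeness, here’s roughly what the Nepenthes virtual host itself can look like. This is a minimal sketch rather than my exact config: the TLS paths and the upstream address/port are placeholders, so point proxy_pass at wherever your Nepenthes instance is actually listening.

    # content.mydomain.xyz - serves the Nepenthes tarpit
    server {
        listen 443 ssl;
        server_name content.mydomain.xyz;

        # Placeholder certificate paths
        ssl_certificate     /etc/nginx/tls/content.mydomain.xyz.crt;
        ssl_certificate_key /etc/nginx/tls/content.mydomain.xyz.key;

        location / {
            # Placeholder upstream; adjust to the host/port Nepenthes is configured to listen on
            proxy_pass http://127.0.0.1:8893;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    If you’d rather run it off a path on the protected domain instead of a subdomain, the same location/proxy_pass block can live under something like location /nepenthes/ in that domain’s server block.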
    


  • Most of the requirements are going to be for the database, and that depends on:

    1. How many active users you expect
    2. How many large rooms you or your users join

    I left many of the large Matrix spaces I was in, and mine is now mostly just 1:1 chats or a group chat with a handful of friends. Given that low-usage case, I can run my server on a Pi 3 with 4 GB of RAM quite comfortably. I don’t do that in practice, but I do have that setup as a backup server - it periodically syncs the database from my main server - and works fine. The bottleneck there, really, is the SD card storage since I didn’t want an external SSD hanging off of it.
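
    For reference, the periodic database sync on the backup box can be as simple as a cron-driven script along these lines. This is only a sketch: it assumes Synapse backed by Postgres, and the hostname, database name, and paths are placeholders.

    #!/bin/sh
    # Pull a fresh dump of the homeserver database from the primary and restore it locally.
    # "matrix-main", the "synapse" database/user, and the paths are placeholders.
    set -eu
    ssh matrix-main "pg_dump -U synapse -Fc synapse" > /var/backups/synapse.dump
    pg_restore -U synapse --clean --if-exists -d synapse /var/backups/synapse.dump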

    Even when I was active in several large Matrix spaces/rooms, a USFF Optiplex with a quad core i5, 8 GB of RAM, and a 500GB SSD was more than enough to run it comfortably alongside some other services like LibreTranslate.




  • Ugh. Thanks. It’s quite possible, though maybe just a regional one? I did inadvertently block one of the IPs Let’s Encrypt uses for secondary validation, so this may be another case of that.

    I get a shitload of bad traffic from the southeast Asia area (mostly Philippines/Singapore AWS) and have taken to blanket blocking their whole routes rather than constantly playing whack-a-mole. Fail2ban only goes so far when you’re handling things case by case.
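
    For what it’s worth, the blanket blocking doesn’t need anything fancy; an nftables set of CIDR ranges does the job. The ranges below are documentation placeholders, not the routes I actually block.

    #!/bin/sh
    # Drop traffic from entire problem routes instead of chasing individual IPs.
    # The CIDRs are placeholders; substitute the ranges you actually want to block.
    nft add table inet blocklist
    nft add set inet blocklist badnets '{ type ipv4_addr; flags interval; }'
    nft add element inet blocklist badnets '{ 203.0.113.0/24, 198.51.100.0/24 }'
    nft add chain inet blocklist input '{ type filter hook input priority -10; policy accept; }'
    nft add rule inet blocklist input ip saddr @badnets drop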

    Here’s the meme image from an alternate source:




  • Atkinson Hyperlegible is my new jam. I’m dyslexic and it helps tremendously even though that’s not its primary goal. It also looks a lot better than OpenDyslexic which I used to use.

    I’ve loaded “Hyperlegible” onto my Kobo and the reader app on my phone, and set it as the default font in my desktop environment.

    Also added it as an option in Tesseract UI (which I swear I’ll be releasing “soon”).



  • Basically, the only things you want to present with a challenge are the paths/virtual hosts for the web frontends.

    Anything under /api/v3/ is the client-to-server API (i.e. how your clients talk to your instance) and needs to be obstruction-free. Otherwise, clients/apps won’t be able to use the API. Same for /pictrs since that proxies through Lemmy and is a de-facto API endpoint (even though it’s a separate component).

    Federation traffic also needs to be exempt, but it’s matched not by route but by the HTTP Accept request header and request method.

    Looking at the Nginx proxy config, there’s this mapping which tells Nginx how to route inbound requests:

    nginx_internal.conf: https://raw.githubusercontent.com/LemmyNet/lemmy-ansible/main/templates/nginx_internal.conf

        map "$request_method:$http_accept" $proxpass {
            # If no explicit matches exists below, send traffic to lemmy-ui
            default "http://lemmy-ui:1234/";
    
            # GET/HEAD requests that accepts ActivityPub or Linked Data JSON should go to lemmy.
            #
            # These requests are used by Mastodon and other fediverse instances to look up profile information,
            # discover site information and so on.
            "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy:8536/";
    
            # All non-GET/HEAD requests should go to lemmy
            #
            # Rather than calling out POST, PUT, DELETE, PATCH, CONNECT and all the verbs manually
            # we simply negate the GET|HEAD pattern from above and accept all possibly $http_accept values
            "~^(?!(GET|HEAD)).*:" "http://lemmy:8536/";
    



  • Admiral Patrick@dubvee.org to Lemmy Shitpost@lemmy.world: “Know when to stop”

    It’s not really about the karma-farming aspect of things or pleasing a crowd. It’s about, in general, making a statement that’s otherwise agreeable and then, often pointlessly, following it up with something backhanded, needlessly obnoxious, mean spirited, racist/xenophobic, or otherwise “not good”. You know, basic tactfulness.

    A fairly tame example:

    "It was such a fun evening!  I really hope we can do it again sometime!  Even though your house smelled like cat piss".



  • I also run (well, ran) a local registry. It ended up being more trouble than it was worth.

    “Would you have to docker load them all when rebuilding a host?”

    Only if you want to ensure you bring the replacement stack back up with the exact same version of everything or need to bring it up while you’re offline. I’m bad about using the :latest tag so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved it to another server and it pulled a newer image that had breaking config changes.

    For me, it’s about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you are trying to overcome. For example, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That’s no longer a limitation, but I like the way this works, so I’ve stuck with it.

    Everything a service (or stack of services) needs is all in my deploy directory which looks like this:

    /apps/{app_name}/
        docker-compose.yml
        .env
        build/
            Dockerfile
            {build assets}
        data/
            {app_name}
            {app2_name}  # If there are multiple applications in the stack
            ...
        conf/                   # If separate from the app data
            {app_name}
            {app2_name}
            ...
        images/
            {app_name}-{tag}-{arch}.tar.gz
            {app2_name}-{tag}-{arch}.tar.gz
    

    When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of its services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I do differently are ones with in-stack databases that need a consistent snapshot.
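
    As a rough sketch (the hostname and paths are placeholders), that backup boils down to:

    #!/bin/sh
    # Stream a compressed archive of the whole app directory to the backup host over SSH.
    # "backup-host" and the paths are placeholders for this example.
    APP=myapp
    tar -C /apps -czf - "$APP" | ssh backup-host "cat > /backups/${APP}-$(date +%F).tar.gz"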

    When I pull new images to update the stack, I move the old images and docker save the now current ones. The old images get deleted after the update is considered successful (so usually within 3-5 days).
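
    Roughly, the update flow looks like this (again a sketch with placeholder names):

    #!/bin/sh
    # Update a stack but keep the previous image dumps until the update has proven stable.
    set -eu
    cd /apps/myapp                       # placeholder path
    mkdir -p images/old
    mv images/*.tar.gz images/old/ 2>/dev/null || true
    docker compose pull
    docker compose up -d
    # Dump the image(s) the stack is now running; the name/tag/arch here are examples
    docker save myapp:1.2.3 | gzip -9 > images/myapp-1.2.3-amd64.tar.gz
    # Once the update is considered successful (a few days), clean up images/old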

    A local registry would work, but you would have to re-tag all of the pre-made images to your registry (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.

    Plus, you’d need the registry server and any infrastructure it needs (DNS, file server, reverse proxy, etc.) before you could bootstrap anything else. Or, if you’re deploying your stack to a different environment outside your own, your registry server might not be available.

    Bottom line is I am a big fan of using Docker to make my complex stacks easy to port around, backup, and restore. There’s many ways to do that, but this is what works best for me.




  • Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.

    I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.

    • Backup: docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
    • Restore: docker load < ./images/{image}-{tag}-{arch}.tar.gz

    It will back up/restore with the image and tag used during the save step. The load step will accept a gzipped tar, so you don’t even need to decompress it first. My older stuff doesn’t have the architecture in the filename, but I’ve started adding that lately now that I have a mix of amd64 and arm64.
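
    If you want to script that naming scheme, something like the following works. It’s a sketch that assumes Docker Compose v2 (for docker compose config --images), explicitly tagged images, and pulls the architecture from the image metadata.

    #!/bin/sh
    # Dump every image used by the current compose stack as {name}-{tag}-{arch}.tar.gz
    set -eu
    mkdir -p images
    for img in $(docker compose config --images); do
        arch=$(docker image inspect --format '{{.Architecture}}' "$img")
        name=$(basename "${img%%:*}")    # strip any registry/namespace prefix
        tag="${img##*:}"
        docker save "$img" | gzip -9 > "images/${name}-${tag}-${arch}.tar.gz"
    done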