Sorry for the long post. tl;dr: I’ve already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?


I’ve got some storage VPSes “in the cloud”:

  • 10TB disk / 2GB RAM with HostHatch in LA
  • 100GB NVMe / 16GB RAM with HostHatch in LA
  • 3.5TB disk / 2GB RAM with Servarica in Canada

The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one holds a backup of the most important files from the 10TB one.

The issue I have with the VPSes is that since they’re shared servers, there are limits on how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to a maximum of 50% of one CPU, but limiting something like PhotoStructure the same way would make it way slower.
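
For reference, a cap like that 50% limit is just a CPU quota under the hood. Here is a rough sketch of the idea using the Docker SDK for Python, assuming Plex runs in a container named "plex" (the container name and numbers are placeholders, not necessarily my actual setup):

    # Hypothetical sketch: cap an existing "plex" container at 50% of one core.
    import docker

    client = docker.from_env()
    plex = client.containers.get("plex")

    # cpu_period / cpu_quota are microseconds per scheduler period:
    # a quota of 50,000 out of a 100,000 period = 50% of a single CPU.
    plex.update(cpu_period=100_000, cpu_quota=50_000)

(A systemd CPUQuota= setting would achieve the same thing for a non-containerized install.)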

I’ve had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change… Now I’ve got a house with space for home servers, solar panels so running a server is “free”, and 10Gbps symmetric internet thanks to a local ISP, Sonic.

Currently, at home I’ve got one server: an HP ProDesk SFF PC with a Core i5-9500, 32GB RAM, a 1TB NVMe drive, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts under its regular load: 3 VMs, ~12% CPU usage, and a constant ~34Mbps of traffic from the security cameras, all being written to disk.

So, I want to move a lot of these files from the 10TB VPS into my house. 10TB of usable space is a good amount for me, maybe in RAID5 or whatever is recommended instead these days. I’d keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two.
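
To sanity-check the drive math, here’s a quick usable-capacity comparison; the 4x 4TB example is purely illustrative, not a build I’ve settled on:

    # Rough usable-capacity math for a hypothetical 4 x 4TB array.
    drives, size_tb = 4, 4

    layouts = {
        "RAID5 / raidz1 (1 drive of parity)": (drives - 1) * size_tb,
        "RAID6 / raidz2 (2 drives of parity)": (drives - 2) * size_tb,
        "Striped mirrors / RAID10": drives * size_tb / 2,
    }
    for name, usable_tb in layouts.items():
        print(f"{name}: ~{usable_tb:.0f}TB usable out of {drives * size_tb}TB raw")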

Trying to work out the best approach:

  1. Buy a NAS. Something like a QNAP TS-464 or Synology DS923+. Ideally 10GbE since my network and internet connection are both 10Gbps.
  2. Replace my current server with a bigger one. I’m happy with my current one; all I really need is something with more hard drive bays. The SFF PC only has a single drive bay, its motherboard only has a single 6Gbps SATA port, and the only PCIe slots are taken by a 10Gbps network adapter and a Google Coral TPU.
  3. Build a NAS PC and use it alongside my current server. TrueNAS seems interesting now that they have a Linux version (TrueNAS Scale). Unraid looks nice too.

Any thoughts? I’m leaning towards option 2 since it’ll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.

  • Greyscale@lemmy.sdf.org · 1 year ago

    Don’t buy a Synology. For less money you can build a better system. I use a cheap ITX board, a used 6600K, a Silverstone DS380, 8x 4TB disks of spinning rust, and a 256GB NVMe drive as the current iteration of my NAS. It’s basically silent, and runs Ubuntu + ZFS + shit in containers. It’s excellent.

    I am, however, considering 10G Ethernet cards for it and my desktop and just doing point-to-point. Not that 1G is too slow for my needs, but because it’d be fun.

    • dan (OP) · 1 year ago

      Thanks for the input. Would you recommend having a separate NAS system, or replacing my current server with it?

      • Greyscale@lemmy.sdf.org · 1 year ago

        I’d consolidate and let it pay for itself in electricity savings over the longer term.

        My single NAS runs everything I could ever want, though I regret not finding a used 6700K after finding out the 6600K doesn’t have hyper-threading.

        Also, I run Frigate on it inside a container and use a Google Coral accelerator to do people detection on 4x 2K camera streams. It’s pretty swish, though it took some fiddling to get the kernel to be groovy with it and to do container device passthrough from PCIe.
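
        The container/device plumbing boils down to passing the Coral’s device node through. A rough sketch with the Docker SDK for Python (/dev/apex_0 is the node the gasket/apex driver creates for a PCIe Coral; the config path and port mapping are placeholders, not my exact setup):

            # Hypothetical sketch: run Frigate with a PCIe Coral passed through.
            import docker

            client = docker.from_env()
            client.containers.run(
                "ghcr.io/blakeblackshear/frigate:stable",
                name="frigate",
                detach=True,
                devices=["/dev/apex_0:/dev/apex_0"],   # the Coral TPU
                volumes={"/opt/frigate/config": {"bind": "/config", "mode": "rw"}},
                ports={"5000/tcp": 5000},              # Frigate web UI
                restart_policy={"Name": "unless-stopped"},
            )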

        In total, my single NAS runs the following in containers:

        • Personal projects
        • Home Assistant
        • MQTT for Tasmota
        • Game servers
        • Deluge for yarr harr fiddly dee
        • Frigate NVR

        The whole shebang (NAS with permanently spinning rust, UPS, ISP modem, and Ubiquiti Dream Machine) draws ~100W.

        Edit: I’ve noticed ZFS is twitchier than most about disks failing. It fails a disk about once or twice a year, and disks are getting cheaper every year anyway. Most of the time the disk still works as far as SMART is concerned, but I’m not gonna question the ZFS gods.

        • dan (OP) · 1 year ago

          Are you running something like Unraid or TrueNAS, or are you just running a ‘regular’ Linux distro?

          Also, I run Frigate on it inside a container and use a Google Coral accelerator to do people detection on 4x 2K camera streams.

          I’m doing something similar, except using Blue Iris and CodeProject.AI instead of Frigate. Works pretty well! CodeProject.AI just added Coral support recently.

          The whole shebang (NAS with permanently spinning rust, UPS, ISP modem, and Ubiquiti Dream Machine) draws ~100W.

          How much power does just the NAS use?

          • Greyscale@lemmy.sdf.org · 1 year ago

            How much power does just the NAS use?

            The NAS is the bulk of that 100W.

            Are you running something like Unraid or TrueNAS, or are you just running a ‘regular’ Linux distro?

            Ubuntu + ZFS. I don’t see the appeal of running a non-mainline distribution. All I did was set it up so ZFS emails me, plus a crontab entry to run a ZFS scrub weekly.
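
            The email side is typically handled by ZED (the ZFS event daemon), and the weekly job can be a small script that cron runs. A rough sketch of such a script; the pool name, addresses, and local MTA are placeholders:

                # Hypothetical weekly cron job: start a scrub and mail the current
                # pool status. ZED handles the actual failure/event emails.
                import smtplib
                import subprocess
                from email.message import EmailMessage

                POOL = "tank"  # placeholder pool name

                subprocess.run(["zpool", "scrub", POOL], check=True)  # scrub runs in the background
                status = subprocess.run(
                    ["zpool", "status", POOL], capture_output=True, text=True, check=True
                ).stdout

                msg = EmailMessage()
                msg["Subject"] = f"Weekly scrub started: zpool status for {POOL}"
                msg["From"] = "nas@example.com"
                msg["To"] = "me@example.com"
                msg.set_content(status)

                with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is listening
                    smtp.send_message(msg)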

      • DontTakeMySky@lemmy.world · 1 year ago

        Personally I like to keep my data on a separate system because it helps me keep it stable and secure compared to my more “fun” servers.

        That said, being able to run compute on the same server as storage removes a bit of hassle.

    • dan (OP) · 1 year ago

      Do you use ECC RAM? The Synology comes with ECC RAM, whereas it’s hard to find consumer motherboards that support ECC :/

      • Greyscale@lemmy.sdf.org · 1 year ago

        Another reason to avoid a Synology. I had an HP MicroServer Gen8 that I ditched due to its CPU constraints and ECC RAM; I just got 32GB of cheap DDR4 instead.

      • TheHolm@aussie.zone · 1 year ago

        I’ve used QNAPs for literally decades; I’m now on my third one. I love that they support their devices for a long time, but while their software keeps gaining features, the quality IMHO is going down. I would build a NAS myself now rather than buy another QNAP. Not having an ECC RAM option is also disappointing, but that’s probably OK for home usage.

      • Greyscale@lemmy.sdf.org · 1 year ago

        Can you just put stock Ubuntu on it? Is the CPU worth a damn?

        If it can’t do either of those, it is manufactured e-waste, IMO.

    • thomcat@midwest.social · 1 year ago

      What’s your power utilization with the 6600K? I have a spare one of those lying around and would convert my Ryzen 3950X AIO to just a server plus a 6600K NAS, if it isn’t costing you too much power.

      • Greyscale@lemmy.sdf.org · 1 year ago

        It and some other network appliance bits draw ~100W continuously.

        I think a good chunk of that is the disks, but I could be wrong.

  • Decronym@lemmy.decronym.xyz (bot) · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    MQTT            Message Queue Telemetry Transport point-to-point networking
    NAS             Network-Attached Storage
    NVR             Network Video Recorder (generally for CCTV)
    PCIe            Peripheral Component Interconnect Express
    SATA            Serial AT Attachment interface for mass storage

    5 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

    [Thread #39 for this sub, first seen 14th Aug 2023, 00:15]

  • Brkdncr@kbin.social · 1 year ago

    I manage storage systems as part of my day job. I think you would be happy with a simple direct-attached storage device. You’d need a storage controller card and an external drive enclosure. These are usually enterprise-grade items, so they might be expensive. I suspect there are SATA options, but SATA is pretty slow.

    QNAP and Synology are decent for what they offer. If you like the idea of turning it on, setting up an account, and then having access to both native apps and an easy third-party app store with no fiddling needed, they’re a good choice. You can also set up an iSCSI connection for direct-attached-style storage over the network.

    • dan (OP) · 1 year ago

      What do you think about Direct Attached Storage devices that use USB? My small form factor PC has 10Gbps USB 3.1 Gen 2 ports, which should in theory be enough bandwidth for 3 or 4 disks.
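
      Back-of-the-envelope, the bandwidth does seem to work out; the throughput numbers below are rough assumptions, not measurements:

          # Rough sanity check on USB 3.1 Gen 2 bandwidth vs. spinning disks.
          link_gbps = 10                              # USB 3.1 Gen 2 signalling rate
          link_mb_s = link_gbps * 1000 / 8 * 0.97     # ~3% 128b/132b encoding overhead
          hdd_mb_s = 260                              # optimistic sequential speed of one 3.5" HDD

          print(f"Shared link: ~{link_mb_s:.0f} MB/s; one HDD: ~{hdd_mb_s} MB/s")
          print(f"Disks before saturating the link (sequential): ~{link_mb_s / hdd_mb_s:.1f}")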

      I don’t know if I trust USB for this though, and enterprise-grade equipment is probably too expensive for me…

      • Greyscale@lemmy.sdf.org · 1 year ago

        Relatedly, I used to use a 4-bay USB 3 caddy for some disks… It was OK, but it didn’t expose the raw disks, and the controller was pretty fucky, swallowing things like SMART data.

  • You999@sh.itjust.works · 1 year ago

    I’d say replace your current server with a larger one that can pull double duty as a NAS and VM host. There are plenty of SFF NAS cases on the market now.

    • dan (OP) · 1 year ago

      Do you have some examples of good SFF NAS cases? I need two PCIe slots (one for 10 Gigabit Ethernet and one for a Google Coral TPU), whereas a lot of the SFF cases only have one. I might need to just use a larger case instead. I do like the smaller cases, but it’d be on a shelf in a closet, so the size doesn’t matter too much.

      • You999@sh.itjust.works · 1 year ago

        The classic choice would be Fractal Design’s Node 304, which fits six 3.5" drives in an ITX form factor. There’s also SilverStone’s CS381, which, while larger, can fit eight hot-swappable 3.5" drives and a micro-ATX motherboard.

        Even if you go ITX, you don’t have to feel limited by the lack of PCIe slots. Since M.2 uses the PCIe protocol, it’s very easy to adapt it to your needs, such as breaking it out to an additional PCIe x4 slot. There are even M.2 10GbE cards in both Intel and Realtek flavors.

        Side question: what Coral TPU do you have? It was my understanding that they use M.2, Mini PCIe, or USB, not a full-size PCIe slot.

        • dan (OP) · 1 year ago

          The classic choice would be Fractal Design’s Node 304

          Thanks. I do like Fractal Design - I’ve been using a Define R4 case for my desktop PC for 10 years now.

          Side question: what Coral TPU do you have? It was my understanding that they use M.2, Mini PCIe, or USB, not a full-size PCIe slot.

          I’ve got one of the M.2 E-key dual-edge TPUs (two TPUs on one device), plus one of these PCIe adapters: https://www.makerfabs.com/dual-edge-tpu-adapter.html. Unfortunately the E-key slot on my SFF PC’s motherboard is CNVi only (no PCIe), and even on motherboards that do have a PCIe M.2 E-key slot, many only expose one of the TPUs since they don’t fully implement the M.2 spec. The PCIe adapter exposes both as separate PCIe devices, and they both work great.

  • randombullet@feddit.de · 1 year ago

    I have the following setup and I love it:

    • Ryzen 5700G
    • 2x 20TB disks in a ZFS mirror
    • 128GB of RAM
    • 250GB NVMe boot drive
    • 1TB NVMe VM drive
    • 1x 2.5GbE
    • 1x 1GbE

    It runs 6 VMs, including EVE-NG.

    I love the hell out of this and never hit any real bottlenecks. Best thing is, I own everything, so service providers can’t drop me.

    My upload speed is limited to 50Mbps, which sucks, but all in all it’s still workable.

    • dan (OP) · 1 year ago

      Wow that’s a lot of RAM… Do you use it all?

    • dan (OP) · 1 year ago

      2x 20TB disks in a ZFS mirror

      What brand of drives are you using? Are they two different brands, or both the same brand?