Home Server/Setting up your Storage

From InstallGentoo Wiki v2
Revision as of 18:08, 23 December 2020

Setting up your RAID Solution

Note: Before reading, if you do not have a grasp of basic RAID concepts, you should read up on those first. This article uses terminology you will not understand without first understanding RAID.

mdadm

ZFS

Note: There are a lot of misconceptions about ZFS and ECC RAM. ECC RAM is NOT required for ZFS to operate. However, ZFS was made to protect data against degradation, and not using ECC RAM to protect against memory errors (and thus data degradation) defeats the purpose of ZFS.

Basic Concepts

Adaptive Replacement Cache (ARC)

Physical disks are grouped into virtual devices (Vdevs). Vdevs are grouped into Zpools. Datasets reside in Zpools.

The actual file system portion of ZFS is a dataset, which sits on top of the Zpool. This is where you store all of your data. There are also Zvols, which are the equivalent of block devices (or LVM LVs). You can format these with other file systems like XFS, or use them as block storage, but for the most part we will be using just the standard ZFS file system. There are also Snapshots and Clones, which we will talk about later.

Zpools stripe data across all included vdevs.

There are 7 types of vdevs.

  1. Disk: A single storage device. Adding multiple disk vdevs to the same pool without RAIDZ or Mirror is effectively RAID 0.
  2. Mirror: Same as a RAID 1 mirror. Adding multiple mirrored vdevs is effectively RAID 10.
  3. RAIDZ: Parity-based RAID similar to RAID 5. RAIDZ1, RAIDZ2, and RAIDZ3 offer single, double, and triple parity respectively.
  4. Hot Spare: A hot spare, or standby drive, that will stand in for a failed disk until it is replaced with a new one.
  5. File: A pre-allocated file.
  6. Cache: A cache device (typically an SSD) for L2ARC. It's generally not recommended to use this unless you absolutely need it.
  7. Log: A dedicated ZFS Intent Log (ZIL) device, also called a SLOG (Separate intent LOG). These are usually high-performance, durable SLC or MLC SSDs.
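As a concrete sketch of how these pieces fit together, the command lines below build a pool from two mirror vdevs, attach a log device, and create a dataset and a Zvol on top. The pool name, dataset names, and device names are all hypothetical — this is an illustrative fragment, not something to paste blindly.

```shell
# Hypothetical devices; on a real system prefer /dev/disk/by-id/ paths.
# Two mirror vdevs striped together -- effectively RAID 10:
zpool create tank mirror sda sdb mirror sdc sdd

# Add a dedicated SLOG device (fast, durable SSD):
zpool add tank log nvme0n1

# A dataset (the normal ZFS file system) and a 100 GB Zvol:
zfs create tank/media
zfs create -V 100G tank/blockstore   # a Zvol can be formatted with e.g. XFS
```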


Tip: When setting up your drives in ZFS, instead of using the raw disks, make a fixed-size partition on each, a little smaller than the drive's full capacity, and use that instead. Not all disks have exactly the same sector count. Later down the line, if you buy a new drive of a different model or manufacturer to replace a failed disk, it may have fewer sectors even at the correct capacity, and ZFS will not let you use it (see http://www.freebsddiary.org/zfs-with-gpart.php).
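One way to follow this tip: read the disk's sector count, then size the partition a fixed margin (say 100 MiB) short of it. The sector count below is a hypothetical nominal "4 TB" drive; the arithmetic is the point, and the sgdisk line at the end is only a sketch.

```shell
# Hypothetical sector count for a nominal "4 TB" drive with 512-byte sectors;
# on a real system you would get it from: blockdev --getsz /dev/sdX
DISK_SECTORS=7814037168
SECTOR_SIZE=512
RESERVE_MIB=100   # margin so a slightly smaller replacement disk still fits

RESERVE_SECTORS=$(( RESERVE_MIB * 1024 * 1024 / SECTOR_SIZE ))
PART_SECTORS=$(( DISK_SECTORS - RESERVE_SECTORS ))
echo "partition length: ${PART_SECTORS} sectors"
# then partition with e.g.: sgdisk -n 1:2048:+${PART_SECTORS} /dev/sdX
```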

Requirements

All those based features come with requirements.

As mentioned above, it is *highly* recommended to use ECC RAM with ZFS. This means you should NOT use an SBC, a consoomer computer, or a shitty NAS like QNAP or Synology (unless it's some crazy overpriced version with ECC RAM). This is a major limitation of ZFS, but you don't necessarily need a 4U monster rack mount to get it done. Workstation-grade Xeon machines with ECC RAM can be had for cheap (used) on eBay and work great. They look like a normal desktop tower and are silent.

Your drives need to be exposed to the operating system DIRECTLY. That means if you have a hardware RAID card, it has to be set to "IT mode" or "JBOD" (just a bunch of disks). Not all RAID cards support this, so research before you buy. If de-duplication is very important to you, you will need a lot of RAM - 1 GB per TB is the rule of thumb tossed around a lot. If you do not need de-duplication (most people don't), the RAM requirements are reduced.
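The dedup rule of thumb above turns into trivial arithmetic; the pool size here is a made-up example.

```shell
# 1 GB of RAM per TB of pool storage is the dedup rule of thumb quoted above.
POOL_TB=24                      # hypothetical pool size
GB_PER_TB=1
DEDUP_RAM_GB=$(( POOL_TB * GB_PER_TB ))
echo "budget about ${DEDUP_RAM_GB} GB of RAM for dedup tables"
```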

Snapshots and Clones

Should I use ZFS?

ZFS has a lot of really great features that make it a superb file system. It has file-system-level checksums for data integrity, file self-healing which can correct silent disk errors, copy-on-write, incremental snapshots and rollback, file deduplication, encryption, and more.

There are, however, some downsides to ZFS RAIDZ, notably inflexibility and upfront cost. ZFS RAIDZ vdevs CANNOT BE EXPANDED after being created (see https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html). Parity cannot be added either (you cannot change a RAIDZ1 to a RAIDZ2 later on). You cannot use differently sized disks or disks with data already on them (even disks formatted as ZFS). In other words, you need to buy ALL of the drives you plan on using in your RAIDZ array at the same time, because unlike other software RAID (or even hardware RAID), you won't be able to change it later. This inherently requires you to pre-plan your expansion. It is best to budget your hard drive money, save for the major sales, and buy multiple shuckable external hard drives at once. That way you get maximum TB per dollar spent and all the benefits of ZFS RAIDZ.
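When pre-planning that purchase, rough usable capacity is easy to estimate: an N-disk RAIDZ-p vdev yields about (N - p) disks' worth of space, ignoring metadata, padding, and TB-vs-TiB marketing. The disk count and size below are made-up examples.

```shell
# Rough usable capacity of a RAIDZ vdev: (disks - parity) * disk size.
DISKS=6
DISK_TB=8
PARITY=2                         # RAIDZ2
USABLE_TB=$(( (DISKS - PARITY) * DISK_TB ))
echo "~${USABLE_TB} TB usable from ${DISKS} x ${DISK_TB} TB in RAIDZ${PARITY}"
```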

Note that the above limitations and pre-planning only apply if you want RAID-like features. It is perfectly fine to use ZFS on a single drive, and using snapshots + ZFS send/receive makes block-level-perfect backups to a second drive (which can be a different size) that use minimal space, protect against ransomware, and are faster than any rsync or cp copy. Remember that RAID in any form is NOT a backup (https://www.raidisnotabackup.com) - RAID provides uptime and availability in case of a drive failure. So when asking yourself "Should I use ZFS RAIDZ?" you really should be asking "Do I really need RAID?" (Is uptime and availability in case of drive failure very important to me?) and "Can I budget/plan my purchases around buying multiple drives at once?". If the answer to both of those questions is "Yes", then you can and should use ZFS RAIDZ. If no, but you still want uptime, use something else like Snapraid or mdadm. If no, but you still want all the features of ZFS and you meet the requirements listed above, then use ZFS on single drives.
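A minimal sketch of that snapshot + send/receive backup workflow; the pool, dataset, and snapshot names are hypothetical.

```shell
# Snapshot the dataset, then replicate it to a pool on a second drive.
zfs snapshot tank/data@2020-12-23

# First (full) backup to a pool named "backup" on the other drive:
zfs send tank/data@2020-12-23 | zfs receive backup/data

# Later backups only send the blocks changed since the previous snapshot:
zfs snapshot tank/data@2020-12-24
zfs send -i tank/data@2020-12-23 tank/data@2020-12-24 | zfs receive backup/data
```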


Not Recommended:

  • Running ZFS on ancient hardware.
  • Running ZFS on consoomer motherboards.
  • Running ZFS without ECC RAM. If you can afford ZFS you can afford ECC RAM. No excuses.
  • Running ZFS on underqualified hardware (shitty little NAS boxes, SBCs, etc.).
  • Using "mutt" pools (Zpools with differently sized vdevs).
  • Growing your Zpool by replacing disks one at a time. Instead, back up your data elsewhere, create a new pool, and transfer the data to the new pool - much faster. (You could theoretically use a USB drive dock, provided your array is 5 disks or fewer.)


DO NOT

  • Run ZFS on top of hardware RAID.
  • Run ZFS on top of other software RAID.
  • Run ZFS in a VM without taking the proper precautions (https://www.ixsystems.com/blog/yes-you-can-virtualize-freenas/).
  • Run ZFS with SMR drives (https://www.ixsystems.com/blog/library/wd-red-smr-drive-compatibility-with-zfs/).

DO

  • Run ZFS if you have ECC RAM and a Sandy Bridge or newer processor
  • Run ZFS if you care about having the best and most robust feature set of any file system
  • Use snapshots (see syncoid, sanoid, and other handy tools to schedule and manage snapshots)
  • Use ZFS send/receive for backups
  • Sleep soundly with a smile on your face knowing that you have the most based filesystem in the world


Btrfs

Snapraid

Hardware RAID

Warning: Using hardware RAID and software RAID at the same time is NOT recommended. If you wish to pool multiple hardware RAID arrays together into a single logical volume, use LVM.
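For example, two hardware RAID arrays that each appear to the OS as a single block device could be pooled with LVM like this (device and volume names are hypothetical):

```shell
# Each hardware RAID array shows up as one block device, e.g. /dev/sda, /dev/sdb.
pvcreate /dev/sda /dev/sdb             # mark both arrays as LVM physical volumes
vgcreate storage_vg /dev/sda /dev/sdb  # pool them into one volume group
lvcreate -l 100%FREE -n storage_lv storage_vg
mkfs.xfs /dev/storage_vg/storage_lv    # format the combined logical volume
```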

If you bought an old used server with a RAID controller already installed, or perhaps you don't feel like messing with software RAID solutions, you have the option of using hardware RAID rather than software RAID.


Choosing a file system

XFS

ext4

NTFS

If you are using Snapraid as your RAID solution, using NTFS-formatted drives is perfectly fine. With Snapraid you are usually pulling in random drives you have lying around, which are most likely NTFS-formatted. Otherwise, we do not recommend using NTFS unless you are running a Windows server for some reason; it does not have the same level of support on Linux and UNIX-based systems as ext4 and XFS.

unRAID does not support NTFS for array drives. If you are using unRAID you will need to use one of its supported filesystems, such as XFS or Btrfs.


Distributed Filesystems

CEPH

seaweedfs

lizardfs

moosefs

External Links

See also