FreeNAS is a storage appliance based on FreeBSD, designed to make it easy to build and manage vast swathes of data across a number of standard protocols such as iSCSI and NFS. Disks are managed using ZFS, a powerful filesystem capable of providing software RAID, deduplication, compression and other capabilities that make it attractive for use cases where reliability and resilience are important. I've seen videos on YouTube of people literally unplugging sticks of RAM while the box has been taking writes, and it still hasn't gone pop. It's a solid bit of kit. That's not to imply that it isn't capable of decent performance, either. ZFS can reach some seriously high speeds so long as you understand your workload and design for your use case, sizing disks and speccing vdevs correctly.
FreeNAS is a natural choice for my use case, where the storage box is going to run a workload almost exclusively of virtual machines on top of oVirt, the community version of Red Hat's Enterprise Virtualisation product, and I'm aiming for as much performance as possible, even if that means losing a bit of capacity. I also have some other use cases where I want to use NFS for infrequently accessed data, or possibly backups from other systems. Because of these requirements, I've opted to create two pools.
The first, the "gold pool", will be the fast pool I use to back my VMs in oVirt. It'll be formed of 20 WD Blue SSDs - nothing fancy, these are just consumer grade SSDs, but I found them to be the best balance between cost, performance and longevity without having to invest in something with a ridiculous number of PCIe lanes for NVMe drives. The SSDs will be arranged into 10 vdevs of mirrored pairs. I chose this because, from the research I've done, this configuration seems to have the best write performance profile. Reads can be serviced by either disk in a mirror, and writes are striped across all 10 vdevs. If I had a focus on capacity over performance then I would investigate something like a bunch of raidz2 vdevs instead.
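As a rough sketch of what that layout means at the ZFS level (FreeNAS would normally build this through the UI, and the device names here are assumptions):

```shell
# Hypothetical: create a pool of 10 mirrored vdevs from 20 SSDs.
# Device names (da0..da19) are assumed. Writes stripe across all
# 10 vdevs; each mirror can service reads from either of its disks.
zpool create gold \
  mirror da0  da1 \
  mirror da2  da3 \
  mirror da4  da5 \
  mirror da6  da7 \
  mirror da8  da9 \
  mirror da10 da11 \
  mirror da12 da13 \
  mirror da14 da15 \
  mirror da16 da17 \
  mirror da18 da19
```

A nice property of this layout is that it can be grown later by attaching another `mirror daX daY` pair with `zpool add`.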
My second pool, the "bronze pool", will be a raidz pool formed of 5x 5TB Seagate Barracuda 2.5-inch disks. They were the biggest 2.5-inch disks I could get hold of at the time, and I ended up filling the last 5 slots in my 25-bay storage server with them. In hindsight, these drives may not be the best choice for this use case; I wish I had opted to add a 3.5-inch disk shelf and got hold of some 10TB disks instead, which may be better suited to this type of storage.
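The equivalent sketch for the bronze pool (again, device names are assumptions, and the FreeNAS UI would normally do this):

```shell
# Hypothetical: a single raidz (single-parity) vdev over the five
# 5TB disks. One disk's worth of capacity is consumed by parity,
# and the pool survives the loss of any one disk.
zpool create bronze raidz da20 da21 da22 da23 da24
```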
8 or 10TB is the point where you start needing to think about disk failure rates: if a disk in your raidz pool fails, there is a statistical risk that you experience a second failure while the resilver is running, both because of the intensity of the operation and because the remaining disks are likely all around the same age and type. In that case raidz2 is a much safer option, although the penalty for safety is losing some more usable disk space. I'm not going to pretend I know too much about the reasons why that is a statistical risk; I read it somewhere on the internet, and it seemed logical.
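To put a rough number on that space penalty, using the bronze pool's 5x 5TB disks as the example (this ignores ZFS metadata overhead, so real usable space would be a little less):

```shell
# Back-of-envelope usable capacity for a 5-disk raidz vs raidz2 pool.
# Numbers match the bronze pool; ZFS overhead is ignored.
disks=5
size_tb=5

raidz1_usable=$(( (disks - 1) * size_tb ))   # one disk of parity
raidz2_usable=$(( (disks - 2) * size_tb ))   # two disks of parity

echo "raidz1: ${raidz1_usable}TB usable"   # raidz1: 20TB usable
echo "raidz2: ${raidz2_usable}TB usable"   # raidz2: 15TB usable
```

So the extra safety of raidz2 would cost a quarter of the pool's usable space in this configuration.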
I have an NVMe drive in this box too that I'm not using yet. I will take a look at my performance without it first. I think the gold pool should be happy; if anything it will be the bronze pool that needs the extra help.
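The post doesn't say how the NVMe would be used, but the usual candidates for giving a spinning-disk pool "extra help" in ZFS are a separate intent log or a read cache. A sketch, with the device name as an assumption:

```shell
# Hypothetical: add the NVMe to the bronze pool as a separate
# intent log (SLOG), which helps synchronous write latency
# (e.g. NFS sync writes). Device name nvd0 is assumed.
zpool add bronze log nvd0

# Alternatively, as an L2ARC read cache instead:
# zpool add bronze cache nvd0
```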
On the network side of the fence I have 2x 10GbE ports, and I'm not sure how I feel about bonding them for iSCSI traffic, so I have opted to put each one on its own VLAN and subnet, presented to a non-routable storage network. Oh, and I've decided to use jumbo frames... we'll see if that comes back to bite me :)
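FreeNAS sets this up through its UI, but the equivalent FreeBSD rc.conf would look roughly like the following (interface names, VLAN IDs and addresses are all assumptions for illustration):

```shell
# Hypothetical /etc/rc.conf fragment: one 10GbE port per storage
# VLAN/subnet, jumbo frames (MTU 9000) on both.
ifconfig_cxl0="mtu 9000 up"
ifconfig_cxl1="mtu 9000 up"

# Create one VLAN per physical port, each on its own subnet.
vlans_cxl0="100"
vlans_cxl1="101"
ifconfig_cxl0_100="inet 10.0.100.10/24"
ifconfig_cxl1_101="inet 10.0.101.10/24"
```

One thing to watch with jumbo frames is that every hop on the storage network (switch ports included) needs the larger MTU, or you get hard-to-diagnose stalls.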
Anyway, I hope this is interesting - please let me know!