
Using ZFS on Linux is an attractive solution for a high-performance NFS server due to several key factors: cost, and the ability to use commodity hardware with free software.

In this case, I installed ZFS on CentOS 6.4. The hardware I used was an HP DL370 G6 with 11 3TB disks to be used for ZFS. The next step after updating the system (yum -y update) is to install ZFS; I followed the instructions here:

yum localinstall --nogpgcheck

The next step is to load the ZFS module (drivers) with the following command:

modprobe zfs

Now that you've installed the ZFS driver, let's make sure it loaded properly with the following command:

lsmod | grep zfs

The output should show the loaded ZFS modules.
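As a rough guide, a healthy result looks something like the following (module names are from ZFS on Linux; the sizes and use counts are illustrative, not output from my system):

# Verify the ZFS module stack is loaded (spl is the support module ZFS depends on).
lsmod | grep -E 'zfs|spl'
# Illustrative shape of the output:
#   zfs      1235432  0
#   zcommon    47120  1 zfs
#   znvpair    80252  2 zfs,zcommon
#   spl       161496  5 zfs,zcommon,znvpair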
Now I want to create my ZFS array. To do that, I need to find the device IDs of my hard drives. Running the command:

fdisk -l | grep 3000.6

gives me the list of the 3TB drives that I'm going to use for the ZFS array. The output looks like:

Disk /dev/sdc: 3000.6 GB, 3000559427584 bytes
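One caveat worth adding: /dev/sdX names can change between reboots, so many ZFS guides recommend building the pool from persistent names instead. A minimal sketch, with made-up device IDs:

# List persistent identifiers for whole disks (skip partition entries).
ls -l /dev/disk/by-id/ | grep -v part
# Example mapping (the ID here is made up):
#   scsi-SATA_Hitachi_HUA7230_ABC123 -> ../../sdc
# A pool created from these names survives device reordering:
#   zpool create nfspool raidz /dev/disk/by-id/scsi-SATA_Hitachi_HUA7230_ABC123 ...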

So now that I have the device IDs, let's create the array using the ZFS RAIDZ raid type. RAIDZ is very popular among many users because it gives you the best tradeoff of hardware-failure protection versus usable storage. It is very similar to RAID5, but without the write-hole penalty that RAID5 encounters. The drawback is that on random reads you are limited to basically the speed of one drive, since each block and its parity are spread across all drives in the vdev. This causes slowdowns when doing random reads of small chunks of data, which is why RAIDZ is most popular for storage archives where the data is written once and accessed infrequently.

Here we create the array named nfspool in raidz format, using devices sdb-sdl:

zpool create nfspool raidz sdd sdc sdb sde sdf sdg sdi sdk sdj sdh sdl
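Before putting data on the pool, it's worth confirming it assembled cleanly; the standard checks are:

# Show pool health and layout; "state: ONLINE" means all 11 disks joined the raidz vdev.
zpool status nfspool
# Show capacity; raidz on 11 x 3TB disks leaves roughly 10 disks' worth usable.
zpool list nfspool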

Then we go ahead and create a filesystem on top of the array using:

zfs create nfspool/lun000

We then set the filesystem permissions for NFS:

chmod 777 /nfspool

To serve the pool over NFS you also need the NFS server installed (on Ubuntu: sudo apt-get install -y nfs-kernel-server). Now let's share the ZFS filesystem using NFS, which is built in to the filesystem itself! There is no need to edit /etc/exports and run exportfs:

zfs set sharenfs='rw=@10.1.0.0/24,rw=@10.0.0.0/24,all_squash' nfspool/lun000

Now let's start the ZFS NFS share using:

zfs share -a
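To confirm the share actually went live, you can check both the ZFS property and what the kernel NFS server is exporting:

# The property as stored on the dataset.
zfs get sharenfs nfspool/lun000
# What the NFS server is actually exporting right now.
exportfs -v
showmount -e localhost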
Now for one minor performance tune:

zfs set sync=disabled nfspool/lun000

While you're at it, go ahead and copy that same command into /etc/rc.local. You should now be able to mount your NFS export via ESXi and write to it! Using tcptrack, I can then monitor my 10Gb bond0 interface to see the transfer rates.
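I test from ESXi, but from a plain Linux client an equivalent smoke test would look like this (the server address 10.1.0.5 and the mount point are hypothetical):

# Mount the export and write a test file to confirm read-write access.
mkdir -p /mnt/nfspool
mount -t nfs 10.1.0.5:/nfspool/lun000 /mnt/nfspool
dd if=/dev/zero of=/mnt/nfspool/testfile bs=1M count=100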
I am setting up ZFS and NFS on Ubuntu Server 16.04 LTS and have a weird issue that is causing me to go crazy. I have set up ZFS and NFS using ZFS shares as described in. Although I have set zfs-share to run zfs share -a during boot, the shares are still not exported, as you can see in my log below. Even weirder, zfs share -a still doesn't work when I run it manually. I can only get the ZFS shares to work if I reset the sharenfs property of one of the shares and then re-run zfs share -a.
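On Ubuntu 16.04 the boot-time export is handled by the zfs-share systemd unit, so a reasonable place to start digging (a debugging sketch, not a confirmed fix) is:

# Did the share unit run at boot, and what did it log?
systemctl status zfs-share.service
journalctl -u zfs-share.service
# zfs-share can only export if the NFS server itself is up.
systemctl status nfs-kernel-server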
