Well, ZFS is awesome, but a bit tricky and overly complicated in some ways.
A drive formatted as ZFS is usually called a pool. A pool can consist of one or more disks (RAID).
The command to handle the pool (the disk(s)) is called "zpool" and is part of the zfs package.
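For illustration, creating and listing pools looks roughly like this (the pool name "tank" and the device paths are placeholders, not from my setup):

# create a pool from a single disk, or from a mirrored pair (RAID 1)
sudo zpool create tank /dev/sda
sudo zpool create tank mirror /dev/sda /dev/sdb
# list the pools currently known to the system
zpool list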
To be able to use a pool and have access to it, one has to import it first. If you use a pool on a new system/server/root, you will have to issue "sudo zpool import" with no arguments to
detect newly available ZFS drives/pools.
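In other words:

# scan attached devices for importable pools without actually importing anything
sudo zpool import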
After detecting the drive, it is cached and can be imported with "sudo zpool import xxxpoolnamexxx" (or just -a for all). Most distros have some sort of service that issues this command on boot. It is
neither necessary nor recommended to run "sudo zpool export xxxpoolnamexxx" (or just -a) on shutdown. Exporting makes the pool completely unavailable to the system and is meant to be used
only when preparing the disk for its physical installation in another system/server. After exporting a pool, it has to be detected again.
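Put together, the import/export workflow looks roughly like this ("xxxpoolnamexxx" stands for your actual pool name):

# import one pool by name, or every pool found in the scan
sudo zpool import xxxpoolnamexxx
sudo zpool import -a
# export only when the disk is about to move to another machine
sudo zpool export xxxpoolnamexxx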
Inside a pool there are datasets (roughly the ZFS equivalent of partitions). These datasets can have a ton of ZFS-specific attributes that do all sorts of stuff; one of those is the dataset's mountpoint attribute. The datasets and their attributes are handled with the command "zfs".
If a given dataset has its mountpoint attribute set to a path, it will
automatically get mounted to that path on creation and every time the pool is imported; ZFS even creates the directory if it didn't exist yet!
Oracle Solaris ZFS Administration Guide wrote:
When you change the mountpoint property from legacy or none to a specific path, ZFS automatically mounts the file system.
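Concretely, setting the attribute is enough to trigger the mount (the dataset name "tank/data" and the path are placeholders):

# show the current mountpoint attribute
zfs get mountpoint tank/data
# setting it to a path mounts the dataset there right away
sudo zfs set mountpoint=/srv/data tank/data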
This is part of the ZFS magic: no fstab, no mount commands needed. For this reason I don't really understand the rationale behind the systemd mount service, but I seem to remember always having to enable the import AND the mount service on systemd-based distros. Then again, ZFS is a deep topic and there is surely a reason for it, one probably being that ZFS allows for a massive variety of different setups, which calls for handling lots of different situations.
For my simple uses on Obarun, a short "sudo zpool import -a" has worked fine in the past to make a single ZFS disk available and mount all datasets to their default mountpoint attributes, without any need for "zfs mount". The latter is usually advised only for manual mounting.
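Should a dataset ever not come up by itself, that manual mounting would look like this (dataset name again a placeholder):

# mount every dataset that has a regular (non-legacy) mountpoint
sudo zfs mount -a
# or mount a single dataset by name
sudo zfs mount tank/data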
Now to the 66 service frontend file. It seems the command fails or doesn't get executed at all. Where can I find a log to debug this?
The file should clearly look something like this (note that there should be no [stop] section; see the explanation above):
[main]
@type = oneshot
@name = mount-zfs
@description = "Import ZFS storage pools"
@user = ( root )
@depends = ( 00 )
@options = ( env )

[start]
@build = auto
@execute =
(
    execl-envfile ${conf_file}
    ifelse -X { s6-test ${ZFS} = yes }
    {
        if { 66-echo -- importing ZFS pools... }
        # check for zpool, since that is the binary actually invoked below
        if { 66-which -q zpool }
        if { zpool import -a }
        66-echo -- ZFS pools imported successfully
    }
    66-echo -- [mount-zfs] deactivated
)

[environment]
conf_file=!/etc/66/boot.conf
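I would then expect to activate it with something like this (assuming the frontend file sits where 66 looks for services and that the boot tree is named "boot", as on a default Obarun install; adjust if yours differs):

sudo 66-enable -t boot mount-zfs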