TrueNAS SCALE¶
Specs¶
- MOBO: Asus Pro WS X570 ACE
- CPU: Ryzen 5900x
- RAM: 32 GB
- BOOT: Samsung 970 EVO 250 GB
- NIC: Intel X520
- GPU: NVidia Titan X
Setup¶
Prerequisites¶
- Create iSCSI VLANs (21, 22) on switch. Set SFP+ port to TAGGED for LAB, iSCSI_1, iSCSI_2
- Set up gateway on switch for vlan21 and vlan22
- Set up default routes on switch for vlan21 and vlan22
Networking¶
- Set up basic networking for the Web console on `enp8s0`: 10.2.1.1/16
- Assuming standard 1G Ethernet `enp8s0` and Intel X520-1 10G NIC `enp4s0` (ref):
    - define Static Routes (verification sketch after this list):
        - 10.2.0.0/16 to 10.2.0.1 (to OPNsense LAB interface)
        - 10.21.21.0/24 to 10.21.21.1 (to iSCSI gateway on 10G switch)
        - 10.22.22.0/24 to 10.22.22.1 (to iSCSI gateway on 10G switch)
    - `enp8s0`: the standard interface we created in step 1 above
    - `enp9s0f1`: connect to LAN as pseudo 'IPMI' interface; assign IP in LAN/MGMT subnet (or do not assign in TrueNAS)
    - `enp4s0`: do not assign IP
- add `systemctl restart ix-netif.service` as a post-init command in Data Protection > Init/Shutdown Scripts so that manual reconfig is not required every reboot (v20.12)
- create VLAN interfaces:
    - vlan21 for iSCSI_1; assign IP in 10.21.21.0/24
    - vlan22 for iSCSI_2; assign IP in 10.22.22.0/24
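A quick sanity check from the TrueNAS shell once the config is applied (these commands only inspect and ping, they don't change anything; interface names and gateways are the ones from the list above):

```sh
ip -d link show vlan21           # should show 802.1Q id 21 riding on the 10G NIC (enp4s0)
ip addr show vlan21              # address in 10.21.21.0/24
ip addr show vlan22              # address in 10.22.22.0/24
ip route                         # the static routes above should be listed
ping -c 3 -I vlan21 10.21.21.1   # iSCSI_1 gateway on the 10G switch
ping -c 3 -I vlan22 10.22.22.1   # iSCSI_2 gateway
```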
Storage Pool¶
- In the console, run `lsblk` (SCALE) or `geom disk list` (CORE) to see names/IDs of all identified disks
- Set up zpool (verification sketch after this list)
- (OPTIONAL) Set NVMe SSD as L2ARC with `zpool add <pool_name> cache /dev/<drive_id>`
- Set Optane as SLOG with `zpool add <pool_name> log /dev/<drive_id>`
    - First overprovision the drive with `disk_resize <device> <size>`
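After adding the cache and log devices, a quick check that they attached where expected (pool name is whatever you chose above):

```sh
zpool status <pool_name>        # 'logs' and 'cache' sections should list the Optane and NVMe devices
zpool iostat -v <pool_name> 5   # per-vdev I/O stats, refreshed every 5 seconds (Ctrl-C to stop)
```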
SMART and SCRUB tasks¶
iSCSI shares¶
- Follow wizard:
- Select appropriate zvol; configure for VMWare
- For Portal, set IP addresses to the addresses assigned to the iSCSI VLANs
- For Initiators, set authorized networks to 10.2.0.0/20, 10.21.21.0/24, and 10.22.22.0/24 (discovery check sketched below)
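To confirm the target is reachable over the iSCSI VLANs before pointing ESXi at it, a discovery check from any Linux box with open-iscsi installed works; the portal IP and target IQN below are placeholders for whatever the wizard created:

```sh
# Portal IP = the vlan21 address set in the Portal step; IQN comes from the wizard
iscsiadm -m discovery -t sendtargets -p <portal_ip>:3260
# optional: log in to confirm the Initiators / authorized-network settings
iscsiadm -m node -T <target_iqn> -p <portal_ip>:3260 --login
```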
Refs:
- https://www.servethehome.com/building-a-lab-part-3-configuring-vmware-esxi-and-truenas-core/
- democratic-csi/README.md
NFS shares¶
- Create Dataset in Storage for appropriate pool
- Create share in Shares > NFS
- Ensure mapall user and mapall group are `root`
- Ensure permissions are allowed for internal networks (client-side mount test below)
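A client-side smoke test, assuming a Linux client on one of the allowed networks (server IP and dataset path are placeholders):

```sh
showmount -e 10.2.1.1                                   # list exports offered by TrueNAS
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.2.1.1:/mnt/<pool>/<dataset> /mnt/nfs-test
df -h /mnt/nfs-test                                     # confirm the export mounted
sudo umount /mnt/nfs-test
```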
SMB share / Time Machine volume¶
- Create `timemachine` user and group
- Create dataset for share
- Grant "timemachine" group full control of the timemachine dataset through the ACL editor:
    - View Permissions > update owner to `timemachine`
    - Set ACL > Use ACL Preset > `POSIX - Restricted`
- Create `SMB` share pointing to the `timemachine` dataset; set as `multi-user time machine` share
- Restart SMB server
- Set quota on Mac for auto pruning (a terminal-based way to attach the share is sketched after this list):
    - Identify the `timemachine` destination ID: `tmutil destinationinfo`
    - Set quota: `sudo tmutil setquota <DESTINATION_ID> <QUOTA_IN_GB>`
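The Mac can also be pointed at the share from the terminal instead of System Settings. A sketch, assuming the server is reachable as `truenas.local` and using the `timemachine` user created above (hostname and credentials are placeholders):

```sh
# Attach the SMB share as a Time Machine destination
sudo tmutil setdestination -a "smb://timemachine:<password>@truenas.local/timemachine"
tmutil destinationinfo   # should now list the share; note the ID for the quota step above
```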
Refs:
- https://www.truenas.com/community/threads/multi-user-time-machine-purpose.99276/#post-684995
- https://www.reddit.com/r/MacOS/comments/lh0yjc/configure_a_truenas_core_share_as_a_time_machine/
S3 with Minio¶
As of `TrueNAS-SCALE-22.12.3.1`, the integrated MinIO/S3 service is deprecated. Instead, iXsystems suggests using the MinIO app (which deploys a container via the TrueNAS k8s cluster).
Documentation is poor; the community forum has a decent walkthrough, replicated here for reference.
If migrating from integrated Minio to Minio app, deploy both to migrate data. Ensure the 'new' Minio uses nonstandard ports so there is no overlap/collision.
- Create a new ZFS dataset for minio
- Using the shell, create 2 directories within that dataset: `mkdir -p </path/to/minio/>{certs,data}`
- Create a TrueNAS cron job (System Settings → Advanced → Cron Jobs) to copy the LE certificate into the MinIO certs directory:

| Key | Value |
|---|---|
| Description | MinIO certs |
| Command | `cp /etc/certificates/le-prod-cert.crt /mnt/ssdpool/minio/certs/public.crt && cp /etc/certificates/le-prod-cert.key /mnt/ssdpool/minio/certs/private.key && chmod 444 /mnt/ssdpool/minio/certs/private.key` |
| Run As User | root |
| Schedule | Daily |
- Create a minio deployment:
    - Apps > search "Minio" > Install
    - Configure the app with the following values:

| Key | Value |
|---|---|
| Application Name | Minio |
| Version | (whatever is latest) [1.7.16] |
| *Workload Configuration* | |
| Update Strategy | Create new pods and then kill old ones |
| *Minio Configuration* | |
| Enable Distributed Mode | disabled [needs 4 instances] |
| Minio Extra Arguments | No items have been added |
| Root User | Access Key: [lowercase only] |
| Root Password | Security Key: [alpha-numeric only] |
| Minio Image Environment | No items have been added |
| *Minio Service Configuration* | |
| Port | default: 9000 |
| Console Port | default: 9002 |
| *Log Search API Configuration* | |
| Enable Log Search API | Disabled [Requires Postgres Database] |
| *Storage* | |
| Minio Data Mount Point Path | /export |
| Host Path for Minio Data Volume | Enabled |
| Host Path Data Volume | </path/to/minio>/data |
| Extra Host Path Volumes: Mount Path in Pod | /etc/minio/certs |
| Extra Host Path Volumes: Host Path | <path/to/minio>/certs/ |
| *Postgres Storage* | |
| Postgres Data Volume | Disabled |
| Postgres Backup Volume | Disabled |
| *Advanced DNS Settings* | |
| DNS Configuration / DNS Options | No items have been added |
| *Resource Limits* | |
| Enable Pod resource limits | Disabled |
- Edit the minio deployment to set the deployment status probes to use `HTTPS` (change `scheme: HTTP` to `scheme: HTTPS` in each probe shown below):

    ```sh
    k3s kubectl edit deployment.apps/minio -n ix-minio
    ```

    Edit with vi: use `i` to enter insert mode, `esc` to exit, and `:wq` to save and quit.

    ```yaml
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /minio/health/live
        port: 9001
        scheme: HTTP
    ...
    readinessProbe:
      failureThreshold: 5
      httpGet:
        path: /minio/health/live
        port: 9001
        scheme: HTTP
    ...
    startupProbe:
      failureThreshold: 60
      httpGet:
        path: /minio/health/live
        port: 9001
        scheme: HTTP
    ...
    ```
- If replacing the built-in Minio service, replicate the old Minio deployment into the new app. The `<old>`/`<new>` placeholders below are `mc` aliases for the two endpoints (see the alias sketch after this list).
- Sync configurations (if needed) (NOTE: skipped this step)

    ```sh
    mc admin config export <old> > config.txt
    # edit as needed
    mc admin config import <new> < config.txt
    ```
- Compare policies and sync

    ```sh
    mc admin policy list <old>
    mc admin policy list <new>
    # if replication needed
    mc admin policy info <old> <policyname> -f <policyname>.json
    mc admin policy add <new> <policyname> <policyname>.json
    ```
- Export/add users

    ```sh
    mc admin user list <old>
    # if replication needed
    mc admin user add <new> <name>  # this will prompt for secret key
    ```
- Scale down k8s resources that require minio:

    ```sh
    # suspend
    flux suspend hr -n default --all \
      && kubectl scale deploy -n default --replicas=0 --all \
      && kubectl annotate cluster postgres -n default --overwrite cnpg.io/hibernation=on
    flux suspend hr -n datasci --all \
      && kubectl scale deploy -n datasci --replicas=0 --all \
      && kubectl annotate cluster datasci -n datasci --overwrite cnpg.io/hibernation=on
    flux suspend hr -n monitoring --all \
      && kubectl scale deploy -n monitoring --replicas=0 --all
    flux suspend hr -n volsync --all \
      && kubectl scale deploy -n volsync --replicas=0 --all
    ```
- Mirror data

    NOTE: run the k8s scale-down (previous step) before mirroring!

    ```sh
    mc mirror --preserve <old> <new>
    ```
- Stop and disable the S3 Service to prevent it from restarting
- Stop the MinIO Application and adjust ports as needed to replace the S3 Service
    - Adjust in TrueNAS
    - Update the `mc` `config.json`
- Scale deployments back up

    ```sh
    # resume
    kubectl annotate cluster postgres -n default cnpg.io/hibernation-
    kubectl annotate cluster datasci -n datasci cnpg.io/hibernation-
    flux resume hr -n default --all
    flux resume hr -n datasci --all
    flux resume hr -n monitoring --all
    flux resume hr -n volsync --all
    ```
- Update the Prometheus config for the new minio

    ```sh
    mc admin prometheus generate <new>
    ```
- Allow MinIO to scrape its own metrics: set 2 environment variables on the MinIO application configuration (TrueNAS GUI > Apps > Applications > MINIO > Edit > Minio Image Environment):
    - `MINIO_PROMETHEUS_URL` → `https://prometheus.${SECRET_DOMAIN}`
    - `MINIO_PROMETHEUS_JOB_ID` → job name (`truenas-minio`)
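The `<old>`/`<new>` arguments in the `mc` commands above are client aliases. If they aren't defined yet, something like the following sets them up (endpoints, ports, and keys are placeholders for the old S3 service and the new MinIO app):

```sh
mc alias set old https://truenas.example.com:<old_s3_port> <OLD_ACCESS_KEY> <OLD_SECRET_KEY>
mc alias set new https://truenas.example.com:<new_minio_port> <NEW_ACCESS_KEY> <NEW_SECRET_KEY>
mc alias list   # confirm both aliases are registered
```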
Refs:
- https://www.truenas.com/community/threads/truenas-scale-s3-service-to-minio-application-migration.110787/
- https://www.truenas.com/docs/scale/scaletutorials/apps/communityapps/minioclustersscale/minioclustering/
- https://www.truenas.com/docs/scale/scaletutorials/apps/communityapps/minioclustersscale/miniomanualupdate/
Enable WebDav share to host files¶
PXEboot server: see pxe.md

- Create `pxeboot` dataset
- Create `webdav` share for the `pxeboot` dataset (NOTE: TrueNAS now requires installing WebDAV as a container app); quick check sketched below
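A quick check that the share is actually serving files (host, port, credentials, and file name are all placeholders):

```sh
# Fetch a known file from the pxeboot share over WebDAV
curl -u webdav:<password> -O http://truenas.local:<webdav_port>/pxeboot/<some_file>
```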
Troubleshooting¶
SMART test controls¶
Assuming a drive named 'sdb':

- `smartctl -a /dev/sdb` (show all SMART attributes)
- `smartctl -t short /dev/sdb` (perform short SMART check)
- `smartctl -t long /dev/sdb` (perform long SMART check)
- `smartctl -c /dev/sdb` (show how long tests would take; not entirely accurate)
- `smartctl -l selftest /dev/sdb` (show only test results, versus `smartctl -a`, which shows everything)
- `smartctl -X /dev/sdb` (stop a test in progress)

Hint: if results are too long to scroll, append `| more` to the end of the command to paginate.
Here's a loop to keep the drive spun up if you use a USB dock that puts the drive to sleep after a period of time. Use Ctrl-C to break. (not necessarily FreeNAS related)
```sh
while true; do clear; smartctl -l selftest /dev/sdb; sleep 300; done
```
Read multiple SMART reports using "save_smartctl.sh":

```sh
#!/bin/bash
### call script with "save_smartctl.sh /path/to/outfile"
# Disks to report on
declare -a DiskArray=("sda" "sdb" "sdc" "sdd" "sde" "sdf" "sdg" "sdh" "sdi" "sdj" "sdk" "sdl" "sdm")
# Append the full SMART report for each disk to the output file
for val in "${DiskArray[@]}"; do
  smartctl -a /dev/${val} >> "$1"
done
```
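An alternative sketch that globs whatever `/dev/sd?` devices exist instead of maintaining the array by hand (same usage):

```sh
#!/bin/bash
### call script with "save_smartctl.sh /path/to/outfile"
# Append a labeled SMART report for every /dev/sd? device to the output file
for dev in /dev/sd?; do
  echo "===== ${dev} =====" >> "$1"
  smartctl -a "${dev}" >> "$1"
done
```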
QOL Changes¶
- Change timeout for session
- Allow `apt` install: `chmod +x /bin/apt*`
- Install Eternal Terminal
Expanding VM Disk¶
If you have a Linux VM that uses LVM for its filesystem, you can easily increase the disk space available to the VM.
The Linux Logical Volume Manager (LVM) lets you layer logical volumes (LV) on top of volume groups (VG) on top of physical volumes (PV), i.e. partitions.
This is conceptually similar to zvols on pools on vdevs in zfs.
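To make the layering concrete, building that same stack from scratch on a spare partition looks roughly like this (device and names are placeholders, not part of the procedure below):

```sh
pvcreate /dev/sdb1                  # physical volume on a partition
vgcreate demo-vg /dev/sdb1          # volume group on top of the PV
lvcreate -n demo-lv -L 10G demo-vg  # logical volume carved out of the VG
mkfs.ext4 /dev/demo-vg/demo-lv      # filesystem on top of the LV
```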
NOTE: These commands may require root or 'sudo' access
Useful commands¶
```sh
pvs        # list physical volumes
lvs        # list logical volumes
lvdisplay  # logical volume display (detailed)
pvdisplay  # physical volume display (detailed)
df         # disk free space
```
- Get current status

    ```sh
    df -h   # get human-readable disks
    lvs     # view logical volumes
    pvs     # view physical volumes
    ```

    In this example, we assume the `ubuntu-lv` LV is on the `ubuntu-vg` VG, which is on the PV `/dev/sda3` (that's partition 3 of device sda).
- Shut down the VM.
- Edit the ZVOL in TrueNAS to change the size.
- Restart the VM.
- In the VM, run `parted` with the device ID, repair the GPT information, and resize the partition, as below.

    ```sh
    parted /dev/sda

    ### in 'parted' ###
    # show partitions
    print
    # parted will offer to fix the GPT; run the fix with 'f'
    # resize the partition (we use 3 because '/dev/sda3')
    resizepart 3 100%
    # exit 'parted'
    quit
    ```
- Now that the partition table has been resized, we have to resize the physical volume

    ```sh
    pvdisplay           # get current status
    pvresize /dev/sda3  # resize
    pvdisplay           # check work
    ```

- Use `lvextend` to resize the LV and grow the filesystem over the resized physical volume.

    ```sh
    lvextend --resizefs ubuntu-vg/ubuntu-lv /dev/sda3
    ```

- Finally... you can check the free space again.

    ```sh
    df -h
    ```