With the new x23 models being delivered, a lot of chat has started up again surrounding the NVMe slots on Synology Diskstations. Until now, the only official way to use these slots was as a read or read/write cache, not always something that was a) desired, b) needed, or c) actually that useful, given the way an associated volume would crash if the r/w cache failed. A better option (most agreed) would be to use those NVMe SSDs as a separate volume.
The DS923+ has been given that ability right out of the box, and can be set up from DSM. Officially, Synology has not provided support for any older models, but there is a way... and that way is detailed below.
The first thing that needs to be said is that what follows below is a totally unsupported method to set up an SSD volume on your NAS using the NVMe slots, which means if there's an issue, Synology won't help you until you remove it. Make sure you have suitable backups of all your data before attempting this, and continue to back up data (as is good practice) once it's all working.
Also worth noting: I completed this setup with DSM 7.1 on a DS920+, however the same steps have been reported to work since DSM 6 on various other models, since 2018 at least.
- A Diskstation with NVMe slots (check here to see if your model is on the list)
- 1 or 2 NVMe drives (I'm using Samsung 970 EVO Plus 500GB drives)
- Physical access to your NAS to install the drives
- SSH access to your NAS (click the link if you don't know how to do that)
- Sudo/root access (normally by using the same password as any user with admin permissions)
- Access to DSM
- Power down your NAS
- Unplug your unit from all power and data supplies
- Access your NVMe slots and install the NVMe SSD(s) (I have no affiliation with the video makers below, just something I found which I think helps you install the SSDs if you don't know how)
- Switch your NAS back on and wait for it to complete booting up
Setting up the NVMe disks for Storage Pool use
- SSH into your NAS with an admin user
- Run cat /proc/mdstat and note how many md numbers you have. You will need to use the next number in the sequence for all commands using md below (mine go up to md2, so in the commands below I specify md3)
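For context, cat /proc/mdstat prints one block per array. DSM reserves md0 (the system partition) and md1 (swap), and each storage pool takes the next number up. The exact output varies by model and pool layout, so the sketch below is illustrative only, not what you will see verbatim:

```shell
cat /proc/mdstat
# Illustrative output only (yours will differ):
#   md2 : active raid1 sata1p3[0] sata2p3[1]   <- existing storage pool
#   md1 : active raid1 sata1p2[0] sata2p2[1]   <- swap
#   md0 : active raid1 sata1p1[0] sata2p1[1]   <- DSM system partition
# Highest existing array here is md2, so the new array would be md3.
```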
- Type or copy/paste the following lines one at a time:
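If you want a feel for the shape of the sequence before you start, the widely reported procedure for two drives looks like the sketch below. This is a hedged outline, not a definitive listing: it assumes your drives appear as /dev/nvme0n1 and /dev/nvme1n1 and that md3 is your next free md number — double-check both on your own system before running anything, as these commands will destroy any data on the drives.

```shell
# Check the drives are detected and confirm their device names
ls /dev/nvme*
fdisk -l /dev/nvme0n1
fdisk -l /dev/nvme1n1

# Create the Synology partition layout on each drive
# (12 is the partition layout id used by DSM 7; see the repair
# section later in this article for the same syntax)
synopartition --part /dev/nvme0n1 12
synopartition --part /dev/nvme1n1 12

# Step 10 in this article's numbering: build a RAID 1 array
# from the third partition of each drive
mdadm --create /dev/md3 --level=1 --raid-devices=2 --force \
    /dev/nvme0n1p3 /dev/nvme1n1p3

# Step 11: check resync progress
cat /proc/mdstat
```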
- The above uses 2x NVMe cards to create a single storage pool in RAID 1. You can accomplish a non-RAID volume by only running the commands for nvme0n1, and in step 10 changing the mdadm command to reference a single device (for example --raid-devices=1 with only /dev/nvme0n1p3)
- Assuming you are creating a RAID array, run the line at step 11 every 5 minutes or so until it shows that the resync has been completed. You will see your new md alongside your existing ones
- Once complete, run the following lines one after the other:
12. echo 0 > /sys/block/md3/queue/rotational (Tells the system the array sits on SSDs rather than spinning disks)
13. mkfs.btrfs -f /dev/md3 (Formats the array as btrfs)
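If you'd like to sanity-check the result before rebooting (my own addition, not part of the original steps), you can inspect the array and the new filesystem from the same SSH session, assuming your array is md3 as above:

```shell
# Confirm the array is clean and all member drives are active
mdadm --detail /dev/md3

# Confirm the btrfs filesystem was created on the array
btrfs filesystem show /dev/md3
```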
- When all that has been done, reboot your DiskStation and log in to DSM
- Open Storage Manager, and at the top left you should see Available Pool 1 under Storage. Click it
- Click the three dots to the right of the screen, and then Online Assemble in the drop-down box
- Click Apply on the prompt which comes up:
- Your Storage Manager should now look something like this:
You now have your own SSD volume using NVMe SSD cards.
Enabling TRIM
TRIM is a setting (available on select NAS models including the DS920+) which aims to increase the efficiency and lifespan of your SSD cards by improving read and write performance.
To enable it in DSM, go to Storage Manager, select the 3 dots to the right of your SSD storage pool and click Settings. You will be shown a screen with a checkbox for Enable TRIM, which also allows you to set a schedule. Click what you need to, and then save your changes.
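If you want to confirm from the shell that TRIM actually works on the new volume, you can run a manual pass with fstrim. Note the volume path below is an assumption — check where your SSD volume is actually mounted first:

```shell
# Find the mount point of your SSD volume (path varies per system)
mount | grep volume

# Run a manual TRIM pass against it; -v reports how much was trimmed
# (/volume2 is an assumed path - substitute your own)
fstrim -v /volume2
```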
People claim to have been using this or a similar method since DSM 6 days. Happily, they say this persists through updates, even into DSM 7. Note however that there are a few others who have not had this experience, so it is still unclear exactly what will happen to your system.
It's also possible that future versions of DSM will block off this workaround. For those of you not yet willing to upgrade your machine, let's hope this doesn't happen.
As with all changes to your NAS, especially non-supported ones, you may experience data loss. I know I said it at the top of the article, but always have suitable backups.
If a drive in a regular RAID 1/5/SHR etc. crashes, you'd just pull that drive, insert a new one, navigate to that volume and hit Repair. That's not possible in this instance, as any new NVMe card inserted into the slot is automatically treated as a cache drive, not storage.
To get your system to recognise it as storage (and therefore usable to rebuild the array), you need to SSH into your system and run the following separately:
synopartition --part /dev/nvme1n1 12
mdadm --manage /dev/md3 -a /dev/nvme1n1p3
The above assumes the replacement drive is nvme1n1 and your array is md3. Modify your commands as necessary.
After a NAS reboot, you should then be able to see a 'repair' option in Storage Manager for your degraded SSD array.
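The rebuild can also be watched from the shell while it runs (again assuming the array is md3, as above):

```shell
# Overall resync/recovery progress across all arrays
cat /proc/mdstat

# Detailed state of the SSD array, including rebuild percentage
mdadm --detail /dev/md3
```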