Speed up your DS920+, DS720+ or DS420+ Synology NAS with an M.2 SSD Volume


Boost your system speed by creating an SSD volume on a Synology 920+ using NVMe slots

Synology Dec 11, 2022

With the new x23 models being delivered, a lot of chat has started up again around the NVMe slots on Synology DiskStations. Until now, the only official way to use these slots was as a read or read/write cache, which was not always (a) desired, (b) needed, or (c) actually that useful, given the way an associated volume would crash if the read/write cache failed. A better option (most agreed) would be to use those NVMe SSDs as a separate volume.

The DS923+ has been given that ability right out of the box, and can be set up from DSM. Officially, Synology has not provided support for any older models, but there is a way... and that way is detailed below.


The first thing that needs to be said is that what follows below is a totally unsupported method of setting up an SSD volume on your NAS using the NVMe slots, which means that if there's an issue, Synology won't help you until you remove it. Make sure you have suitable backups of all your data before attempting this, and continue to back up data (as is good practice) once it's all working.

Also worth noting: I completed this setup with DSM 7.1 on a DS920+; however, the same steps have been reported to work since DSM 6 on various other plus-series models going back to at least 2018.


Prerequisites

  1. A Diskstation with NVMe slots (check here to see if your model is on the list)
  2. 1 or 2 NVMe drives (I'm using Samsung 970 EVO Plus 500GB drives)
  3. Physical access to your NAS to install the drives
  4. SSH access to your NAS (click the link if you don't know how to do that)
  5. Sudo/root access (normally granted to any user with admin permissions; use that user's password)
  6. Access to DSM

1st Steps

  • Power down your NAS
  • Unplug your unit from all power and data supplies
  • Access your NVMe slots and install the NVMe SSD(s); if you don't know how, your model's hardware installation guide covers this
  • Switch your NAS back on and wait for it to complete booting up

Setting up the NVMe disks for Storage Pool use

  • SSH into your NAS with an admin user
  • Type cat /proc/mdstat and note how many md numbers you have. You will need to use the next number in the sequence for all commands using md below (mine go up to md2, so in the commands below I specify md3)
  • Type or copy/paste the following lines one at a time:
1.  ls /dev/nvme*             (Lists your NVMe drives)
2.  sudo -i                   (Type this, then type your password for Super User)
3.  fdisk -l /dev/nvme0n1     (Lists the partitions on NVMe1)
4.  fdisk -l /dev/nvme1n1     (Lists the partitions on NVMe2)
5.  synopartition --part /dev/nvme0n1 12    (Creates the Syno partitions on NVMe1)
6.  synopartition --part /dev/nvme1n1 12    (Creates the Syno partitions on NVMe2)
7.  fdisk -l /dev/nvme0n1     (Lists the partitions on NVMe1)
8.  fdisk -l /dev/nvme1n1     (Lists the partitions on NVMe2)
9.  cat /proc/mdstat          (Lists your RAID arrays/logical drives)
10. mdadm --create /dev/md3 --level=1 --raid-devices=2 --force /dev/nvme0n1p3 /dev/nvme1n1p3      (Creates the RAID array; --level=1 means RAID 1, --level=0 would mean RAID 0)
11. cat /proc/mdstat          (Shows the progress of the RAID resync for your new md device)
Obviously, don't include the parenthesised ( ) parts
  • The above uses 2x NVMe cards to create a single storage pool in RAID 1. You can create a non-RAID volume instead by only running the commands for nvme0n1 and, in step 10, changing to --level=0 and --raid-devices=1 (listing only /dev/nvme0n1p3)
  • Assuming you are creating a RAID array, run the line at step 11 every five minutes or so until it shows that the resync has completed
💡
If you are not creating a RAID storage pool, step 11 will complete immediately as there is no RAID to resync. The output should simply show your latest md device alongside your existing ones
  • Once complete, run the following lines one after the other:
12. echo 0 > /sys/block/md3/queue/rotational     (Tells the kernel the array is SSD-backed, not spinning disk)
13. mkfs.btrfs -f /dev/md3    (Formats the array as btrfs)
  • When all that has been done, reboot your DiskStation and login to DSM
  • Open Storage Manager, and top left you should see Available Pool 1 under Storage. Click it
  • Click the three dots to the right of the screen, and then Online Assemble in the drop-down box
  • Click Apply on the prompt which comes up
  • Your new storage pool and volume should now appear in Storage Manager
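The "check mdstat and use the next number" step above can be scripted. Here's a minimal sketch of working out the next free md device; the mdstat variable holds a hypothetical sample of /proc/mdstat output for illustration, and the parsing assumes the standard mdN naming DSM uses:

```shell
# Hypothetical sample; on a real NAS, use: mdstat=$(cat /proc/mdstat)
mdstat="md2 : active raid5 sda5[0] sdb5[1] sdc5[2] sdd5[3]
md1 : active raid1 sda2[0] sdb2[1]
md0 : active raid1 sda1[0] sdb1[1]"

# Find the highest mdN currently in use, then add one
last=$(printf '%s\n' "$mdstat" | grep -o '^md[0-9]*' | sed 's/^md//' | sort -n | tail -1)
echo "/dev/md$((last + 1))"
```

With the sample above this prints /dev/md3; whatever it prints on your system is the device you'd substitute for md3 throughout the commands in this guide.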

You now have your own SSD volume using NVMe SSD cards.


Some notes

SSD TRIM

This is a setting (available on select NAS models including the DS920+) which aims to extend the efficiency and lifespan of your SSDs by letting the drive reclaim unused blocks, which helps maintain write performance.

To enable it in DSM, go to Storage Manager, click the three dots to the right of your SSD storage pool and choose Settings. You will see a checkbox for Enable TRIM, which also allows you to set a schedule. Select what you need and click Save.

Persistency

People claim to have been using this or a similar method since DSM 6 days. Happily, they say it persists through updates, even into DSM 7. Note, however, that a few others have not had this experience, so it remains uncertain exactly how your system will behave after an update.

It's also possible that future versions of DSM will block off this workaround. For those of you not yet willing to upgrade your machine, let's hope this doesn't happen.

Data loss

As with all changes to your NAS, especially non-supported ones, you may experience data loss. I know I said it at the top of the article, but always have suitable backups.

RAID degradation

If a drive in a regular RAID 1/5/SHR etc. array crashes, you'd just pull that drive, insert a new one, navigate to that volume and hit Repair. That's not possible in this instance, as any new NVMe card inserted into the slot is automatically treated as a cache device, not as storage.

To get your system to recognise it as storage (and therefore usable to rebuild the array), you need to SSH into your system and run the following, one at a time:

synopartition --part /dev/nvme1n1 12
mdadm --manage /dev/md3 -a /dev/nvme1n1p3
💡
The above assumes it was the drive in the second slot (nvme1n1) and that your existing NVMe RAID array is md3. Modify your commands as necessary.

After a NAS reboot, you should then be able to see a 'repair' option in Storage Manager for your degraded SSD array.
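You can also spot a degraded array from /proc/mdstat itself before touching anything: a healthy two-member RAID 1 line ends in [2/2] [UU], while a degraded one shows [2/1] with [U_] or [_U]. A small sketch of checking for that pattern; the sample variable below is hypothetical output, stood in for the real file:

```shell
# Hypothetical degraded-array snippet; on a real NAS use:
#   sample=$(cat /proc/mdstat)
sample="md3 : active raid1 nvme0n1p3[0]
      488251392 blocks super 1.2 [2/1] [U_]"

# [UU] = both members present; [U_] or [_U] = one member missing
status="healthy"
printf '%s\n' "$sample" | grep -q '\[U_\]' && status="degraded"
printf '%s\n' "$sample" | grep -q '\[_U\]' && status="degraded"
echo "md3 is $status"
```

If it reports degraded, run the two repair commands above and watch cat /proc/mdstat until the recovery reaches 100%.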



PTS

With very limited knowledge, PTS fell down the selfhosted rabbit hole after buying his first NAS in October 2020. You can find him on the Synology discord server.
