Synology_HDD_db: Storage (NVMe) SSD no longer appears in DSM after shutdown/reboot
I accidentally disconnected my NAS from power, and when turning it back on the SSD storage is no longer recognized, even though it appears when rerunning the script.
I went ahead and ran --restore and then the script again:
root@istorage:/volume1/media/incomplete# sudo -s ./syno_hdd_db.sh --restore --showedits
Synology_HDD_db v3.4.84
DS723+ DSM 7.2.1-69057-3
StorageManager 1.0.0-0017
Using options: --restore --showedits
Running from: /volume1/media/incomplete/syno_hdd_db.sh
Restored support_memory_compatibility = yes
No backup of storage_panel.js found.
Restored ds723+_host_v7.db
Restored dx1211_v7.db
Restored dx1215ii_v7.db
Restored dx1215_v7.db
Restored dx1222_v7.db
Restored dx213_v7.db
Restored dx510_v7.db
Restored dx513_v7.db
Restored dx517_v7.db
Restored dx5_v7.db
Restored fax224_v7.db
Restored fx2421_v7.db
Restored rx1211rp_v7.db
Restored rx1211_v7.db
Restored rx1213sas_v7.db
Restored rx1214rp_v7.db
Restored rx1214_v7.db
Restored rx1216sas_v7.db
Restored rx1217rp_v7.db
Restored rx1217sas_v7.db
Restored rx1217_v7.db
Restored rx1222sas_v7.db
Restored rx1223rp_v7.db
Restored rx1224rp_v7.db
Restored rx2417sas_v7.db
Restored rx410_v7.db
Restored rx415_v7.db
Restored rx418_v7.db
Restored rx4_v7.db
Restored rx6022sas_v7.db
Restored rxd1215sas_v7.db
Restored rxd1219sas_v7.db
Restore successful.
root@istorage:/volume1/media/incomplete# sudo -s ./syno_hdd_db.sh -nr --showedits
Synology_HDD_db v3.4.84
DS723+ DSM 7.2.1-69057-3
StorageManager 1.0.0-0017
Using options: -nr --showedits
Running from: /volume1/media/incomplete/syno_hdd_db.sh
HDD/SSD models found: 1
ST8000VN004-3CP101,SC60
M.2 drive models found: 1
WD Red SN700 500GB,111150WD
No M.2 PCIe cards found
No Expansion Units found
ST8000VN004-3CP101 already exists in ds723+_host_v7.db
Edited unverified drives in ds723+_host_v7.db
Added WD Red SN700 500GB to ds723+_host_v7.db
Support disk compatibility already enabled.
Disabled support memory compatibility.
Max memory is set to 32 GB.
NVMe support already enabled.
M.2 volume support already enabled.
Disabled drive db auto updates.
"ST8000VN004-3CP101": {
"SC60": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
"WD Red SN700 500GB": {
"111150WD": {
"compatibility_interval": [
{
"compatibility": "support",
"not_yet_rolling_status": "support",
"fw_dsm_update_status_notify": false,
"barebone_installable": true,
"smart_test_ignore": false,
"smart_attr_ignore": false
}
]
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.
Any suggestions?
Do you have the script scheduled to run as root at boot-up? https://github.com/007revad/Synology_HDD_db/blob/main/how_to_schedule.md
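For reference, the triggered task in Task Scheduler (user: root, event: Boot-up) only needs one line as a user-defined script. A minimal sketch, assuming the script stays at the path shown in your output above:

  # Re-apply the drive DB edits on every boot so the NVMe drive stays
  # in the compatibility database after DSM updates or restarts.
  bash /volume1/media/incomplete/syno_hdd_db.sh -nr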
Is this still an issue?
Try this debug version: syno_hdd_debug.zip
You’ll need to run it via SSH.
EDIT: You can skip this comment and the next 6 comments and jump straight to https://github.com/007revad/Synology_HDD_db/issues/237#issuecomment-1982148975
I think I know what’s going on.
In DSM 7.2.1 Update 2 or Update 3, Synology added a power limit to NVMe drives. It's actually a maximum power limit and a minimum power limit. Different Synology NAS models have different power limits.
On a real Synology you can check the power limit with:
Power limits I’ve seen in DSM 7.2.1 are:
It could be that the 4TB WD BLACK SN850X needs too much power when it's hot from being powered on for a while, so the Synology refuses to mount it after a reboot. But if you boot when the WD BLACK is cooler, DSM mounts it okay and everything is fine until you reboot.
Otherwise, it looks like the WD Black NVMe drive may be faulty.
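To test the heat theory, you could note the drive's temperature just before a warm reboot and again on a cold boot. A rough sketch, assuming smartctl is present on your DSM build and the drive shows up as /dev/nvme0 (check with ls /dev/nvme* first):

  # Show the NVMe drive's current temperature as reported by SMART
  smartctl -a /dev/nvme0 | grep -i temperature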
Can you download the following test script: https://github.com/007revad/Synology_HDD_db/blob/test/nvme_check.sh
Then run nvme_check.sh and report back the output.
Then enable the syno_hdd_db schedule, reboot, run nvme_check.sh and report back the output.
Disable the syno_hdd_db schedule for now.
I’ll think up some things for you to test with and without the script so we can see what the difference is.
Are both WD Red SN700 500GB and WD_BLACK SN850X 4000GB missing after a reboot?
What happens if you disable the syno_hdd_db schedule and then reboot?
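When the volume does go missing, it also helps to note whether the kernel still sees the drive even though DSM does not. A rough sketch, assuming the DSM 7 /run/synostorage layout and an nvme0 device name:

  ls /dev/nvme*                      # does the kernel still see the NVMe device?
  ls /run/synostorage/disks/         # does DSM's storage layer list it?
  dmesg | grep -i nvme | tail -20    # any NVMe errors logged during boot?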
BTW, if you change the schedule to use
-nre --autoupdate=3
that will get rid of the stray ANSI escape codes in the output.
Ended up following the steps here: