I need to monitor the health of several Seagate ST16000NM002G SAS HDDs hosted in a disk server running CentOS 7. As far as I understand, Seagate disks do not expose S.M.A.R.T. attributes as a deliberate management decision (see this page), and the company suggests using its SeaTools software, which according to them is more reliable than S.M.A.R.T. Sadly, it seems that only the SSD version of SeaTools is available for Linux (see this page).
Since Seagate + Linux should be a fairly common combination in modern data centers, I'm pretty sure some reliable monitoring tool for Seagate disks must be available for Linux. Can anybody provide some insight, please?
Edit: this is what I get with smartctl for the Seagate disks:
$ sudo smartctl -A /dev/sda
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1160.53.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 33 C
Drive Trip Temperature: 60 C
Manufactured in week 42 of year 2020
Specified cycle count over device lifetime: 50000
Accumulated start-stop cycles: 20
Specified load-unload count over device lifetime: 600000
Accumulated load-unload cycles: 3324
Elements in grown defect list: 0
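If it matters, these drives are SAS, and as far as I can tell from the smartctl man page, for SAS/SCSI devices smartctl reads SCSI log pages rather than the ATA attribute table, so I assume the following invocations should expose whatever else the drives keep internally (I haven't verified yet what they actually return on this model):
$ sudo smartctl -H /dev/sda             # overall health self-assessment
$ sudo smartctl -l error /dev/sda       # SCSI error counter log (read/write/verify)
$ sudo smartctl -l background /dev/sda  # background scan results, if the drive supports them
$ sudo smartctl -x /dev/sda             # everything smartctl can extract from the device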
For comparison, this is what I get for a Toshiba HDD on another machine:
$ sudo smartctl -A /dev/sdb
smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-348.12.2.el8_5.x86_64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 7019
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 34
5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0032 062 062 000 Old_age Always - 15428
10 Spin_Retry_Count 0x0033 100 100 030 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 34
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 32
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 39
194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 31 (Min/Max 15/39)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0
220 Disk_Shift 0x0002 100 100 000 Old_age Always - 0
222 Loaded_Hours 0x0032 062 062 000 Old_age Always - 15427
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0
224 Load_Friction 0x0022 100 100 000 Old_age Always - 0
226 Load-in_Time 0x0026 100 100 000 Old_age Always - 648
240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0
I would expect something like the latter in order to set up proper (even if not perfectly accurate or reliable) monitoring.
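In the meantime, this is a minimal sketch of the kind of check I could build on top of only the fields shown above for the Seagate drives. The 45 C and zero-defect thresholds are placeholders I picked, not vendor guidance, and it assumes the smartctl -A output format stays exactly as printed:
#!/bin/bash
# Rough health check using only the fields smartctl -A prints for these SAS drives.
DEV=${1:-/dev/sda}
OUT=$(sudo smartctl -A "$DEV")

# "Current Drive Temperature:     33 C"  -> value is the 4th field
TEMP=$(echo "$OUT" | awk '/Current Drive Temperature:/ {print $4}')
# "Elements in grown defect list: 0"     -> value is the 6th field
DEFECTS=$(echo "$OUT" | awk '/Elements in grown defect list:/ {print $6}')

STATUS=0
if [ -n "$TEMP" ] && [ "$TEMP" -ge 45 ]; then
    echo "WARN $DEV temperature ${TEMP} C"
    STATUS=1
fi
if [ -n "$DEFECTS" ] && [ "$DEFECTS" -gt 0 ]; then
    echo "WARN $DEV grown defect list has ${DEFECTS} entries"
    STATUS=1
fi
exit $STATUS
Something like this could run from cron or be wrapped as a monitoring check, but it obviously covers far less than a full attribute table like the Toshiba one would.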