
How to monitor Seagate HDD health under Linux?

cn flag

I need to monitor the health of several Seagate ST16000NM002G SAS HDDs hosted in a disk server running CentOS 7. As far as I understand, Seagate disks do not expose S.M.A.R.T. attributes due to a deliberate management decision (see this page), and the company suggests using its SeaTools software, which, according to Seagate, is more reliable than S.M.A.R.T. Sadly, it seems that only the SSD version of SeaTools is available for Linux (see this page).

Since Seagate+Linux should be a fairly common combination in modern data centers, I'm pretty sure that some reliable monitoring tool for Seagate disks must be available for Linux. Can anybody provide some insight, please?

Edit: this is what I get with smartctl for the Seagate disks:

$ sudo smartctl -A /dev/sda
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1160.53.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     33 C
Drive Trip Temperature:        60 C

Manufactured in week 42 of year 2020
Specified cycle count over device lifetime:  50000
Accumulated start-stop cycles:  20
Specified load-unload count over device lifetime:  600000
Accumulated load-unload cycles:  3324
Elements in grown defect list: 0

while for a Toshiba HDD on another machine:

$ sudo smartctl -A /dev/sdb
smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-348.12.2.el8_5.x86_64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       7019
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       34
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   062   062   000    Old_age   Always       -       15428
 10 Spin_Retry_Count        0x0033   100   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       34
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       32
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       39
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       31 (Min/Max 15/39)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   253   000    Old_age   Always       -       0
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       0
222 Loaded_Hours            0x0032   062   062   000    Old_age   Always       -       15427
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       648
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

I would expect something like the latter in order to be able to set up proper (even if not perfectly accurate or reliable) monitoring.

br flag

This article says they expose SMART attributes normally, but for handwavy reasons only SeaTools knows how to interpret them beyond pass/fail.

To some extent, that is true for the SMART attributes of any disk, as only the computed value is machine-readable and the interpretation of the "raw" value is somewhat undefined. Temperature_Celsius is obvious, but the integration time for the various "error rate" attributes is vendor-dependent, and so are the thresholds. SeaTools knows how to interpret raw values; that's basically it.

I doubt they'd be selling many hard disks if SMART support were missing or inaccurate; the vast majority of server disks go into RAID arrays, where SMART is the only monitoring standard available.

They might be able to stack their own analysis software on top of a RAID controller, but if it doesn't integrate with minimum effort into existing monitoring solutions that provide a dashboard for the entire datacenter, it will be a niche solution for the hobbyist market.

This is one instance of the class of problems I call "top-of-the-food-chain" problems, where multiple software components are written to be the primary user interface, while the user requires them to be integrated into a larger system.

cn flag
I understand, but smartctl -A reports no vendor-specific attributes with worst and threshold values; it just reports the current and trip temperatures and some figures like accumulated start-stop cycles and accumulated load-unload cycles, without any reference range. So how can S.M.A.R.T. infer anything (even imprecise) about the disk's health status? In other words, I'm afraid that setting up an automated health check based on S.M.A.R.T. might be useless, since the disks do not provide enough information.
br flag
@NicolaMori, SMART expresses the reference range by normalizing the values, so for vendor-independent monitoring, all you need to check is whether the current and worst values are above the threshold, and whether they are moving towards it and how fast.
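A minimal sketch of that check against the ATA attribute table above (the device path and the plain printf warning are placeholders; tracking how fast values move would additionally require storing them between runs):

#!/bin/bash
# Minimal sketch: warn when a normalized SMART value or its worst value
# has dropped to its threshold. Assumes the ATA attribute table layout
# shown above; /dev/sdb and the printf warning are placeholders.
DEV=/dev/sdb
sudo smartctl -A "$DEV" | awk '
    # attribute rows start with a numeric ID; the columns are:
    # ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    # a threshold of 000 means the attribute never fails, so skip it
    $1 ~ /^[0-9]+$/ && $6 + 0 > 0 {
        if ($4 + 0 <= $6 + 0 || $5 + 0 <= $6 + 0)
            printf "WARNING: %s (ID %s) value=%s worst=%s thresh=%s\n", $2, $1, $4, $5, $6
    }'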
br flag
FWIW, I don't really bother with most of that monitoring beyond drawing pretty graphs. I have several disks that got ejected from my RAID for failing to hold data, but that look absolutely fine in SMART. All you get from SMART is an advance warning, sometimes.
cn flag
The issue is just that with smartctl -A I don't get any attribute values, just the temperature, so there's nothing to monitor. See the edit on my original post. The disk's S.M.A.R.T. support is advertised as available and enabled.
br flag
@NicolaMori, I have researched this a bit more -- the `-A` attribute dump is ATA/SATA specific and will not work for SAS drives. The `-x` extensive dump should show a bit more information, but it works a bit differently there. The `smartctl` manual page has a few comments where it says `[ATA]` or `[SCSI]` to highlight the differences.
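For example (standard smartctl invocations; the set of SCSI log pages actually reported depends on the drive):

$ sudo smartctl -x /dev/sda             # all information smartctl can pull from the device
$ sudo smartctl -l error /dev/sda       # error counter log [ATA, SCSI]
$ sudo smartctl -l background /dev/sda  # background scan results log [SCSI only]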
cn flag
Thank you very much for your help. I tried with `-x` and also with `-d scsi`, but in the end no more info is printed in the SMART DATA SECTION. I guess that SMART just behaves differently for SAS drives, and that the only available metrics are start-stop cycles, load-unload cycles and elements in the defect list, as reported in my initial post. I guess I'll simply look at the synthetic health status (smartctl -H); it seems the result is not worth the effort here. Thanks again!
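For reference, something like this minimal sketch is what I have in mind for the -H check (assuming the SAS drives print a "SMART Health Status:" line, as mine do; the /dev/sd? glob and the logger call are placeholders to adapt):

#!/bin/bash
# Minimal sketch: poll the overall health verdict of each disk and log a
# warning when it is not OK. Assumes SCSI/SAS drives, which report a
# "SMART Health Status:" line; /dev/sd? and logger are placeholders.
for dev in /dev/sd?; do
    status=$(sudo smartctl -H "$dev" | awk -F': *' '/SMART Health Status/ {print $2}')
    if [ "$status" != "OK" ]; then
        logger -t smart-check "health check failed for $dev: ${status:-no status reported}"
    fi
done

smartd, shipped with the same smartmontools package, can run the equivalent check on a schedule and notify on failure, which may be even simpler.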