
HDP cluster + JournalNodes get out of sync


We have an HDP cluster, version 2.6.5.

When we look at the NameNode logs, we see the following warnings:

2023-02-20 15:56:37,731 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594484455 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594484455-0000000193594600017
2023-02-20 15:58:31,377 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594757835-193594757835 took 1498ms
2023-02-20 15:58:40,617 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594600018 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594600018-0000000193594769398
2023-02-20 16:00:39,037 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594895192-193594895192 took 1371ms
2023-02-20 16:00:42,839 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594769399 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594769399-0000000193594899457
2023-02-20 16:01:43,962 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193594954980-193594954980 took 1329ms
2023-02-20 16:02:44,799 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193594899458 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193594899458-0000000193595017147
2023-02-20 16:02:47,129 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595018764-193595018764 took 1321ms
2023-02-20 16:03:52,763 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595106645-193595106646 took 1344ms
2023-02-20 16:04:46,965 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000193595017148 -> /hadoop/hdfs/journal/hdfsha/current/edits_0000000193595017148-0000000193595169050
2023-02-20 16:04:56,276 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595175233-193595175233 took 1678ms
2023-02-20 16:06:01,067 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595252052-193595252052 took 1265ms
2023-02-20 16:07:06,447 WARN  server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595320796-193595320796 took 1273ms

In our HDP cluster, the HDFS service includes 2 NameNodes and 3 JournalNodes; the cluster itself includes 736 DataNode machines, all managed by the HDFS service.

We want to understand: what is the reason for the following warning?

 server.Journal (Journal.java:journal(398)) - Sync of transaction range 193595018764-193595018764 took 1321ms

And how can we avoid these messages with a proactive solution?

The only solution we have found so far is the following:

http://www.hadoopadmin.co.in/hdfs/standby-namenode-is-faling-and-only-one-is-running/

RESOLUTION:
Increase the values of the following JournalNode timeout properties:
dfs.qjournal.select-input-streams.timeout.ms = 60000 
dfs.qjournal.start-segment.timeout.ms = 60000 
dfs.qjournal.write-txns.timeout.ms = 60000
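
For reference, a sketch of what that change might look like in hdfs-site.xml. Note this is only our reading of the page above: the property names are copied from it, the 60000 ms values are that page's suggestion (not values we have verified for our workload), and on an Ambari-managed HDP cluster these would presumably be set under HDFS > Configs > Custom hdfs-site rather than by editing the file by hand:

<!-- Hypothetical hdfs-site.xml fragment based on the resolution quoted above.
     Values are the linked page's suggestion (60 s), not tuned for our cluster. -->
<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>

If we understand correctly, these timeouts are read by the quorum journal client inside the NameNode, so the NameNodes (and possibly the JournalNodes) would need a restart for the change to take effect, but we would like confirmation of that as well.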