We want to understand the impact of running with a very low value of the net.core.netdev_max_backlog kernel parameter, well below the recommended setting.
On our Linux RHEL machines the value of this parameter is 1000.
Since these are Hadoop machines (big data cluster), we saw that the best practice is to increase the value to 65536,
as described at:
https://datasayans.wordpress.com/2015/11/04/performance-kernel-tuning-for-hadoop-environment/
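For reference, a minimal sketch of how we check and change the value on RHEL; the drop-in file name 99-hadoop-network.conf below is our own example, not taken from the linked article:

# current value (RHEL default is 1000)
sysctl net.core.netdev_max_backlog

# apply the recommended value at runtime (not persistent across reboot)
sysctl -w net.core.netdev_max_backlog=65536

# persist it via a sysctl drop-in file and reload
echo 'net.core.netdev_max_backlog = 65536' > /etc/sysctl.d/99-hadoop-network.conf
sysctl -p /etc/sysctl.d/99-hadoop-network.conf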
Background:
The kernel parameter "netdev_max_backlog" is the maximum size of the receive queue. Received frames are stored in this queue after being taken from the ring buffer on the NIC. Use a high value for high-speed cards to prevent losing packets. In real-time applications such as a SIP router, a long queue must be paired with a fast CPU, otherwise the data in the queue becomes stale.
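As we understand it, when this queue overflows the kernel silently drops the frame and increments a per-CPU drop counter, visible as the second (hexadecimal) column of /proc/net/softnet_stat. A minimal sketch to read those counters (assumes gawk, the default awk on RHEL, for strtonum):

# one line per CPU; a non-zero dropped count means the backlog queue overflowed
awk '{ printf "CPU%d dropped=%d\n", NR-1, strtonum("0x" $2) }' /proc/net/softnet_stat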
So, what are the consequences when this kernel parameter is set to an insufficient value?
Other references:
https://gist.github.com/leosouzadias/e37cd189794bb78de502ac25cb605576
https://community.cloudera.com/t5/Community-Articles/OS-Configurations-for-Better-Hadoop-Performance/ta-p/247300
https://www.senia.org/2016/02/28/hadoop-and-redhat-system-tuning-etcsysctl-conf/
https://mapredit.blogspot.com/2014/11/hadoop-server-performance-tuning.html
https://gist.github.com/phaneesh/38b3d80b38cc76abb1d010f598fbc90a
https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/config/configRecommendedSettings.html
PDF: https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/big-data/cloudera-intel-cisco-hadoop-benchmark.pdf