Timing Things
It may seem obvious, but as Falsehoods Programmers Believe About Time points out, developers who lack knowledge of (or support for) a monotonic time source often use the system clock to measure how long a process takes. If the system clock changes between the two measurements, the result is incorrect: a value that is:
- slightly larger than the correct value;
- negative (which can crash a system that relies on a positive value);
- hugely larger than the correct value, due to the sign bit of a negative number combined with an incorrect type conversion (a signed -1 has the same memory representation as the maximum unsigned integer value, 0xFFFFFFFF).
#include <stdio.h>

int main(void) {
    // this C code prints 4294967295
    printf("%u\n", (unsigned int) -1);
    // this comparison is true
    if (0xFFFFFFFF == (unsigned int) -1) {
        printf("equal\n");
    }
}
Cache Invalidation
As a wise man once said:
There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton
Distributed systems often use a short time for cache invalidation, ranging from conservative values like 60 s or more (which most dynamic DNS uses) down to a few milliseconds.
Using NTP, you can keep all computers on a local network synchronized to well below a millisecond of the correct UTC time, or to within a few milliseconds over the internet.
With that in mind, even a sub-millisecond call to a cache server on the local network (like Redis) can itself be cached in local memory for nanosecond response times.
However, there is a thing called a Leap Second, which makes this kind of aggressive millisecond caching very hard: the reference clock can jump one second ahead of or behind the current clock.
A difference of +1 s or -1 s could mean that a value we think is correct is not actually the most recent one, or that a still-correct value is treated as if it were already too old, causing the system to query the source of truth too often, slowing it down or even crashing it.