I am Mallet.
And I know the system.
Let's assume I can provoke transactions at ~ one-second intervals,
and observe the resulting GUID integers.
So that's $g_i$ incrementing by $10^9 \pm \epsilon$ nanoseconds per sample, pretty regular.
Now suppose I provoke the system-under-test at ~ two-second intervals,
and then larger ones.
Some of my observed deltas will land exactly on a multiple of $10^9$ nanoseconds.
Despite the XOR, I believe a pattern will fall out of the observations,
revealing the constants, even $c$.
This is a little like Alvin Fernald inspecting a large number
of retail tags in a department store and inferring the
date and wholesale price fields.
Just use AES on GUIDs as they go out the door, storing them in an RDBMS.
Then when you see them again, it's an easy indexed lookup to recover the timestamp.
Postgres does a nice job of storing them in binary form.
Of course, if you're doing that you might as well have just
sent random bits out the door, knowing that the index will
order them nicely when they return to the roost.
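The "random bits plus an indexed table" version is easy to sketch; here a plain dict stands in for the indexed RDBMS table, and the helper names are illustrative, not from any particular library:

```python
# Minimal stdlib-only sketch: the GUID carries no structure at all;
# the server-side table is the only decoder ring.
import os
import time

table = {}                              # stands in for an indexed RDBMS table

def issue_guid() -> bytes:
    guid = os.urandom(16)               # 128 random bits out the door
    table[guid] = time.time_ns()        # persist the issue time server-side
    return guid

def recover_timestamp(guid: bytes) -> int:
    return table[guid]                  # the "easy indexed lookup"

g = issue_guid()
assert recover_timestamp(g) == table[g]
```

Nothing about the GUID itself is recoverable by Mallet; all the meaning lives behind the index.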
You can even precompute a bunch of GUIDs spaced at regular intervals,
and peel them off as you need them, ignoring any that happened to expire.
What you really want is an ordering relationship.
The existing setup has a global variable: a clock that is NTP synchronized.
Discard that, in favor of a global counter.
Increment it by a random draw from a Poisson distribution.
(Yes, you will have to choose a $\lambda$ value.)
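A stdlib-only sketch of that counter, using Knuth's Poisson sampler (the $\lambda$ of 20 here is an arbitrary placeholder, yours to pick):

```python
# Global counter advanced by a Poisson-distributed step per GUID.
import math
import random

LAM = 20.0                              # chosen lambda: mean step per GUID

def poisson(lam: float) -> int:
    """Knuth's sampler: multiply uniforms until the product drops below e^-lambda."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

counter = 0

def next_guid() -> int:
    global counter
    counter += max(1, poisson(LAM))     # max(1, ...) keeps it strictly increasing
    return counter

a, b = next_guid(), next_guid()
assert b > a                            # ordering survives; the step size is noisy
```

The `max(1, ...)` guard matters: a Poisson draw can be zero, and two GUIDs with equal values would break the ordering relationship.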
Now Eve can see the happens-before relationship between GUIDs.
Is that bad in your use case?
There is (almost) no information about
"delta time between transactions" revealed over short timescales.
You may wish to obscure your system's transaction rate.
For example it may peak near lunch time and go quiescent at midnight.
So pick some hourly rate that exceeds the actual peak, and
ensure that the counter keeps up with that, even in the
absence of real transactions arriving.
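One hedged way to implement that padding is a pacemaker thread that keeps the counter at or above a cover rate, regardless of real traffic; the rate and tick interval below are made-up placeholders:

```python
# Rate camouflage: a background thread advances the counter at a constant
# cover rate even when no real transactions arrive.
import threading
import time

COVER_RATE = 1000                       # counter ticks per second, above true peak
counter = 0
lock = threading.Lock()

def pacemaker(stop: threading.Event) -> None:
    global counter
    target = counter
    while not stop.wait(0.01):          # wake every 10 ms
        target += COVER_RATE // 100     # where the counter should be by now
        with lock:
            if counter < target:        # real traffic may already be ahead
                counter = target

stop = threading.Event()
threading.Thread(target=pacemaker, args=(stop,), daemon=True).start()
time.sleep(0.1)
stop.set()
assert counter > 0                      # advanced despite zero transactions
```

Real transactions still bump the counter themselves; the pacemaker only tops it up, so an observer sees roughly the cover rate either way.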
The handler of a transaction can either produce or consume
a GUID that it is sending out the door.
It focuses on the transaction, while another thread on a separate core
worries about GUID bookkeeping and persistence.
Given an in-memory queue it's easy for plodding throughput-oriented
bookkeeping to keep up with bursty latency-sensitive transaction handling.
Just keep a few extras in the FIFO queue, use double-buffering, whatever.
If the handler produces fresh GUIDs, enqueuing them in a
queue with sufficient capacity, then the bookkeeper simply
reads and persists them.
If instead the handler consumes GUIDs from a fixed-capacity queue,
the bookkeeper is always enqueuing another or blocked awaiting capacity.
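The consume-side pattern is a few lines with a bounded `queue.Queue`; the comment marks where real persistence would go, and the counter-as-GUID is just a stand-in:

```python
# "Handler consumes, bookkeeper produces": the bookkeeper keeps a bounded
# FIFO topped up; the latency-sensitive handler just get()s.
import itertools
import queue
import threading

pool = queue.Queue(maxsize=64)          # a few extras in the FIFO

def bookkeeper() -> None:
    for n in itertools.count(1):        # stand-in for real GUID generation
        # persist n here (Redis, SQL, a file) before it can be handed out
        pool.put(n)                     # blocks whenever the pool is full

threading.Thread(target=bookkeeper, daemon=True).start()

def handle_transaction() -> int:
    return pool.get()                   # cheap: no crypto, no I/O in-line

first = handle_transaction()
assert first == 1
```

`put()` blocking on a full queue is exactly the "blocked awaiting capacity" state; the handler never waits unless it outruns the whole pool.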
You might possibly care to discard "stale" ones after an idle stretch,
or inject heart-beat transactions.
Kafka or the various Message Queue systems let the bookkeeper
reside on a nearby host.
There are many options for persistence, so your use case will
dictate the choice. You can use central or clustered Redis,
in-core or with a disk log. There's SQL and NoSQL.
Or fall back to the venerable filesystem.
In a similar "sharing work across threads" vein,
AES is not expensive; it won't cost you latency!
It will cost a background worker CPU cycles to fill a buffer
with encryptions of some counter value, but then
reading and XORing those bytes costs the transaction
handler almost nothing, right? Just don't let the buffer
run dry and you're OK. If you like, write 1 or 2 gig
to the filesystem, and mmap() that file.
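The precomputed-keystream trick can be sketched as follows. To keep the sketch stdlib-only, SHA-256 over a key and block index stands in for AES-CTR; a real deployment would use an AES library for the fill step, and the buffer size here is arbitrary:

```python
# A worker fills a buffer with pseudorandom blocks ahead of time;
# the transaction handler only XORs.
import hashlib
import os

KEY = os.urandom(16)

def fill_buffer(nblocks: int) -> bytes:
    """Background work: precompute nblocks * 16 bytes of keystream."""
    out = bytearray()
    for i in range(nblocks):
        out += hashlib.sha256(KEY + i.to_bytes(8, "big")).digest()[:16]
    return bytes(out)

keystream = fill_buffer(1024)           # done off the critical path
offset = 0

def encrypt_counter(counter: int) -> bytes:
    """Hot path: one XOR against already-computed bytes."""
    global offset
    pad = keystream[offset:offset + 16]
    offset += 16                        # never reuse keystream bytes
    return bytes(a ^ b for a, b in zip(counter.to_bytes(16, "big"), pad))

g1, g2 = encrypt_counter(1), encrypt_counter(2)
assert g1 != g2 and len(g1) == 16
```

The one hard rule, AES or not, is never reusing keystream bytes; hence the monotonic `offset`.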
You have plenty of bits for collision resistance.
If strict happens-before relationships are important to you,
then you might want to allocate some bits for a guid-producer-id,
e.g. {P1, P2, ..., Pn}.
Then you can only do causality comparisons among GUIDs
that came from the same producer.
The nanosecond-timestamp scheme would be stuck with this, too,
and we might talk about {C1, C2, ..., Cn} for the clock IDs.
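Reserving producer-id bits is simple bit packing; the 8-bit split below is an arbitrary illustration, not a recommendation:

```python
# High bits identify the producer, the rest carry its private counter;
# happens-before is only defined between GUIDs from the same producer.
PRODUCER_BITS = 8

def make_guid(producer: int, counter: int) -> int:
    return (producer << (128 - PRODUCER_BITS)) | counter

def happens_before(a: int, b: int):
    if a >> (128 - PRODUCER_BITS) != b >> (128 - PRODUCER_BITS):
        return None                     # different producers: incomparable
    return a < b

p1a, p1b, p2a = make_guid(1, 10), make_guid(1, 11), make_guid(2, 5)
assert happens_before(p1a, p1b) is True
assert happens_before(p1a, p2a) is None
```

Returning `None` for the cross-producer case keeps "incomparable" distinct from "not before", which is the honest answer a partial order gives.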
NTP is very nice, but there will still be some amount of skew,
which is trouble for causality inference.