If I'm not mistaken, you're using `dd` for buffering purposes. A loop might not be needed in your case: `dd` will keep reading and feeding blocks of the specified size as long as you don't set a `count` (which instructs `dd` to exit after fulfilling the specified number of reads/writes) or `iflag=nonblock` (you want blocking I/O so that `dd` can successfully open the named pipe and keep reading from it). Use it like so:
dd if=myFifo iflag=fullblock bs=65536 2> /dev/null | redis-cli -x PUBLISH myChannel
In this case, it should exit only when the end of the input is reached, i.e. when the writer to the named pipe terminates or closes the pipe.
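As a quick sanity check (a sketch only: `wc -c` stands in for `redis-cli -x PUBLISH myChannel`, and the FIFO path is made up), you can confirm that `dd` drains the pipe and exits once the writer closes it:

```shell
# Sketch: dd drains a FIFO and exits on EOF (the writer closing the pipe).
# 'wc -c' replaces 'redis-cli -x PUBLISH myChannel' so this runs anywhere.
dir=$(mktemp -d)
mkfifo "$dir/myFifo"

# Background writer: 128 KiB, then close the pipe (EOF for the reader).
head -c 131072 /dev/zero > "$dir/myFifo" &

bytes=$(dd if="$dir/myFifo" iflag=fullblock bs=65536 2>/dev/null | wc -c)
wait
rm -r "$dir"
echo "$bytes"
```

Since the writer sends exactly 131072 bytes, `dd` reads two full 64 KiB blocks and then exits on its own, with nothing lost.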
Or to keep the pipe constantly open waiting for writes, use it like so:
tail -c +1 -F myFifo | dd iflag=fullblock bs=65536 2> /dev/null | redis-cli -x PUBLISH myChannel
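A minimal sketch of why the `tail -c +1 -F` front end helps (the paths are illustrative, and a plain output file stands in for the `dd | redis-cli` stage): the pipeline survives the writer closing and reopening the FIFO, which is where bare `dd` would have exited.

```shell
# Sketch: 'tail -F' keeps reading the FIFO across writer restarts, so
# the downstream stage sees both payloads. Assumes GNU tail; a file
# stands in for the 'dd | redis-cli' stage.
dir=$(mktemp -d)
mkfifo "$dir/myFifo"

tail -c +1 -F "$dir/myFifo" > "$dir/out" 2>/dev/null &
tailpid=$!

printf 'first,' > "$dir/myFifo"   # writer #1 opens, writes, closes
printf 'second' > "$dir/myFifo"   # writer #2: tail is still reading
sleep 2                           # give tail's follow loop time to catch up
kill "$tailpid"
wait "$tailpid" 2>/dev/null

result=$(cat "$dir/out")
rm -r "$dir"
echo "$result"
```

Both writes land downstream even though the pipe was closed in between, which is the whole point of fronting the pipeline with `tail -F`.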
Or, if your application expects end of stream/pipe (e.g. `EOF` or `close_write`, which is, by the way, not the best choice for a streaming application), use GNU parallel instead, like so:
tail -c +1 -F myFifo | parallel -j 1 --pipe --block 64k -k 'redis-cli -x PUBLISH myChannel'
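For comparison, here is roughly the per-chunk loop that the `parallel` invocation replaces (a sketch under assumptions: chunks are counted with `wc -c` instead of being published, and it cuts exact 64 KiB blocks, whereas `parallel --pipe` by default splits at record/line boundaries near `--block`):

```shell
# Sketch of what 'parallel --pipe --block 64k' does: carve stdin into
# ~64 KiB blocks and run one command invocation per block. Here each
# block is merely counted rather than published to Redis.
chunks=$(head -c 131072 /dev/zero | {
  n=0
  while b=$(dd bs=65536 count=1 iflag=fullblock 2>/dev/null | wc -c); [ "$b" -gt 0 ]; do
    n=$((n + 1))      # one 'redis-cli -x PUBLISH' would run per block
  done
  echo "$n"
})
echo "$chunks"        # 131072 bytes / 65536 per block = 2 invocations
```

With 128 KiB of input and 64 KiB blocks, the loop fires twice; `parallel -j 1 -k` does the same bookkeeping for you while keeping the chunks in order.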
This resembles your loop, but only where you actually need one, and in a controlled, resource-aware way. It also keeps the named pipe open between writes, preserves every byte of the stream, and shortens the pipeline.