How to implement "generators" like $RANDOM?
ksh93 has disciplines, which are typically used for this kind of thing.
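As a minimal ksh93 sketch of that discipline approach (the variable name incr and the helper counter _incr_n are purely illustrative): a name.get discipline function runs each time $name is expanded, and whatever it stores in .sh.value is what the expansion yields:
typeset incr            # the variable whose expansion we intercept
typeset -i _incr_n=0    # hidden counter (illustrative name)
function incr.get {
  # runs on every expansion of $incr; a separate counter avoids touching
  # incr itself inside its own discipline
  .sh.value=$((++_incr_n))
}
echo $incr $incr        # should print: 1 2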
With zsh, you could hijack the dynamic named directory feature. Define for instance:
zsh_directory_name() {
  case $1 in
    (n)
      case $2 in
        (incr) reply=($((++incr)))
      esac
  esac
}
And then you can use ~[incr] to get an incremented $incr each time:
$ echo ~[incr]
1
$ echo ~[incr] ~[incr]
2 3
Your approach fails because in head -1 /tmp/ints, head opens the fifo, reads a full buffer, prints one line, and then closes it. Once it is closed, the writing end sees a broken pipe.
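To make that concrete, the setup being discussed looks roughly like this (reconstructed for illustration; the writer used in the question may differ):
mkfifo /tmp/ints
seq infinity > /tmp/ints &   # some writer feeding the fifo
head -1 /tmp/ints            # prints 1, but reads a whole buffer and then
                             # closes the fifo, so the writer dies of SIGPIPE
head -1 /tmp/ints            # blocks forever: nothing has the fifo open for writing any more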
Instead, you could either do:
$ fifo=~/.generators/incr
$ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
$ seq infinity > $fifo &
$ exec 3< $fifo
$ IFS= read -rneu3
1
$ IFS= read -rneu3
2
There, we leave the reading end open on fd 3, and read reads one byte at a time, not a full buffer, to be sure to read exactly one line (up to the newline character).
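If you want call sites to look more like a generator, you could wrap that read in a small function (the name next_int is only illustrative):
next_int() {
  # read one line from the fifo held open on fd 3 and print it
  local n
  IFS= read -ru3 n && print -r -- "$n"
}
next_int    # -> 3 (continuing the session above)
next_int    # -> 4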
Or you could do:
$ fifo=~/.generators/incr
$ (umask 077 && mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo)
$ while true; do echo $((++incr)) > $fifo; done &
$ cat $fifo
1
$ cat $fifo
2
That time, we instantiate a pipe for every value, which allows returning data containing any arbitrary number of lines. However, in that case, as soon as cat opens the fifo, the echo and the loop are unblocked, so more than one echo may run by the time cat reads the content and closes the pipe (causing the next echo to instantiate a new pipe).
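For instance, depending on timing you may see something like this (illustrative output; it is not deterministic):
$ cat $fifo
1
2
3
$ cat $fifo
4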
A workaround could be to add some delay, for instance by running an external echo as suggested by @jimmij, or by adding a sleep, but that would still not be very robust. Or you could recreate the named pipe after each echo:
while
  mkfifo $fifo &&
  echo $((++incr)) > $fifo &&
  rm -f $fifo
do : nothing
done &
That still leaves short windows where the pipe doesn't exist (between the unlink() done by rm and the mknod() done by mkfifo), causing cat to fail; very short windows where the pipe has been instantiated but no process will ever write to it again (between the write() and the close() done by echo), causing cat to return nothing; and short windows where the named pipe still exists but nothing will ever open it for writing (between the close() done by echo and the unlink() done by rm), where cat will hang.
You could remove some of those windows by doing it like:
fifo=~/.generators/incr
(
  umask 077
  mkdir -p $fifo:h && rm -f $fifo && mkfifo $fifo &&
    while
      mkfifo $fifo.new &&
      {
        mv $fifo.new $fifo &&
        echo $((++incr))
      } > $fifo
    do : nothing
    done
) &
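Usage on the reader side is unchanged; any process that opens the fifo once gets the next value (the wrapper name next_incr is purely for convenience):
next_incr() { cat $fifo; }   # $fifo as set above; illustrative wrapper
next_incr    # should print 1
next_incr    # should print 2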
That way, the only problem is if you run several cat invocations at the same time (they all open the fifo before our writing loop is ready to open it for writing), in which case they will share the echo output.
I would also advise against creating fixed-name, world-readable fifos (or any file for that matter) in world-writable directories like /tmp, unless it's a service meant to be exposed to all users on the system.
If you want to execute code whenever the value of a variable is read, you can't do that inside zsh itself. The RANDOM variable (like other similar special variables) is hard-coded in the zsh source code. You can, however, define similar special variables by writing a module in C. Many of the standard modules define special variables.
You can use a coprocess to make a generator.
coproc { i=0; while echo $i; do ((++i)); done }
for ((x=1; x<=3; x++)) { read -p n; echo $n; }
However, this is pretty limited because you can only have one coprocess. Another way to progressively get output from a process is to redirect from a process substitution.
exec 3< <(i=0; while echo $i; do ((++i)); done)
for ((x=1; x<=3; x++)) { read n <&3; echo $n; }
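When you are done with such a generator, you can simply close the descriptor; the background writer then dies on its next write (a small follow-up, assuming nothing else uses fd 3):
exec 3<&-    # close the reading end; the writer gets SIGPIPE on its next echo and exits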
Note that head -1 does not work here, because it reads a whole buffer, prints out what it likes, and exits. The data that's been read from the pipe remains read; this is an intrinsic property of pipes (you can't stuff data back in). The read builtin avoids this issue by reading one byte at a time, which allows it to stop as soon as it finds the first newline, but is very slow (of course that doesn't matter if you're just reading a few hundred bytes).