Run every bash command through a function or wrapper
Bash isn't quite that flexible: you can run commands/functions before and after each command line, but capturing output in bash alone is non-trivial.
Some problems include:
- writing to a pipe is not the same as writing to a terminal or file (e.g. ioctl(), fseek(), fstat()), and many programs detect this (like man, and even ls). You won't want to use this with an interactive editor, like vi.
- special commands like "time", pipelines and even shell loops can complicate solutions, and pipelines alter return codes
- you can't easily "hijack" every possible command with an alias, and such an alias would break command line options/arguments, because alias only lexically replaces the command
- you can't just (over)write a single file, i.e. is there a race when you issue a command to inspect the last output?
- if you want to capture stdout and stderr you can have an interleaving problem
- if you use a pipe, remember bash creates subshells for pipes and certain other redirections; one immediate side effect concerns variable assignment: assignments will happen in a subshell instead (and then be thrown away, leaving the variable unchanged in your current shell), as the example after this list shows. "." (source) and exit are similarly affected. (In the option below using readline, Ctrl-j can be used to run such commands.)
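That subshell pitfall is easy to demonstrate:

x=0
x=1 | cat        # the assignment runs in a subshell created for the pipeline
echo "$x"        # prints 0: the parent shell's x is unchanged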
This is (roughly) what I use for a similar problem: functions called +, ++ etc., so I can prefix just the commands whose output I want to capture/process, e.g. to the X selection or clipboard:
function +() (
  set -f                                     # no double expand
  eval "$@" | tee >( tr -s "\n" " " | xclip )
  return ${PIPESTATUS[0]}                    # real rc, not tee's
)
e.g. + find /tmp -name "*.pdf" -mtime -30
It's imperfect, but mostly does what I need.
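The ++ variant isn't shown above; a minimal sketch, assuming it should differ from + only by targeting the X clipboard rather than the primary selection (xclip -selection clipboard):

function ++() (
  set -f                                     # no double expand
  eval "$@" | tee >( tr -s "\n" " " | xclip -selection clipboard )
  return ${PIPESTATUS[0]}                    # real rc, not tee's
)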
Above, @mosvy recommends script; this creates a new tty (terminal) in which a program is run, and so sidesteps all these problems. It writes all output to the parent tty, but also records it to a file. It records either an entire session or a single command, so you still have the problem of how to automagically invoke it.
Instead, use screen (because, unlike script, it's possible to interact with and control a running instance) and bash's PROMPT_COMMAND:
if [[ "$TERM" -eq "screen" && -n "$STY" ]]; then
PROMPT_COMMAND=screenlog
SCREENLOG="${HOME}/screenlog"
fi
function screenlog() {
  [[ -n "$STY" ]] && {                        # only when running under screen
    screen -S "$STY" -X log off               # stop logging
    [[ -f "${SCREENLOG:=$HOME/screenlog}" ]] && mv "$SCREENLOG" /tmp/last_command
    screen -S "$STY" -X logfile "$SCREENLOG"  # point screen at a fresh logfile
    screen -S "$STY" -X log on                # resume logging
  }
  return 0
}
If the above is in your .bashrc, you should be able to just start screen. The function is invoked each time bash displays a new command prompt. It first checks that it is running under screen, then issues screen commands to stop logging, moves the last output to /tmp/last_command, and finally tells screen to start logging again.
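With that in place, once a command has completed and the next prompt has been drawn, its output can be inspected:

ls /etc
cat /tmp/last_command    # the previous prompt, the typed command and its output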
One caveat is that because PROMPT_COMMANDs are invoked before the prompt is displayed, both the prompt and the command (including edits such as backspaces) will appear at the start of the log file. One way around that is to emulate "pre-exec" with a DEBUG trap to run the above instead, as in Modify all bash commands through a program before executing them.
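In outline, that pre-exec emulation might look like this (a sketch, assuming no other DEBUG trap is in use; note the trap fires once per simple command, so a compound line will rotate the log more than once):

unset PROMPT_COMMAND                       # rotate from the trap instead of the prompt hook
function preexec() {
  [[ -n "$COMP_LINE" ]] && return          # skip during programmable completion
  screenlog                                # rotate the log just before the command runs
}
trap preexec DEBUG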
If you run "interactive" or terminal-aware programs (e.g. man or top) you'll get the terminal control sequences recorded too, which is useful if you want to play back output (though script/scriptreplay are better in this respect).
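For completeness, that pair works like this (script's -t option writes timing data to stderr):

script -t 2>/tmp/timing /tmp/typescript    # record a session, with timing data
scriptreplay /tmp/timing /tmp/typescript   # replay it at the original speed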
Under the hood, bash uses readline; this allows keys to be bound to readline functions, to sequences of other keys, or to shell functions (but not to combinations of the three types). It's relatively easy to do:
bind '"\C-e": end-of-line'
bind '"\C-j": accept-line'
bind '"\C-m": "\C-e | tee /tmp/last_command\C-j"'
The first two lines are just paranoia to make sure Ctrl-E and Ctrl-J are bound as expected (if you have a terminal without accept-line bound, you don't have a terminal anymore...). The last line replaces Ctrl-M (aka "Return"), which is usually bound to accept-line, with the sequence:
- Ctrl-E (end-of-line)
- | tee /tmp/last_command (literal text)
- Ctrl-J (accept-line)
This is visible on screen, and in the history, as the issued command itself is changed. The downside is that this simple approach isn't "self-aware" and blindly alters every command, even one recalled from history with the change already applied.
Here's a more complicated version which uses a function to modify commands as they are dispatched:
bind '"\C-e": end-of-line'
bind '"\C-j": accept-line'
bind -x '"\e\C-a": _recorder'
function _record() {
tee /tmp/this_command
[[ -f /tmp/this_command ]] && mv /tmp/this_command /tmp/last_command
}
function _recorder() {
local text="| _record"
[[ -z "${READLINE_LINE}" ]] && return 0
[[ "$READLINE_LINE" =~ "${text}"$ ]] || READLINE_LINE+=" $text"
return 0
}
bind '"\C-m": "\C-e\e\C-a\C-j"'
Step 2 is replaced by a bash function _recorder, bound to Esc,Ctrl-A (which you don't need to type; it just needs to be bound to a keystroke, because a macro can contain only keystrokes).
This function can be arbitrarily smart, here it handles the output file race, and checks that the pipe isn't already present before altering the input line. You could also wrap the entire command line in a subshell if other redirections are present (though you'll quickly find parsing bash command lines in bash to be, ehm, complicated).
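A hedged sketch of that subshell-wrapping idea, naively treating any < or > in the line as "other redirections present" (an assumption; no attempt at real parsing):

function _recorder() {
  local text="| _record"
  [[ -z "${READLINE_LINE}" ]] && return 0
  [[ "$READLINE_LINE" =~ "${text}"$ ]] && return 0    # already instrumented
  if [[ "$READLINE_LINE" == *[\<\>]* ]]; then         # naive redirection check
    READLINE_LINE="( ${READLINE_LINE} )"              # group so redirections apply inside the subshell
  fi
  READLINE_LINE+=" $text"
  return 0
}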
You could further(!) compound the hackery by fixing the history on the fly:
function myprompt() {
  local _seq _cmdline
  local text="| _record"
  read -r _seq _cmdline < <(HISTTIMEFORMAT= history 1)  # previous command
  [[ "${_cmdline}" =~ (.*)" ${text}"$ ]] && {
    _cmdline="${BASH_REMATCH[1]}"
    history -d "$_seq"       # delete the modified entry
    history -s "$_cmdline"   # restore the original
  }
}
PROMPT_COMMAND=myprompt
The readline approach has a further non-trivial flaw: non-simple commands which can normally be entered over multiple lines (e.g. while, for) are broken by this approach.
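One hedged mitigation (an assumption, not something the above implements): have _recorder leave lines that open a compound command untouched, so they are submitted unmodified:

function _recorder() {
  local text="| _record"
  [[ -z "${READLINE_LINE}" ]] && return 0
  case "$READLINE_LINE" in
    while*|for*|until*|if*|case*) return 0 ;;         # don't touch compound commands
  esac
  [[ "$READLINE_LINE" =~ "${text}"$ ]] || READLINE_LINE+=" $text"
  return 0
}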