So you accidentally used the output redirection operator > instead of a pipe |.
To prevent existing files from being overwritten by the > redirection, use the noclobber option in bash or any POSIX-like shell (it also exists in (t)csh, where the feature actually originated, though there you run set noclobber instead of set -o noclobber / set -C). Then, if you need to force the overwrite of a file, use the >| redirection operator (>! in (t)csh).
Example:
$ echo abc > file
$ set -o noclobber
$ echo xyz > file
bash: file: cannot overwrite existing file
$ echo xyz >| file
$ cat file
xyz
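For (t)csh users, here is a minimal sketch of the same dance; the exact wording of the error message may differ between csh variants:
% echo abc > file
% set noclobber
% echo xyz > file
file: File exists.
% echo xyz >! file
% cat file
xyz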
By the way, you can check the current settings with set -o:
$ set -o
...
monitor on
noclobber on
noexec off
...
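If you only care about a single option, you can filter that output, for example:
$ set -o | grep noclobber
noclobber       on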
Sadly, I suspect you'll need to rewrite the overwritten script. (If you have backups, this is the time to get them out. If not, I would strongly recommend you set up a backup regime for the future. Lots of options are available, but they're off topic for this answer.)
I find that putting executables in a separate directory, and adding that directory to the PATH, is helpful. This way I don't need to reference the executables by an explicit path, so a script's name is far less likely to end up on a command line where a mistyped > can clobber it. My preferred directory for personal (private) scripts is "$HOME"/bin, and it can be added to the program search path with PATH="$HOME/bin:$PATH". Typically this would be added to the shell startup scripts .bash_profile and/or .bashrc; a sketch follows.
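A minimal sketch of that setup, using .bashrc (pick .bash_profile instead if that suits your login setup better):
$ mkdir -p "$HOME"/bin
$ echo 'PATH="$HOME/bin:$PATH"' >> "$HOME"/.bashrc
$ . "$HOME"/.bashrc            # or start a new shell
$ printf '%s\n' "$PATH"        # $HOME/bin should now appear at the front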
Finally, there's nothing stopping you from removing write permission for yourself on all executable programs:
$ touch some_executable.py
$ chmod a+x,a-w some_executable.py # chmod 555, if you prefer
$ ls -l some_executable.py
-r-xr-xr-x+ 1 roaima roaima 0 Jun 25 18:33 some_executable.py
$ echo "The hunting of the Snark" > ./some_executable.py
-bash: ./some_executable.py: Permission denied
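Note that >| only overrides noclobber, not file permissions, so with the permission bits above even a forced redirection fails. To deliberately replace such a protected script you first have to restore write permission; a minimal sketch:
$ echo "The hunting of the Snark" >| ./some_executable.py
-bash: ./some_executable.py: Permission denied
$ chmod u+w some_executable.py     # temporarily allow writes
$ echo '#!/usr/bin/env python3' > some_executable.py
$ chmod u-w some_executable.py     # re-protect the script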
I also strongly advise keeping the important scripts in a git repository, synced to a remote (a fancy self-hosted platform will do), as @casey's comment says. This way you are protected from human mistakes like this one: you can simply revert the file to its previous working state and execute it again.
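A minimal sketch of that workflow, assuming the scripts live in "$HOME"/bin and using a placeholder remote URL:
$ cd "$HOME"/bin
$ git init
$ git add some_executable.py
$ git commit -m 'known-good version'
$ git remote add origin git@example.com:me/scripts.git   # placeholder URL
$ git push -u origin HEAD
Then, after a clobbering accident, git checkout -- some_executable.py restores the last committed version.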