How to install executables

To call a program by its name, shells search the directories in the $PATH environment variable. In Debian, the default $PATH for your user should include /home/YOUR-USER-NAME/bin (i.e. ~/bin).

First make sure the directory ~/bin exists or create it if it does not:

mkdir -p ~/bin
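To confirm that ~/bin is actually picked up, a quick check (on Debian, the stock ~/.profile only adds ~/bin to $PATH at login if the directory already exists, so you may need to log out and back in after creating it):

```shell
# Check whether ~/bin is on $PATH. If it is not, log out and back in,
# or run:  export PATH="$HOME/bin:$PATH"
case ":$PATH:" in
  *":$HOME/bin:"*) echo "on PATH" ;;
  *)               echo "not on PATH" ;;
esac
```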

You can symlink binaries into that directory to make them available to the shell:

mkdir -p ~/bin
ln -s /home/user/Downloads/VSCode-linux-x64/Code ~/bin/vscode

That will allow you to run vscode on the command line or from a command launcher.
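As a sanity check, the whole approach can be simulated in a throwaway directory; the paths and the fake "Code" binary below are stand-ins mirroring the example above, not the actual VS Code layout:

```shell
#!/bin/sh
set -e
# Simulate the symlink approach without touching your real ~/bin.
tmp=$(mktemp -d)
mkdir -p "$tmp/bin" "$tmp/app"

# Stand-in for the downloaded binary:
printf '#!/bin/sh\necho "app ran"\n' > "$tmp/app/Code"
chmod +x "$tmp/app/Code"

# Symlink it under a memorable name:
ln -s "$tmp/app/Code" "$tmp/bin/vscode"

# With the bin directory on $PATH, the shell finds it by name:
PATH="$tmp/bin:$PATH"
out=$(vscode)
echo "$out"    # app ran
```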

Note: You can also copy binaries to the $PATH directories but that can cause problems if they depend on relative paths.

In general, though, it's always preferable to properly install software using the means provided by the OS (apt-get, deb packages) or the build tools of a software project. This will ensure that dependent paths (like start scripts, man pages, configurations etc.) are set up correctly.

Update: Reflecting Thomas Dickey's comments and Faheem Mitha's answer, here is what I usually do for software that comes as a tarball with a top-level binary and expects to be run from its own directory:

Put it in a sane location (in order of standards compliance: /opt, /usr/local, or a directory in your home, e.g. ~/build) and create an executable wrapper script in a $PATH location (e.g. /usr/local/bin or ~/bin) that changes to that location and executes the binary:

#!/bin/sh
cd "$HOME/build/directory"
exec ./top-level-binary "$@"

Since this emulates changing to that directory and executing the binary manually, it makes it easier to debug problems like non-existent relative paths.
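A self-contained sketch of setting that up; the directory and program names ("demo-app", "top-level-binary") are examples only, and $base stands in for your home directory:

```shell
#!/bin/sh
set -e
base=$(mktemp -d)
build="$base/build/demo-app"    # where the tarball was unpacked
bindir="$base/bin"              # a $PATH directory such as ~/bin
mkdir -p "$build" "$bindir"

# Stand-in for the tarball's top-level binary:
printf '#!/bin/sh\necho "running from $PWD"\n' > "$build/top-level-binary"
chmod +x "$build/top-level-binary"

# Generate the wrapper shown above; note the escaped \$@ so it is
# expanded when the wrapper runs, not when it is written:
cat > "$bindir/demo-app" <<EOF
#!/bin/sh
cd "$build"
exec ./top-level-binary "\$@"
EOF
chmod +x "$bindir/demo-app"

"$bindir/demo-app"    # prints the build directory it ran from
```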


According to TLDP, /opt might be a good place for this kind of software. I've used it myself to store some printer-related tools, and the "dynamic" version of Skype (as kba said, "terminal support" can then be achieved by setting the PATH variable accordingly).

More generally, I tend to use /opt to "install" proprietary software packaged as an executable, but that's probably just me. Besides, I tend to simply avoid this kind of software, since I usually have no certainty as to what it's going to do once I run it.

Another reason why I chose /opt is because it is usually meant for third-party, independent code, which does not rely on any file outside of its /opt/'package' directory (and other opt directories such as /etc/opt).

Under no circumstances are other package files to exist outside the /opt, /var/opt, and /etc/opt hierarchies except for those package files that must reside in specific locations within the filesystem tree in order to function properly. [...] Generally, all data required to support a package on a system must be present within /opt/'package', including files intended to be copied into /etc/opt/'package' and /var/opt/'package' as well as reserved directories in /opt.

One advantage of releasing source code is that people get to configure the compilation process, providing custom library/headers paths based on their system's specifics. When a developer decides to release code as an executable, that advantage is lost. IMHO, at this point, the developer is no longer allowed to assume that his/her program's dependencies will be available (which is why everything should be packaged alongside the executable).

Any package to be installed here must locate its static files (i.e. extra fonts, clipart, database files) in a separate /opt/'package' or /opt/'provider' directory tree (similar to the way in which Windows will install new software to its own directory tree C:\Windows\Program Files\"Program Name"), where 'package' is a name that describes the software package and 'provider' is the provider's LANANA registered name.

For more information, I would also suggest reading this other U&L question, which deals with the differences between /opt and /usr/local. I would personally avoid /usr/local in this case, especially if I'm not the one who built the program I'm installing.


It is entirely possible, and in fact quite easy, to create a distribution binary package from a binary zip archive or tarball, as in your example of Visual Studio Code.

Yes, Linux distribution binary packages like debs and rpms are customarily generated from source, but they don't have to be. And it is often (though not always) possible to arrange things so that the resulting distribution binary package installs things in the "right" places to conform to distribution policy.

In the case of a random proprietary tarball, if there was a way to properly install the software, e.g. an install target in a makefile, then that could be used with the distribution packaging machinery. Otherwise, this might involve "manually" mapping files to the "right" places, which could be a lot of work. While creating such a package might seem a weird thing to do, it would still have one of the major benefits of package management, namely clean installs and uninstalls. And of course such a package would never be accepted into any Linux distribution worth the name, but that's not your question.
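As a rough sketch of what that looks like with Debian tooling; the package name, version, and layout are invented, and a real package would also want dependency declarations and a copyright file:

```shell
#!/bin/sh
set -e
# Minimal .deb skeleton around an unpacked binary tree. Everything under
# DEBIAN/ is metadata; everything else lands verbatim on the target system.
pkg=$(mktemp -d)/vscode-local_1.0-1
mkdir -p "$pkg/opt/vscode-local" "$pkg/usr/local/bin" "$pkg/DEBIAN"
# (copy the tarball contents into $pkg/opt/vscode-local here,
#  and a wrapper script into $pkg/usr/local/bin)

# The control file is the only mandatory metadata:
cat > "$pkg/DEBIAN/control" <<'EOF'
Package: vscode-local
Version: 1.0-1
Architecture: amd64
Maintainer: you <you@example.com>
Description: Locally repackaged binary tarball
EOF

# Build the package; install with "dpkg -i". dpkg records every file,
# so "dpkg -r vscode-local" later removes them all cleanly.
if command -v dpkg-deb >/dev/null 2>&1; then
    dpkg-deb --build "$pkg"
fi
```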