Differences between inactive vs disabled and active vs enabled services

The man page for systemd has the info that you're looking for.

Excerpt

systemd provides a dependency system between various entities called "units". Units encapsulate various objects that are relevant for system boot-up and maintenance. The majority of units are configured in unit configuration files, whose syntax and basic set of options is described in systemd.unit(5), however some are created automatically from other configuration or dynamically from system state.

Units may be 'active' (meaning started, bound, plugged in, ... depending on the unit type, see below), or 'inactive' (meaning stopped, unbound, unplugged, ...), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called 'activating', 'deactivating').

A special 'failed' state is available as well which is very similar to 'inactive' and is entered when the service failed in some way (process returned error code on exit, or crashed, or an operation timed out). If this state is entered the cause will be logged, for later reference.

Note that the various unit types may have a number of additional substates, which are mapped to the five generalized unit states described here.
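As a quick aside, you can see both the generalized state and the unit-type-specific substate for any unit with systemctl show. A minimal sketch, where foo.service is just a placeholder for whatever unit you care about:

    $ systemctl show -p ActiveState,SubState foo.service
    ActiveState=active
    SubState=running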

Breakdown

So if you've read the above and don't really understand the difference, here it is, in a nutshell.

  • enabled - a service (unit) is configured to start when the system boots.
  • disabled - a service (unit) is configured to not start when the system boots.
  • active - a service (unit) is currently running.
  • inactive - a service (unit) is currently not running, but may get started, i.e. become active, if something attempts to make use of the service.
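You can check both of these independently for a given unit. A minimal sketch, with foo.service standing in for a real unit name:

    $ systemctl is-enabled foo.service
    enabled
    $ systemctl is-active foo.service
    inactive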

inactive

This last one can seem the most perplexing, but think of systemd along the same lines as xinetd: it can manage your services for you and start them up on demand. So while the services are "off" they are in the inactive state, but once started they become active.
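For example, with socket activation the socket unit listens on behalf of the service, and the service itself only becomes active when the first client connects. This is just a sketch; foo.socket/foo.service and port 1234 are hypothetical:

    $ sudo systemctl start foo.socket     # the socket is now listening
    $ systemctl is-active foo.service
    inactive
    $ nc localhost 1234                   # first connection triggers activation of foo.service
    $ systemctl is-active foo.service
    active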

This state can also occur when a service (unit) has been enabled but not yet manually started. So the service lies "dormant" in the stopped or failed state until either it is started manually or the system goes through a reboot, at which point it becomes active because it is enabled.
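A unit in that situation typically shows up as enabled but inactive (dead) in systemctl status. Output abridged, and nginx.service is only an example; substitute your own unit (exact wording varies by systemd version):

    $ systemctl status nginx.service
    ● nginx.service
         Loaded: loaded (/lib/systemd/system/nginx.service; enabled; ...)
         Active: inactive (dead)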


When you enable/disable a service, you essentially tell systemd whether or not to start it automatically at boot.
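Under the hood, enabling usually just creates a symlink from the target's .wants/ directory to the unit file, and disabling removes it again. The exact paths and wording differ between distributions and systemd versions; foo.service is a placeholder:

    $ sudo systemctl enable foo.service
    Created symlink /etc/systemd/system/multi-user.target.wants/foo.service → /usr/lib/systemd/system/foo.service.
    $ sudo systemctl disable foo.service
    Removed /etc/systemd/system/multi-user.target.wants/foo.service.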

So, systemctl enable lxdm will set LXDM as the DM. However, it will not start it right away.
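If you do want both at once, reasonably recent systemd versions accept --now, which enables the unit and starts it in the same command (and the reverse for disable):

    $ sudo systemctl enable --now lxdm
    $ sudo systemctl disable --now lxdm    # disable and stop in one go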

On the other hand, active/inactive (and possibly failed) tells you the current state of the service. After running systemctl start lxdm, LXDM actually runs, and its status is active.

Normally, when you first install a service, you test it by starting it. If it checks out, you then enable it. That way, you avoid hanging your system during boot.
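In command form, that workflow looks roughly like this, with foo.service standing in for the service you just installed:

    $ sudo systemctl start foo.service     # run it now, without touching boot behaviour
    $ systemctl status foo.service         # confirm it is active and behaving
    $ sudo systemctl enable foo.service    # only then make it start at every boot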