Is it always a good idea to store time in UTC, or is this a case where storing local time is better?

I think that to answer that question, we should consider the benefits of using UTC to store timestamps.

I personally think the main benefit is that the time is always (well, mostly) guaranteed to be consistent: whenever the timezone changes or DST kicks in, you don't jump backwards or forwards in time. This is especially useful in filesystems, logs and so on. But is it necessary in your application?
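To make that concrete, here is a minimal sketch using Python's standard `zoneinfo` module (Europe/Warsaw is just an arbitrary example zone). During the autumn clock shift the same local wall-clock time occurs twice, while the UTC instants behind it stay unambiguous and an hour apart:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

warsaw = ZoneInfo("Europe/Warsaw")

# On 2024-10-27 clocks in Warsaw go back from 03:00 to 02:00, so the
# wall-clock reading 02:30 happens twice; PEP 495's `fold` flag
# distinguishes the first occurrence from the second.
first = datetime(2024, 10, 27, 2, 30, fold=0, tzinfo=warsaw)
second = datetime(2024, 10, 27, 2, 30, fold=1, tzinfo=warsaw)

print(first.utcoffset(), second.utcoffset())  # 2:00:00 1:00:00
# Identical on the wall clock, yet a full hour apart in UTC:
print(second.astimezone(timezone.utc) - first.astimezone(timezone.utc))  # 1:00:00
```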

Think of two things. Firstly, think about the moment of the DST clock shift itself. Is it likely that your events will occur between 2 AM and 3 AM on the day the clocks change? What should happen then?
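For instance (again a sketch with Python's `zoneinfo`; the zone and date are arbitrary), a local time that falls into the spring-forward gap simply does not exist, and the library will quietly map it onto a different real instant:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

warsaw = ZoneInfo("Europe/Warsaw")

# On 2024-03-31 Warsaw clocks jump from 02:00 straight to 03:00,
# so 02:30 local time never happens.  zoneinfo does not raise here;
# it silently resolves the nonexistent time to a real instant:
ghost = datetime(2024, 3, 31, 2, 30, tzinfo=warsaw)
print(ghost.astimezone(timezone.utc))  # 2024-03-31 01:30:00+00:00
# Round-tripping shows the event has "moved" to 03:30 local:
print(ghost.astimezone(timezone.utc).astimezone(warsaw))  # 03:30:00+02:00
```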

Secondly, will the application be subject to actual timezone changes? In other words, are you going to fly with it from London to Warsaw and change your computer's timezone accordingly? What should happen in that case?
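If the answer is yes, UTC handles that case for free: one stored instant converts cleanly to whichever zone the machine is currently in (a sketch, again with Python's standard library):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One stored UTC instant, two local wall-clock readings:
instant = datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc)
print(instant.astimezone(ZoneInfo("Europe/London")))  # 2024-06-01 10:00:00+01:00
print(instant.astimezone(ZoneInfo("Europe/Warsaw")))  # 2024-06-01 11:00:00+02:00
```

Had you stored the London wall-clock time 10:00 instead, reading it back on a machine switched to Warsaw time would silently be an hour off.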

If you answered no to both of those questions, then you're better off with local time. It will make your application simpler. But if you answered yes at least once, then I think you should give it more thought.


And that was all about the database. The other thing is the time format used internally by the application, and that should depend on what you will actually be doing with that time.

You mentioned exposing the time via an API. Will the application query the database on every request? If you store the time internally as UTC, you will either need to do that, or otherwise ensure that on a DST or timezone change the cached times are adjusted or pruned.
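One way to sidestep the cache-invalidation problem is to keep only UTC in memory and localize at the last moment, on every response. A sketch (the `localize` helper and its parameters are invented for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def localize(stored_utc: datetime, client_tz: str) -> str:
    """Convert a stored UTC timestamp to the client's zone at request time.

    Because nothing localized is ever cached, a DST switch or a
    timezone change can never leave stale wall-clock times behind.
    """
    return stored_utc.astimezone(ZoneInfo(client_tz)).isoformat()

event_utc = datetime(2024, 10, 26, 23, 30, tzinfo=timezone.utc)
print(localize(event_utc, "Europe/Warsaw"))  # 2024-10-27T01:30:00+02:00
```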

Will it do anything with the time itself, like printing "the event will occur in 8 hours" or suspending itself for roughly that long? If so, then UTC will probably be better. Of course, you need to keep all the aforementioned issues in mind.
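The reason UTC wins here is that durations computed from UTC instants reflect real elapsed time, whereas naive local wall-clock arithmetic is silently wrong across a DST transition. A sketch (dates chosen to straddle the 2024 spring-forward in Warsaw):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

warsaw = ZoneInfo("Europe/Warsaw")

now_utc   = datetime(2024, 3, 30, 22, 0, tzinfo=timezone.utc)  # 23:00 local
event_utc = datetime(2024, 3, 31, 6, 0, tzinfo=timezone.utc)   # 08:00 local

print(event_utc - now_utc)  # 8:00:00, the real elapsed time, safe to sleep on

# The same computation on naive local wall clocks picks up the skipped hour:
now_local   = now_utc.astimezone(warsaw).replace(tzinfo=None)
event_local = event_utc.astimezone(warsaw).replace(tzinfo=None)
print(event_local - now_local)  # 9:00:00, an hour too long
```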


I like to think of it this way:

Computers don't care about time as a human-understandable representation. They don't care about time zones, date and time string formatting or any of that. Only humans care about how to interpret and represent time.

Let the database do what it's good at: storing time as a number, either a UNIX epoch timestamp (the number of seconds elapsed since 1970-01-01 UTC) or a UTC timestamp with no timezone or daylight-saving information attached. Only concern yourself with representing time in a human-understandable way when you must. That means in your application logic, reporting system, console application, or any other place a human will be viewing the data.
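As a minimal sketch of that split in Python (the zone is only an example for the display side):

```python
import time
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store: a plain number with no timezone or DST baggage.
created_at = int(time.time())

# Display: convert only at the edge, for the human who is looking.
dt = datetime.fromtimestamp(created_at, tz=timezone.utc)
print(dt.astimezone(ZoneInfo("Europe/Warsaw")).strftime("%Y-%m-%d %H:%M %Z"))
```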


The following wouldn't apply to a truly multi-tenant global SaaS product, so this opinion is aimed at developers of simple "Line of Business" apps.

Storing as UTC is fine, but there is one requirement that causes pain if you do this: "Can you write me a report that shows how many of X occur per day?"

If you store dates as UTC, this requirement will cause pain: you need to write timezone-adjustment code on the application server and in your reporting, and every ad-hoc query you run against data with date criteria will need to factor this in.
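The nub of the problem is that "per day" means the local calendar day, not the UTC one, so every grouping has to convert first. A sketch in Python (zone and sample data invented for illustration):

```python
from collections import Counter
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

warsaw = ZoneInfo("Europe/Warsaw")

# Two events stored in UTC; 23:30 UTC on the 26th is already
# 01:30 on the 27th in Warsaw.
events_utc = [
    datetime(2024, 10, 26, 10, 0, tzinfo=timezone.utc),
    datetime(2024, 10, 26, 23, 30, tzinfo=timezone.utc),
]

per_day_utc   = Counter(e.date() for e in events_utc)
per_day_local = Counter(e.astimezone(warsaw).date() for e in events_utc)

print(per_day_utc)    # both counted on 2024-10-26
print(per_day_local)  # split across 2024-10-26 and 2024-10-27
```

The same conversion has to be repeated in SQL for every ad-hoc report, which is where the ongoing pain comes from.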

If your application meets the following criteria:

  1. Each instance is based in a single timezone.
  2. Timezone transitions usually fall outside office hours, or you don't care about the "durations" of things precisely enough for a missing hour or so to matter.

then I suggest you store the datetime as local date time, whilst using a library that isolates you from server timezone configuration issues (e.g. Noda Time in the .NET world).
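Noda Time is .NET-specific; purely as an illustration of the same idea in Python (the `BUSINESS_TZ` config value is invented), the point is to pin the zone in application configuration rather than trusting whatever the server's OS clock happens to be set to:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The business timezone is explicit configuration, never inherited
# from the host machine's settings (hypothetical config value):
BUSINESS_TZ = ZoneInfo("Europe/London")

# Capture "now" as wall-clock time in the business zone, then strip
# the tzinfo to get the naive local datetime to store:
now_local = datetime.now(tz=BUSINESS_TZ)
stored = now_local.replace(tzinfo=None)
print(stored)
```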