datetime fractional seconds

datetime in Adaptive Server Enterprise (and in SQL Server, since the two share a common code base that included the datetime type) is stored in 8 bytes: 4 bytes for the date and 4 bytes for the time. You can see this by looking at the binary representation of a datetime:
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1753-01-01T00:00:00.000'), 0)
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1900-01-01T00:00:00.000'), 0)
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1900-01-02T00:00:00.000'), 0)
SELECT CONVERT(varbinary(8), CONVERT(datetime, '3000-12-31T23:59:59.997'), 0)
The four binary values are:
0xFFFF2E4600000000 0x0000000000000000 0x0000000100000000 0x000622D3018B81FF
Take the following, which shows where the 1/300 of a second comes into play:
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1900-01-01T00:00:00.000'), 0)
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1900-01-02T00:00:00.000'), 0)
SELECT CONVERT(varbinary(8), CONVERT(datetime, '1900-01-02T00:00:00.003'), 0)
The difference between the 2nd and 3rd values is one:
0x0000000000000000 0x0000000100000000 0x0000000100000001
So dates are stored in the most significant 4 bytes, and times are stored in the least significant 4 bytes. Although 4 bytes of storage could hold a precision finer than 1/300 of a second (roughly 3 milliseconds), that is all the precision that is actually used.
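As an illustration, the layout described above can be decoded outside the server. The following Python sketch is my own reconstruction (the function name, and reading the printed hex string big-endian, are my assumptions), and it reproduces the values from the queries above:

```python
from datetime import datetime, timedelta

def decode_datetime(raw: bytes) -> datetime:
    # High 4 bytes: signed day count relative to 1900-01-01
    # (reading the hex exactly as CONVERT prints it).
    days = int.from_bytes(raw[:4], "big", signed=True)
    # Low 4 bytes: ticks of 1/300 of a second since midnight.
    ticks = int.from_bytes(raw[4:], "big")
    return datetime(1900, 1, 1) + timedelta(
        days=days, microseconds=round(ticks * 1_000_000 / 300))

print(decode_datetime(bytes.fromhex("FFFF2E4600000000")))  # 1753-01-01 00:00:00
print(decode_datetime(bytes.fromhex("000622D3018B81FF")))  # 3000-12-31 23:59:59.996667
```

Note that the last value decodes to .996667 seconds, i.e. tick 25919999 = 86399.996667 s, even though it was entered as .997 — the 1/300-second granularity at work.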
In SQL Server, you can use the datetime2(7) data type to get precision down to 7 fractional digits, i.e. an accuracy of 100 ns:
SELECT CONVERT(varbinary(16), CONVERT(datetime2(7), '1900-01-01T00:00:00.0000000'), 0)
SELECT CONVERT(varbinary(16), CONVERT(datetime2(7), '1900-01-01T00:00:00.0000001'), 0)
SELECT CONVERT(varbinary(16), CONVERT(datetime2(7), '1900-01-01T00:00:00.0000002'), 0)
The storage format of these values is slightly different, but you can still see the binary value incrementing:
0x0700000000005B950A 0x0701000000005B950A 0x0702000000005B950A
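For comparison, the datetime2 varbinary can be split the same way: a leading precision byte, then a little-endian time field in 100 ns units, then a 3-byte little-endian day count since 0001-01-01. This sketch is based on my reading of the published format (the helper name is my own):

```python
def decode_datetime2_p7(raw: bytes) -> tuple[int, int]:
    # Returns (days since 0001-01-01, 100 ns ticks since midnight)
    # for a datetime2(7) varbinary as SQL Server prints it.
    assert raw[0] == 7                          # leading precision byte
    ticks = int.from_bytes(raw[1:6], "little")  # 5-byte time field
    days = int.from_bytes(raw[6:9], "little")   # 3-byte date field
    return days, ticks

for h in ("0700000000005B950A", "0701000000005B950A", "0702000000005B950A"):
    print(decode_datetime2_p7(bytes.fromhex(h)))
# day count 693595 corresponds to 1900-01-01; the tick count goes 0, 1, 2
```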
I'm using the Sybase ISQL client: Sybase CTISQL Utility/15.7/P-EBF20996 SP100/DRV.15.7.0.10.
As an aside, a small difference exists in how Adaptive Server Enterprise (ASE) and SQL Server round datetime values. In SQL Server, milliseconds are rounded to increments of .000, .003, and .007, whereas in ASE they are rounded to .000, .003, and .006; why the two differ is not documented as far as I can tell. You can see this on ASE by running this query:
SELECT CONVERT(varchar(50), CONVERT(datetime, N'2017-01-01T23:59:59.997'), 139);
Which returns:
Jan 1 2017 23:59:59.996000
Whereas on SQL Server the equivalent code, SELECT CONVERT(varchar(50), CONVERT(datetime, N'2017-01-01T23:59:59.997'), 109);, returns:
Jan 1 2017 11:59:59:997PM
Based on this answer, it is clear that the exact encoding is opaque and counter-intuitive. It is also clear that the statement in the Sybase documentation to the effect that datetime can hold values accurate to 1/300 second is misleading and imprecise. What the documentation should have said is that datetime has a resolution / precision / granularity (pick your favorite term) of approximately 1/300 of a second, i.e. approximately 3 ms.
I ran a test to corroborate this. On a table with approximately 1000 datetime values (randomly generated, as they depend on user input), I wanted to see how many distinct millisecond values I would encounter:
SELECT COUNT(*) FROM
(
    SELECT DISTINCT y.millis FROM
    (
        SELECT RIGHT(RTRIM(LTRIM(x.submissionDate)), 6) AS millis FROM
        (
            SELECT convert(char(32), submission_date, 139) AS submissionDate FROM someTable
        ) x
    ) y
) z
The innermost SQL statement in the query above was also used in my question and yields the values I've posted there. The result of the composite query was 237, which is consistent with the above.
Furthermore, the SQL Server documentation (which is applicable, as the datetime implementation is part of the code base shared between Sybase ASE and SQL Server) has the following to say:
[Image: excerpt from the SQL Server datetime documentation]
On the same page it is also noted that:
datetime is not ANSI or ISO 8601 compliant.
Overall, I think using such a hairy type is an anti-pattern. I cannot fathom why anyone would not simply use an INT to hold seconds since the Epoch (yes, I know about 2038), or a BIGINT to hold milliseconds, or even microseconds or nanoseconds, since the Epoch.
update following exhaustive testing
I was able to find some time to do exhaustive testing on this matter and managed to get to the bottom of it. Here are my findings (tested against Sybase ASE 15.7; I assume the same results apply to SQL Server):
I've verified experimentally that the server keeps datetime values supplied to it by the client with a resolution of 1/300 of a second. I created a table with the following DDL:
DROP TABLE DatetimeTest;
CREATE TABLE DatetimeTest (
d DATETIME NOT NULL
);
… and filled it (using a driver program written in Java) with 1000 values at increments of 1/1000 of a second (one millisecond). Executing the following query:
SELECT COUNT(*) FROM
(
    SELECT DISTINCT y.millis FROM
    (
        SELECT RIGHT(RTRIM(LTRIM(x.d)), 6) AS millis FROM
        (
            SELECT convert(char(32), d, 139) AS d FROM DatetimeTest
        ) x
    ) y
) z
… yielded exactly 300.
The above establishes that the server is able to hold only 300 distinct values for the fractional second part. This does not prove that the actual values kept are multiples of 1/300 of a second but it's a reasonable indication.
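The 300-distinct-values result can be reproduced with the same hypothetical round-to-nearest-tick model (again, my assumption about the rounding direction, not documented behavior): 1000 consecutive millisecond values collapse onto exactly 300 ticks.

```python
# Hypothetical model: ms -> nearest 1/300 s tick, via integer round-half-up,
# modulo 300 with the carry going into the whole seconds.
ticks = {(3 * m + 5) // 10 % 300 for m in range(1000)}
print(len(ticks))  # 300
```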
What the above implies is that any fractional seconds supplied by the client to the server are potentially stored lossily. I therefore did some further tests to establish exactly how lossy the storage is. What I found is the following:
- time values with second precision in the client can be stored losslessly in the server (all right, there was never any doubt about that)
- time values with tenth-of-a-second precision in the client can be stored losslessly in the server
- time values with hundredth-of-a-second precision in the client can be stored losslessly in the server
- only 10% of the time values with millisecond precision in the client can be stored losslessly in the server (those corresponding to full hundredths of a second). For the remaining 90% of the values, a difference of up to (and including) 3 ms between what the client supplied and what the server stored can be observed.
The above is consistent with the hypothesis that the server stores the time component as an integer number of 1/300ths of a second, since:
- any tenth-of-a-second value can be exactly expressed as a fraction of the form n/300
- any hundredth-of-a-second value can be exactly expressed as a fraction of the form n/300
- only 10% of the possible millisecond values can be exactly expressed as a fraction of the form n/300
Finally, it is clear that there's something wrong with the convert function, as the query:
SELECT convert(char(32), d, 139) AS d FROM DatetimeTest
… reports zeros beyond the third decimal place, which is clearly not correct (the stored value's fractional part is a multiple of 1/300 of a second, which does not terminate at three decimal digits) and is confusing. But I don't believe it disproves the above.
update II
Weirdly, the 90% of non-exact server mappings for millisecond values reported by my test code applies only when I use the Sybase jConnect JDBC driver in my tests. With the open-source jTDS JDBC driver, an improved (and wonderfully round) 70% applies. Basically, the test logic is that I create a value with millisecond precision on the client side, store it in the server, and then read it back from the client to see what I get. This arrangement allows the client-side driver to gently correct some server values on the way back (when client-side code reads from the database).
Apparently the jTDS driver accounts more cleverly for the idiosyncratic way in which the Sybase server stores the fractional-second component, and does so for all 300 of the distinct server values (compensating for the rounding error in the reverse direction, from server to client, as necessary), not just for the 100 that happen to correspond exactly to some millisecond value in the 0-999 range.
At any rate, this doesn't change the facts on the server side and is only relevant if one has control over the client-side language and driver that will be used to pull datetime values out of the server.
I had thought I'd skip that part to keep things simple, but then decided to add it for completeness. It also corroborates the model of server-side datetime storage, as all of the numbers (300, 90%, 70%) fit the narrative. But life's too short; time to move on.