From decimal to timestamp with MySQL

When working with timestamps, one question that often arises is the precision of those timestamps. Most software is good enough with a precision up to the second, and that's easy. But in some cases, like working on metering, a finer precision is required.

I don't know exactly why¹, and it makes me suffer every day, but OpenStack is really tied to MySQL (and its clones). It hurts because MySQL is a very poor solution if you want to leverage your database to actually solve problems. But that's how life is, unfair. And in the context of the projects I work on, it boils down to this: we can't afford not to support MySQL.

So here we are, needing to work with MySQL while requiring timestamps with a finer precision than just seconds. And guess what: MySQL did not support that until 2011.

No microseconds in MySQL? No problem: DECIMAL!

MySQL 5.6.4 (released in 2011), a beta version of MySQL 5.6 (hello MySQL, ever heard of Semantic Versioning?), brought microsecond precision to timestamps. But the first stable version supporting that, MySQL 5.6.10, was only released in 2013. So for a long time, there was a problem without any solution.

The obvious workaround, in this case, is to reassess your technology choices, discover that PostgreSQL has supported microsecond precision for at least a decade, and call the problem solved.

This is not what happened in our case: in order to support MySQL, a workaround had to be found. And that is what was done in the Ceilometer project, using a DECIMAL type instead of DATETIME.

The DECIMAL type takes 2 arguments: the total number of digits you need to store, and how many of that total will be used for the fractional part. Knowing that MySQL's internal storage uses 1 byte for 2 digits, 2 bytes for 4 digits, 3 bytes for 6 digits and 4 bytes for 9 digits, and that the integer and fractional parts are stored independently, you want to pick numbers of digits that fit those boundaries in order to maximize your storage space.

This is why Ceilometer picked 14 for the integer part (9 digits on 4 bytes and 5 digits on 3 bytes) and 6 for the decimal part (3 bytes).

Wait. It's stupid because:

  • DECIMAL(20, 6) implies that you use 14 digits for the integer part, which, using the epoch as a reference, lets you encode timestamps up to (10^14) - 1, i.e. year 3170843. I am certain Ceilometer won't last that far.
  • 14 digits is 9 + 5 digits in MySQL, which is 7 bytes, the same size that is used for 9 + 6 digits. So you could have had DECIMAL(21, 6) for the same storage space (and gone up to year 31690708, which is a nice bonus, right?).

Well, I guess the original author of the patch did not read the documentation entirely (DECIMAL(20, 6) happens to be the example used on the MySQL documentation page, so I imagine it has just been copy-pasted blindly?).

The best choice for this use case would have been DECIMAL(17, 6), which allows storing 11 digits for the integer part (5 bytes), supporting timestamps up to (10^11) - 1 (year 5138), and 6 digits for the fractional part (3 bytes), using only 8 bytes in total per timestamp.
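If you want to double-check that arithmetic, here is a small Python sketch (the helper names are mine, not anything from the Ceilometer code) that applies MySQL's documented DECIMAL storage rule and the epoch-to-year approximation used above:

# MySQL DECIMAL storage rule: each full group of 9 digits costs 4 bytes,
# and the leftover digits cost 0, 1, 1, 2, 2, 3, 3, 4, 4 or 4 bytes.
# Integer and fractional parts are stored independently.
LEFTOVER_BYTES = [0, 1, 1, 2, 2, 3, 3, 4, 4, 4]


def side_storage(digits):
    """Bytes used by one side (integer or fractional) of a DECIMAL."""
    full_groups, leftover = divmod(digits, 9)
    return full_groups * 4 + LEFTOVER_BYTES[leftover]


def decimal_storage(precision, scale):
    """Total bytes used by a DECIMAL(precision, scale) column."""
    return side_storage(precision - scale) + side_storage(scale)


def max_year(integer_digits):
    """Approximate last year reachable with that many digits of epoch seconds."""
    return int(1970 + (10 ** integer_digits - 1) / (365.25 * 24 * 3600))


print(decimal_storage(20, 6), max_year(14))  # 10 bytes, year 3170843
print(decimal_storage(21, 6), max_year(15))  # 10 bytes, year 31690708
print(decimal_storage(17, 6), max_year(11))  # 8 bytes, year 5138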

Nonetheless, this workaround has been implemented using a SQLAlchemy custom type and works as expected:

import sqlalchemy


class PreciseTimestamp(sqlalchemy.types.TypeDecorator):
    """Represents a timestamp precise to the microsecond."""

    impl = sqlalchemy.DateTime

    def load_dialect_impl(self, dialect):
        if dialect.name == 'mysql':
            return dialect.type_descriptor(
                sqlalchemy.types.DECIMAL(precision=20,
                                         scale=6,
                                         asdecimal=True))
        return dialect.type_descriptor(self.impl)


Microseconds in MySQL? Damn, migration!

As I said, MySQL 5.6.4 brought microsecond precision to the table (pun intended). Therefore, it's a great time to migrate away from this hackish format to the brand new one.

First, be aware that the default DATETIME type has no microsecond precision: you have to specify how many fractional digits you want as an argument. To support microseconds, you should therefore use DATETIME(6).
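For reference, here's a minimal SQLAlchemy sketch of how such a column can be declared (the table and column names are made up, this is not the Ceilometer code): with_variant() gives MySQL its DATETIME(6) while other back-ends keep the generic DateTime type.

import sqlalchemy as sa
from sqlalchemy.dialects import mysql

# Generic DateTime everywhere, but DATETIME(6) when the dialect is MySQL.
precise_datetime = sa.DateTime().with_variant(mysql.DATETIME(fsp=6), 'mysql')

metadata = sa.MetaData()
mytable = sa.Table(
    "mytable", metadata,
    sa.Column("mytime", precise_datetime, nullable=False),
)

The migration script further below achieves the same effect with a TypeDecorator instead, since that class is also reused by the ORM models.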

If we were using a great RDBMS, let's say, hum, PostgreSQL, we could do that very easily, see:

postgres=# CREATE TABLE foo (mytime decimal);
CREATE TABLE
postgres=# \d foo
     Table "public.foo"
 Column │  Type   │ Modifiers
────────┼─────────┼───────────
 mytime │ numeric │
postgres=# INSERT INTO foo (mytime) VALUES (1473254401.234);
INSERT 0 1
postgres=# ALTER TABLE foo ALTER COLUMN mytime SET DATA TYPE timestamp with time zone USING to_timestamp(mytime);
ALTER TABLE
postgres=# \d foo
              Table "public.foo"
 Column │           Type           │ Modifiers
────────┼──────────────────────────┼───────────
 mytime │ timestamp with time zone │
 
postgres=# select * from foo;
           mytime
────────────────────────────
 2016-09-07 13:20:01.234+00
(1 row)


And since this is a pretty common use case, it's even given as an example in the PostgreSQL documentation. The version from the documentation uses a calculation based on the epoch, whereas my example here leverages the to_timestamp() function. That's my personal touch.

Obviously, doing this conversion in a single line is not possible with MySQL: it does not implement the USING keyword on ALTER TABLE … ALTER COLUMN. So what's the solution gonna be? Well, it's a 4-step job:

  1. Create a new column of type DATETIME(6)
  2. Copy data from the old column to the new column, converting them to the new format
  3. Delete the old column
  4. Rename the new column to the old column name.

But I know what you're thinking: sure, there are 4 steps, but that's not a problem, we'll just wrap these operations in a transaction.

Except that MySQL does not support transactions on data definition language (DDL). So if any of those steps fails, you'll be unable to roll back steps 1, 3 and 4. Who knew that using MySQL was like living on the edge, right?

Doing this in Python with our friend Alembic

I like Alembic. It's a Python library based on SQLAlchemy that handles schema migration for your favorite RDBMS.

Once you've created a new Alembic migration script using alembic revision, it's time to edit it and write something along those lines:

from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import mysql
from sqlalchemy.sql import func


class Timestamp(sa.types.TypeDecorator):
    """Represents a timestamp precise to the microsecond."""

    impl = sa.DateTime

    def load_dialect_impl(self, dialect):
        if dialect.name == 'mysql':
            return dialect.type_descriptor(mysql.DATETIME(fsp=6))
        return self.impl


def upgrade():
    bind = op.get_bind()
    if bind and bind.engine.name == "mysql":
        existing_type = sa.types.DECIMAL(
            precision=20, scale=6, asdecimal=True)
        existing_col = sa.Column("mytime", existing_type, nullable=False)
        temp_col = sa.Column("mytime_ts", Timestamp(), nullable=False)
        # Step 1: ALTER TABLE mytable ADD COLUMN mytime_ts DATETIME(6)
        op.add_column("mytable", temp_col)
        t = sa.sql.table("mytable", existing_col, temp_col)
        # Step 2: UPDATE mytable SET mytime_ts=from_unixtime(mytime)
        op.execute(t.update().values(
            mytime_ts=func.from_unixtime(existing_col)))
        # Step 3: ALTER TABLE mytable DROP COLUMN mytime
        op.drop_column("mytable", "mytime")
        # Step 4: ALTER TABLE mytable CHANGE mytime_ts mytime DATETIME(6) NOT NULL
        # Note: MySQL needs all the old/new column information just to rename it…
        op.alter_column("mytable",
                        "mytime_ts",
                        nullable=False,
                        type_=Timestamp(),
                        existing_nullable=False,
                        existing_type=existing_type,
                        new_column_name="mytime")


In MySQL, the function that converts a UNIX timestamp to a date and time is from_unixtime(), so the script leverages it to convert the data. As said, we don't bother using any kind of transaction, so if anything goes wrong, there's no rollback, and it won't be possible to re-run the migration without manual intervention.

Timestamp is a custom class that implements sqlalchemy.DateTime using a DATETIME(6) type for MySQL and a regular sqlalchemy.DateTime type for other back-ends. It is used by the rest of the code (e.g. the ORM model), but I've pasted it in this example for better understanding.

Once written, you can easily test your migration using pifpaf to run a temporary database:

$ pifpaf run mysql $SHELL
$ alembic -c alembic/alembic.ini upgrade 1c98ac614015 # upgrade to the initial revision
$ mysql -S $PIFPAF_MYSQL_SOCKET pifpaf
mysql> INSERT INTO mytable (mytime) VALUES (1325419200.213000);
Query OK, 1 row affected (0.00 sec)
 
mysql> SELECT * FROM mytable;
+-------------------+
| mytime            |
+-------------------+
| 1325419200.213000 |
+-------------------+
1 row in set (0.00 sec)
 
$ alembic -c alembic/alembic.ini upgrade head
 
$ mysql -S $PIFPAF_MYSQL_SOCKET pifpaf
mysql> SELECT * FROM mytable;
+----------------------------+
| mytime                     |
+----------------------------+
| 2012-01-01 13:00:00.213000 |
+----------------------------+
1 row in set (0.00 sec)


And voilà, we just (unsafely) migrated our data to a fancy new format. Thank you Alembic for solving a problem we would not have without MySQL. 😊