
Efficient Time Series with PostgreSQL - Steve Simpson

SQL databases aren't trendy anymore, but their general-purpose nature can still be extremely useful for reducing complexity in your system architecture. Bespoke databases seem to crop up daily in the name of performance or functionality. This talk will examine the field of "time series" databases and look in depth at how PostgreSQL can be used for the purpose, despite often being overlooked.

Databases of this nature have seen an explosive resurgence in recent years. They are often employed in monitoring systems to collect system and application metrics, but also in the growing world of "IoT". The form of data stored by these systems is nothing to be afraid of: relational databases have been storing it for a long, long time. What does seem to be happening is a convergence on a data model and access pattern, leading to the emergence of more "out-of-the-box" solutions. An assertion of this talk is that for plenty of use cases, PostgreSQL is more than capable of storing all of this data, at considerable scale. Of course we are told to use the right tool for the job, but having to learn and operate a single tool instead of many can be a huge operational advantage.

We'll get quite technical in this talk, take a look at the data models and access patterns required, and see how these can be fitted into the general-purpose environment of PostgreSQL. It is also constructive to look at what can be problematic, not just the positives, and to see why many turn to other bespoke solutions.
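The talk itself walks through the details; purely as a rough illustration (not taken from the talk), the following is a minimal sketch of the kind of time-series data model and access pattern being described. The table, column names, and metric name are hypothetical; the features used (jsonb, BRIN indexes, date_trunc) are standard PostgreSQL.

```sql
-- Hypothetical "metrics" table: one row per (metric, timestamp) observation.
CREATE TABLE metrics (
    name   text             NOT NULL,              -- metric identifier, e.g. 'cpu.usage'
    labels jsonb             NOT NULL DEFAULT '{}', -- dimensions such as host or region
    ts     timestamptz       NOT NULL,              -- observation time
    value  double precision  NOT NULL
);

-- A BRIN index keeps the append-only time dimension cheap to index at scale.
CREATE INDEX metrics_ts_brin ON metrics USING brin (ts);

-- Typical access pattern: aggregate one metric into fixed-width time buckets.
SELECT date_trunc('minute', ts) AS bucket,
       avg(value)               AS avg_value
FROM   metrics
WHERE  name = 'cpu.usage'
  AND  ts >= now() - interval '1 hour'
GROUP  BY bucket
ORDER  BY bucket;
```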

June 12, 2017