Success Story on Migrating a Multi-Terabyte Analytical Database to Postgres (Friday 13:40, Baltic I)
Website: www.parcit.de
I have been a Postgres DBA at parcIT since June 2016 and have about 10 years of experience with Postgres.
Yes to both. I gave a talk about another migration project at the conference in Dublin and have been a regular attendee for some years.
I'll talk about the reasons why Postgres was chosen for an (estimated) 20 TB database and the pitfalls we hit during implementation. When I joined the team, they had several database crashes and system freezes per week.
I had never seen an unstable Postgres database before and was very surprised. People doubted whether Postgres could handle a few terabytes and were already considering a migration to a proprietary system. In the end, there were only a few causes, all more or less easy to avoid, and since they were fixed, our downtimes have been limited to upgrades every other month.
I'm going to introduce the hardware setup with 25 TB of NVMe storage and show how we do backups even during high write-ahead logging (WAL) traffic. If we get a 70 TB SAN online in time, I'll cover that as well. I'm going to show the most effective performance tuning settings we applied and, last but not least, what we should not have neglected: we are now spending many extra hours adjusting the roles and permissions management after the fact.
I'd like to show how we successfully handle terabytes of data and dispel doubts that Postgres might not be capable of doing so.
Beginners and intermediates interested in databases with high demands on read and write load. Decision-makers who doubt whether Postgres can compete with proprietary database systems.
No prior knowledge is required.