📚 eBook: Tuning Autovacuum for best Postgres performance Did you know we launched a new eBook on autovacuum last week? Learn why and when #Postgres has to vacuum, why it matters, and when it's best to tune the default settings that control autovacuum scheduling, vacuum overhead, and more. https://lnkd.in/gGMXm69c
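For readers who want to experiment right away, here is a minimal sketch of the kind of per-table tuning the eBook covers. The table name "orders" and the specific values are illustrative assumptions, not recommendations from the book:

```sql
-- Hypothetical example: make autovacuum trigger earlier on a busy table.
-- By default, autovacuum runs once roughly 20% of a table's rows are dead;
-- lowering the scale factor makes it run more often on large tables.
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.02,  -- vacuum after ~2% dead rows
  autovacuum_vacuum_threshold    = 1000   -- plus a fixed base threshold
);

-- Check when autovacuum last ran and how many dead rows remain:
SELECT relname, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'orders';
```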
pganalyze
Software Development
San Francisco, California · 1,167 followers
Deep insights into Postgres. Other monitoring tools show you what happened. pganalyze tells you why.
About us
Postgres performance at any scale. Deliver consistent database performance and availability through intelligent tuning advisors and continuous database profiling.
- Website: https://pganalyze.com/
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: San Francisco, California
- Type: Privately Held
- Founded: 2012
- Specialties: PostgreSQL and Database Monitoring
Locations
- Primary: San Francisco, California 94116, US
Updates
-
Full episode here: https://lnkd.in/gtvdCAeJ In 5mins of Postgres E83, we explain how the BUFFERS option for EXPLAIN ANALYZE works, and why the shared hits counter is hard to interpret in Nested Loops.
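As a quick illustration of what the episode describes, here is a hedged sketch; the tables and query are assumptions, the point is the per-node Buffers output:

```sql
-- Hypothetical schema; any join that produces a Nested Loop will do.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
  FROM orders o
  JOIN order_items i ON i.order_id = o.id
 WHERE o.created_at > now() - interval '1 day';

-- Each plan node then reports a line like:
--   Buffers: shared hit=1204 read=32
-- Caveat discussed in the episode: for the inner side of a Nested Loop,
-- counters are summed across all loop iterations, so the same cached
-- page can be counted as a "shared hit" many times.
```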
-
Full episode here: https://lnkd.in/dszJ5KR3 In 5mins of Postgres E78, we look at the overhead added by query planning time, and at how ChartMogul improved performance by switching from list partitioning to hash partitioning.
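For context, a minimal sketch of the two partitioning schemes (table and key names are assumptions, not ChartMogul's schema):

```sql
-- List partitioning: one partition per explicit key value. With many
-- values, this means many partitions, and planning time grows with them.
CREATE TABLE metrics_list (account_id int, value numeric)
  PARTITION BY LIST (account_id);
CREATE TABLE metrics_list_a1 PARTITION OF metrics_list FOR VALUES IN (1);
-- ...one CREATE TABLE per account id.

-- Hash partitioning: a fixed, small number of partitions; rows are
-- routed by a hash of the key, keeping planning overhead bounded.
CREATE TABLE metrics_hash (account_id int, value numeric)
  PARTITION BY HASH (account_id);
CREATE TABLE metrics_hash_p0 PARTITION OF metrics_hash
  FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE metrics_hash_p1 PARTITION OF metrics_hash
  FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE metrics_hash_p2 PARTITION OF metrics_hash
  FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE metrics_hash_p3 PARTITION OF metrics_hash
  FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```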
-
Full episode here: https://lnkd.in/d_h46BYb In 5mins of Postgres E75, we look at an edge case in Postgres where using IN is faster than using ANY for single-element lists.
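A minimal sketch of the difference (the "users" table is an assumption):

```sql
-- For a single-element list, Postgres simplifies IN (...) to a plain
-- equality during parsing, which the planner handles well:
EXPLAIN SELECT * FROM users WHERE id IN (42);
--   ...condition is planned as: (id = 42)

-- The equivalent = ANY(...) keeps its array form, which can be planned
-- differently in this single-element edge case:
EXPLAIN SELECT * FROM users WHERE id = ANY ('{42}'::int[]);
--   ...condition is planned as: (id = ANY ('{42}'::integer[]))
```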
-
Full episode here: https://lnkd.in/dHBdYBTE In 5mins of Postgres E71, we look at how Figma partitioned tables across servers and how the team at Notion shards their data horizontally.
-
Full episode here: https://lnkd.in/dUTmhE5k In 5mins of Postgres E76, we optimize subqueries by understanding the Postgres planner better. We show correlated vs. uncorrelated subqueries, as well as scalar subqueries vs. tabular subqueries.
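To make the terminology concrete, a hedged sketch (the schema is an assumption):

```sql
-- Uncorrelated scalar subquery: independent of the outer row,
-- so it only needs to be evaluated once.
SELECT * FROM orders
 WHERE total > (SELECT avg(total) FROM orders);

-- Correlated scalar subquery: references the outer row (o.customer_id),
-- so it is conceptually re-evaluated per outer row.
SELECT * FROM orders o
 WHERE total > (SELECT avg(total) FROM orders i
                 WHERE i.customer_id = o.customer_id);

-- Tabular subquery: returns a row set, here used in the FROM clause.
SELECT c.name, t.order_count
  FROM customers c
  JOIN (SELECT customer_id, count(*) AS order_count
          FROM orders
         GROUP BY customer_id) t
    ON t.customer_id = c.customer_id;
```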
-
We're getting ready for our webinar, starting in about 20mins. Come join us here: Optimizing slow queries with EXPLAIN to fix bad query plans https://lnkd.in/dbXpBXEM
-
pganalyze reposted this
In today's 5mins of #Postgres we discuss tuning the "work_mem" setting in Postgres for a given workload - and why it's not as simple to set as you might think (or as it should be!).

The most important aspect that I had to re-discover when researching this episode: the work_mem setting in Postgres is effective on a per-query-execution-node (aka plan node) basis, not for the whole query. That means for a query that, e.g., does multiple sort or hashing operations, you will see a multiple of work_mem being used.

In the episode we showcase that with an example, using pg_log_backend_memory_contexts(..) to dump the memory usage of a given connection. We feature a post by Shaun Thomas on the Tembo blog last week, as well as a few mailing list discussions, and a good post from Christophe Pettus last year. https://lnkd.in/gkqApSX6
The surprising logic of the Postgres work_mem setting, and how to tune it
pganalyze.com
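A hedged sketch of the per-node behavior described above (the tables are assumptions; pg_log_backend_memory_contexts() requires superuser by default):

```sql
-- work_mem limits each sort/hash node, not the query as a whole.
SET work_mem = '64MB';

-- A query with two independent sorts: each Sort node may use up to
-- 64MB, so the query overall can consume roughly twice work_mem.
EXPLAIN ANALYZE
SELECT *
  FROM (SELECT * FROM events ORDER BY created_at) a
 CROSS JOIN (SELECT * FROM users ORDER BY signup_date) b
 LIMIT 10;

-- As in the episode, dump a backend's memory contexts to the server
-- log to observe actual usage:
SELECT pg_log_backend_memory_contexts(pg_backend_pid());
```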
-
How to optimize slow queries with EXPLAIN to fix bad query plans: Make sure to join us for next week's webinar! In this webinar on June 18th, 9:30am PT, you will get an introduction to using EXPLAIN effectively to optimize slow queries in Postgres, and to identifying bad query plans caused by issues such as Postgres planner row mis-estimates. The focus of this webinar is to teach the essential skills of optimizing slow queries with EXPLAIN ANALYZE, cover anti-patterns you will encounter when debugging queries, and leave you with practical advice on how to address common problems. Register here: https://lnkd.in/dpTdAkXh
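As a small preview of the kind of problem the webinar addresses, a hedged sketch (schema assumed):

```sql
-- In EXPLAIN ANALYZE output, compare the planner's estimate
-- ("rows=...") with reality ("actual ... rows=..."):
EXPLAIN ANALYZE
SELECT * FROM orders WHERE status = 'pending';
--   Seq Scan on orders (cost=... rows=50 ...) (actual ... rows=48210 ...)

-- A large gap often points to stale or missing statistics; refreshing
-- them can already fix the plan:
ANALYZE orders;
```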
-
In today's 5mins of #Postgres, we talk about an improvement to the Postgres query planner in the upcoming Postgres 17: for materialized CTEs, column statistics as well as sort order are now made available to the upper plan level.

Materialized CTEs are actually less common today - if you recall, Postgres 12 changed the default so a simple CTE like "WITH cte_table AS ( ... ) SELECT ... FROM cte_table" will typically pull up the query inside "cte_table" to the top level, as if it were written as a sub-SELECT. But if you either use the MATERIALIZED keyword explicitly (because you want the planner to treat it as an optimization fence), or if Postgres can't find a way to pull up the CTE, you will see an explicit "CTE Scan" in your query plan.

If you do, you might be surprised by the limited information that Postgres previously provided to plan nodes consuming that CTE Scan: only row counts and width, but no other information like column statistics or previous sort orders was passed along. This is fixed in Postgres 17 thanks to the work of Jian Guo, Tom Lane and Richard Guo.

Watch the full episode, or read the transcript for all the details, including example query plans that will change: https://lnkd.in/dsAJ7mMq
Waiting for Postgres 17: Better Query Plans for Materialized CTE Scans
pganalyze.com
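For reference, a minimal sketch of a plan with an explicit CTE Scan (the schema is an assumption):

```sql
-- Forcing materialization turns the CTE into an optimization fence
-- and produces an explicit CTE Scan node in the plan:
EXPLAIN
WITH cte_table AS MATERIALIZED (
  SELECT * FROM orders WHERE status = 'pending'
)
SELECT * FROM cte_table ORDER BY created_at;
--   Sort
--     ->  CTE Scan on cte_table ...

-- Before Postgres 17, nodes above the CTE Scan only saw its estimated
-- row count and width; column statistics and sort order from inside
-- the CTE were not propagated.
```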