English-speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
If they are equal, the planner will match partition-by-partition, and with indexes this will generally be very fast. But imagine the following tables partitioned with the following ranges:
t1   | t2
-----+-----
0    | 0
100  |
200  |
250  |
300  |
400  |
500  | 500
Then the planner can't pair the partitions up, and you're looking at something like (p(t1)+p(t2))! join combinations*
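A minimal sketch of what "same ranges" means in practice; table names, columns, and bounds here are illustrative, not taken from the thread:

```sql
-- Both tables range-partitioned on the same key with identical bounds,
-- so the planner can join them partition by partition.
CREATE TABLE t1 (id bigint NOT NULL, payload text) PARTITION BY RANGE (id);
CREATE TABLE t1_p0 PARTITION OF t1 FOR VALUES FROM (0) TO (250);
CREATE TABLE t1_p1 PARTITION OF t1 FOR VALUES FROM (250) TO (500);

CREATE TABLE t2 (id bigint NOT NULL, t1_id bigint) PARTITION BY RANGE (t1_id);
-- identical bounds: each t1 partition lines up with exactly one t2 partition
CREATE TABLE t2_p0 PARTITION OF t2 FOR VALUES FROM (0) TO (250);
CREATE TABLE t2_p1 PARTITION OF t2 FOR VALUES FROM (250) TO (500);

SET enable_partitionwise_join = on;  -- off by default
EXPLAIN SELECT * FROM t1 JOIN t2 ON t2.t1_id = t1.id;
```

With mismatched bounds, the same EXPLAIN falls back to joining the full appends instead of partition pairs.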
You mean range partitioning should be done in exactly the same way for both tables?
You have a partitioned table, and the partition key is not being used in your filter. That's why you have so many Seq Scans at the lower level; those scans represent 34,342.741 of the total cost of 37,771.858. You may try some approaches to optimize it.
The client application could run your query in a loop, including the partition key in the filter on each iteration, and accumulate the results in a list on the client side.
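One iteration of that loop might look like this; the table and column names are assumptions based on the thread, and the bounds are made up:

```sql
-- One iteration of the client-side loop: constrain the partition key
-- so the planner can prune to a single partition.
SELECT s.id, c.screenshot_id
FROM screenshot s
JOIN screenshot_capture c ON c.screenshot_id = s.id
WHERE s.id >= 0 AND s.id < 100000;  -- bounds of one partition; advance per iteration
```

Each iteration only touches the partitions covering that id range, instead of seq-scanning all of them.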
@unfoxo , plan for the impacted query is here
https://explain.depesz.com/s/mUPT (this is typically from a time when the query succeeds within the expected latency)
Oh damn. My bad. I'm not using any AWS services. :D
Before we go too far: what kind of storage does your on-prem setup use? SSD, NFS, RAID?
But there's no other option except to run with that. Not too many users, so it's not worth paying for even more compute.
Yeah, that is a virtual machine, and from AWS additionally. A phone is probably gonna be more powerful than that.
Well, you moved from 8 parallel workers to 2 parallel workers. Is the self managed server a virtual machine?
live session of Postgres [vibe] hacking happening now -- join! https://www.youtube.com/watch?v=6cZvHjDrmlQ&123
+ we created a telegram group to discuss our (and your!) postgres hacking activities: /channel/postgres_hacking
Thank you so much, I will check and get back to you.
Yeah, that's the reason for having 1900+ parallel Seq Scans on the screenshot partitions in the screenshot. That table has been range-partitioned on the id column only.
"screenshot_capture"."screenshot_id" = "screenshot"."id"
Make sure both screenshot_capture and screenshot are partitioned in exactly the same way (same id ranges); even slightly different ranges will prevent PostgreSQL from optimizing that join.
Have you got an index on every ID column (extra.id, screenshot.id, ..)? And an index on (ID, partition key) in every partition table?
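For reference, on a partitioned table a single CREATE INDEX on the parent cascades to every partition (PostgreSQL 11+); the names below come from the thread, but the exact schema is an assumption:

```sql
-- Creating the index on the partitioned parent automatically
-- creates a matching index on every partition.
CREATE INDEX ON screenshot (id);
CREATE INDEX ON screenshot_capture (screenshot_id);
```

So there is no need to create the per-partition indexes by hand unless you want different definitions per partition.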
No one knows yet.
Performance tuning is about thoroughly checking execution plans, timings, and sometimes perf output; generally, it's about problem isolation, algorithmic optimisation, and numerical stress-test results.
There is no room for random parameter guessing by strangers here.
I can only speak from my own experience. Setting max_parallel_workers equal to or higher than the CPU core count allows PostgreSQL to use all available CPU when needed. A good practice is to leave some resources for the OS, or else the OOM killer may be triggered and shut down your cluster.
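A sketch of what that could look like on a hypothetical 8-core box; the numbers are illustrative, not a recommendation:

```sql
-- Illustrative values for an 8-core machine, leaving headroom for the OS.
ALTER SYSTEM SET max_worker_processes = 8;            -- requires a server restart
ALTER SYSTEM SET max_parallel_workers = 6;            -- takes effect on reload
ALTER SYSTEM SET max_parallel_workers_per_gather = 2; -- takes effect on reload
SELECT pg_reload_conf();
```

Note that max_parallel_workers can never exceed max_worker_processes in effect, and max_worker_processes itself only changes after a restart.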
Anyway, try to drop your caches, then run the query cold and warm (aka slow and fast) using EXPLAIN ANALYZE with BUFFERS, and share both plans on https://explain.dalibo.com so we can look further into what's slowing down between the runs.
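Roughly like this, substituting your actual query (the query below is a placeholder using table names guessed from the thread):

```sql
-- Run this twice: once cold (right after dropping caches / restarting),
-- once warm (a repeat run), and compare the buffer hit/read counts.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM screenshot s
JOIN screenshot_capture c ON c.screenshot_id = s.id;
```

The BUFFERS lines show shared hit vs read; a cold run dominated by reads that turns fast when warm usually points at I/O rather than planning.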
I assume RDS runs bare metal or containerized, while you have the extra performance drop of virtualization
Exactly identical machines, with 2 CPUs and 8 GB of RAM.
Hey, hi. We recently migrated from PostgreSQL 15 on RDS to self-managed PostgreSQL 16, but query performance dropped. Query X runs within the expected latency once or twice, but on subsequent runs we can see it taking more time. It's just a 2-core CPU and 8 GB of memory.
We noticed a few parameter differences related to max_worker_processes, max_parallel_workers, max_parallel_workers_per_gather, shared_buffers, and effective_cache_size. Do these configs contribute to the performance drop, and are they significant?
https://paste.laravel.io/a5cdd6fc-d58f-49e7-b5ab-596d1ea9a9a1
Is it even reasonable to set max_parallel_workers more than CPU cores?
From the docs: "When rounding values, the numeric type rounds ties away from zero, while (on most machines) the real and double precision types round ties to the nearest even number."
So when rounding a number:
* 2.5 becomes 3 with numeric (num)
* and 2 with double precision (dp)
With dp, arithmetic operations can introduce buggy code, because 0.5 also becomes 1 with num and 0 with dp. 😅
That means a lot when calculating anything related to money.
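This is easy to check directly in psql (the dp results are "on most machines", per the docs quoted above):

```sql
SELECT round(2.5::numeric)           AS num_round,  -- 3: ties round away from zero
       round(2.5::double precision)  AS dp_round,   -- 2 on most machines: ties to even
       round(0.5::numeric)           AS num_half,   -- 1
       round(0.5::double precision)  AS dp_half;    -- 0 on most machines
```

The same literal rounds differently depending only on the type it is cast to.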
Actually, the documentation is pretty good: https://www.postgresql.org/docs/
If you prefer videos, there are many available on YouTube and several members of this channel also wrote books on it.
Yes, it is possible, we all started at some point.
Yeah, Stripe does stuff like pass them as integers at the cent/pence level.
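A sketch of that pattern in a schema (the table is hypothetical, just to show the idea):

```sql
-- Storing money as integer minor units (cents/pence) sidesteps
-- floating-point rounding entirely.
CREATE TABLE payment (
  id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  amount_cents bigint NOT NULL  -- e.g. $12.34 stored as 1234
);
```

Sums and comparisons then stay exact in plain integer arithmetic; you only format to a decimal at display time.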