English speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
I have a Patroni setup with 3 nodes (master, sync, async), slots enabled in the Patroni config, and PG has 2 physical replication slots. The replicas seem to be working well. The problem is that vacuum is not removing dead tuples because the replication slots keep xmin far behind the current one. What can I do, how do I solve this xmin problem?
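For the xmin question above, a first diagnostic step (a sketch, not a full fix) is to check how far each slot is holding back the vacuum horizon:

```sql
-- age() near 0 is healthy; a large, ever-growing age blocks vacuum
SELECT slot_name,
       slot_type,
       active,
       xmin,
       catalog_xmin,
       age(xmin)         AS xmin_age,
       age(catalog_xmin) AS catalog_xmin_age
FROM pg_replication_slots;
```

If the age keeps growing, the usual suspects are an inactive slot left over from a removed replica (drop it with pg_drop_replication_slot) or, for physical slots, a long-running query on a standby combined with hot_standby_feedback = on.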
It's very good that you have other consistency-support methods beyond the database. I always end up doing some level of recovery when I implement business-critical apps.
I just like the relational database to be one of those levels, not a simple throw-away unreliable storage!
The most dreaded data-loss scenario is actually an edge case where you cancel an update/insert/delete statement that is waiting on sync rep before a failover. We made application queries re-run friendly, and after an autofailover we would just stream the last 5 minutes of events all over again (since it was an OLTP system with no queries longer than 5 minutes allowed), making data loss impossible.
https://github.com/patroni/patroni/pull/1414/files
pg_dump reads your database logically and converts the data, so it will be "slow"
https://b-peng.blogspot.com/2022/03/pgpool-debian.html?m=1
Use pg_backup_start to freeze your data folder and then use scp or rsync or cp or whatever to clone your data folder as fast as possible
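To flesh that out a bit (a sketch assuming PostgreSQL 15+, where pg_start_backup was renamed to pg_backup_start; the session issuing the calls must stay open for the whole copy):

```sql
-- Run in one psql session and keep it open while you copy:
SELECT pg_backup_start('clone', fast => true);
-- ... now rsync/scp/cp the data directory from another shell ...
SELECT * FROM pg_backup_stop();  -- returns the backup label to save with the copy
```

Note this doesn't literally freeze the files; it makes the file-level copy safe to recover from, provided you also keep the WAL generated during the copy.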
does anyone have any advice on exporting large (not huge) PostgreSQL databases in Python with subprocess? I am doing this sort of thing:
dump_process = subprocess.Popen(
    dump_command,  # /usr/bin/pg_dump etc...
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    env={"PGPASSWORD": "mypw"},
)
# communicate() drains the pipes; wait() alone can deadlock once
# pg_dump fills the stdout pipe buffer on a large dump
stdout, stderr = dump_process.communicate(timeout=300)
dump_result = dump_process.returncode
Do you have any document for the implementation, or any related blog?
My company also suggested it to me, but I am not sure whether Pgpool supports auto failover or not?
Anyone going to PG Orlando?
Anyone have a Promo Code?
Hello, this kind of support warrants Amazon's attention. We can only support you on open-source PostgreSQL here.
Finally got around to implementing this awesome query (also thanks for making me understand lateral joins) (I know it's been a long time 🙃)
Thanks once again to
- @Intrbiz for mentioning lateral joins
- @unfoxo for NOT EXISTS (which is so good I can't stop using it)
- @dear_tomato for the query
Objection. I'm just fine with the other transaction modes (especially when the programmer understands consequences)!
For the record: Ilya has pretty distinct opinions on basically everything that is not a bare-metal server running no HA and does every transaction in SERIALIZABLE mode.
Others, including myself, have had good experiences with Patroni. Those included putting Patroni on top of an already running replicated PG cluster with no downtime.
If the user is fine with throwing out some transactions, or with being down for several hours while the DBAs wrangle Patroni... then yes, it might be kind of good... But why would they want autofailover then?
Depends on the definition of good: good from the DBA's perspective or the end user's. A 3-node Patroni HA cluster is great from the user's perspective, but not so good from the DBA's ease-of-setup/maintenance perspective.
Copying from the HDD goes as fast as the storage allows, obviously.
are you saying the data directory is the bottleneck?
You can use pg_lsclusters --json to get JSON info on every cluster you're running
Just as I was writing the same exact thing yesterday, haha
Hi everyone, I want to set up a load balancer with auto failover on PostgreSQL version 14. Can you suggest which tool is good? If there is any document, please guide me.
Most likely you need support from AWS. But "I need support" is a very broad request; you need to be more specific.
Hello All
(currently we are migrating from Db2 to PostgreSQL using AWS SCT)
I need support. Could you please help me with this?