English-speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
Am I the only one who can never get PostgreSQL to use a BRIN index?
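For context, the planner generally only picks a BRIN index when the column's values track physical row order. A minimal sketch to check that, assuming a hypothetical `events` table with a `created_at` column:

```sql
-- BRIN only pays off when values follow physical row order
CREATE INDEX ON events USING brin (created_at);

-- check physical correlation; values near 1 or -1 favour BRIN
SELECT correlation FROM pg_stats
WHERE tablename = 'events' AND attname = 'created_at';

-- compare plans while testing (don't leave this off in production)
SET enable_seqscan = off;
EXPLAIN SELECT * FROM events
WHERE created_at >= now() - interval '1 day';
```

If the correlation is near zero (e.g. after heavy updates), the planner will usually prefer a seq scan or btree over BRIN.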
can someone help me with pgBackRest?
I'm taking a full backup and then inserting some more data,
then I remove the whole data directory and restore. The issue is that the restore also applies all the WALs created after the backup.
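If the goal is to stop at the end of the backup rather than replay all later WAL, pgBackRest supports the `immediate` recovery target type, which ends recovery as soon as the backup reaches consistency. A sketch (the stanza name is a placeholder):

```shell
# restore the latest backup and stop recovery at the point the
# backup became consistent, instead of replaying all later WAL
pgbackrest --stanza=mydb --type=immediate restore
```

By default a restore recovers to the end of the WAL archive, which is exactly the behaviour described above.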
I need Python developer support. Will pay for support.
Please DM me
Data doesn't come automatically though, and your statement is invalid at smaller data lengths.
So it's not automatic but triggered by data size.
Trick question. No types in PostgreSQL automatically use TOAST.
https://www.postgresql.org/docs/current/upgrading.html
In MS SQL Server we have a tool called DMA for validating breaking changes. Is there a similar tool in Postgres to understand which things may break?
Hi QWxp. I've been using apgDiff for 10 years now.
* dump production & dev DDL
* diff them with apgDiff
* roll the diff into the test env (a mirror containing full production data)
* if it passes, deploy.
* commit & tag the diff with a release.
* if something happens, roll back from that tag.
The process is automated with a simple bash script; no complex CI/CD setup.
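The flow above can be sketched roughly like this; the connection URLs, file names, and apgdiff jar path are all placeholders, not the author's actual script:

```shell
# dump schema-only DDL from both environments
pg_dump --schema-only -f prod.sql "$PROD_URL"
pg_dump --schema-only -f dev.sql  "$DEV_URL"

# generate a migration script from the difference
java -jar apgdiff.jar prod.sql dev.sql > migration.sql

# apply to the prod-mirror test env first
psql "$TEST_URL" -f migration.sql

# if tests pass: commit, tag, then deploy to production
git add migration.sql
git commit -m "schema release"
git tag "release-$(date +%F)"
psql "$PROD_URL" -f migration.sql
```

Rolling back then means checking out the tagged release and generating the reverse diff.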
Hi, I want to try a bloom index.
If anyone has already tried one, can you please briefly describe how it helped in practice to speed up access?
yeah, sometimes I do miss solving complex problems by myself. Now I prefer to trade that for higher leverage (sales, business network, etc.).
Btw, over the past three years, I've been outsourcing almost all my database problems to major LLMs: complex stuff including CTEs, window functions, recursion, even regex.
I just attach the DDL and a prompt, and done. I barely write complex queries myself anymore. Kinda weird, but I get to watch more Lex Fridman podcasts now. LOL.
Do you guys do the same?
Arman, try writing it down in your own language, then translate it to English. You may be dictating while your computer transcribes it, and that is not working.
Maybe it's just a newly developed language variant that we, dinosaurs, don't understand?
Looking for a PostgreSQL trainer, anyone out there please ping me
I want to connect my on-prem Postgres to ADF, but I can't use a self-hosted integration runtime or any Azure services. I already have a VPN (office VPN). This is also my main DB, so there will be frequent reads and writes, so I want a perfect solution. Please
😄 Any varlena type (like text, jsonb, or bytea) larger than TOAST_TUPLE_THRESHOLD (~2 KB) will automatically be TOASTed. Try it by inserting a text value bigger than 2 KB,
then check the toast table:
SELECT
    c.relname AS main_table,
    t.relname AS toast_table
FROM pg_class c
LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE c.relname = 'your_sandbox_table';
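A quick way to try it; note that highly compressible data (e.g. a long run of one character) may just be compressed inline instead of moved out-of-line, so this sketch uses md5 output to defeat compression:

```sql
CREATE TABLE your_sandbox_table (payload text);

-- ~3.2 KB of poorly compressible data, well over the ~2 KB threshold
INSERT INTO your_sandbox_table
SELECT string_agg(md5(random()::text), '')
FROM generate_series(1, 100);

-- the toast table should now hold the out-of-line value
SELECT pg_size_pretty(pg_relation_size(reltoastrelid))
FROM pg_class
WHERE relname = 'your_sandbox_table';
```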
Read all release notes for all major releases in between, of course.
Also, make a lot of tests.
Read every single major changelog between 12 and 17, and see if there are any particular notes about deprecated functions you might still be using, or special upgrade steps
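On top of reading the release notes, pg_upgrade has a dry-run mode that checks the two clusters for compatibility without changing anything. A sketch; the binary and data directory paths are examples and vary by distribution:

```shell
# compatibility check only; nothing is modified
/usr/lib/postgresql/17/bin/pg_upgrade \
  --old-bindir=/usr/lib/postgresql/12/bin \
  --new-bindir=/usr/lib/postgresql/17/bin \
  --old-datadir=/var/lib/postgresql/12/main \
  --new-datadir=/var/lib/postgresql/17/main \
  --check
```

It reports issues such as incompatible extensions or data types before you commit to the real upgrade. And take a fresh backup first regardless.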
We are planning to upgrade Postgres 12 to 17. Could you please let us know if there are any pre-migration steps we need to follow?
Bloom indexes have a very distinct use case. Many columns that are sparsely filled and get queried in unpredictable combinations. If your data is like that, you'll probably benefit. If it's not, bloom is probably not worth trying.
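That use case can be sketched like this; the table and column names are made up for illustration:

```sql
-- bloom ships as a contrib extension
CREATE EXTENSION IF NOT EXISTS bloom;

-- one bloom index covers many sparsely filled columns at once,
-- where separate btrees per column combination would be impractical
CREATE INDEX ON items USING bloom (c1, c2, c3, c4, c5, c6);

-- it can serve equality predicates on any combination of those columns
EXPLAIN SELECT * FROM items WHERE c2 = 10 AND c5 = 42;
```

Note it only supports equality comparisons and is lossy, so the executor rechecks candidate rows against the heap.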
Hi, try looking at the Citus extension; it can actually help you, but you need to do the research yourself
Nope, SQL is not complex, and I would do anything to avoid watching Lex Fridman podcasts.
Patiently waiting for Durov and Musk to drop Grok into Telegram, so we can ditch this crap and have real productive talks. 😅
Your English is insanely nonsensical; please use another method of translating from your native language, for understandability.
Sounds very much like trying to optimise something far too early.
For transaction reporting, I would have thought building the report from the *actual* transaction data is the most important. For auditability you want the least complexity here and definitely don't want duplicated data.
If you're so concerned by the 'size' of your main transaction record, then you're likely better off splitting that.
But it's very hard for people here to give any actual advice if you don't at least share table structures (via a pastebin site).