Maybe pg_basebackup should have it too. But wal-g's design of this feature is too crazy to be adopted by the pg community as-is
Also, the encoding must match the database's, and the first row should be included as headers in pgAdmin as well
Maybe an index on B.date, or a composite index on (B.a_id, B.date), might make the lateral approach fast.
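A minimal sketch of that composite index, assuming the placeholder names from the thread (table B, columns a_id and date — the asker's real schema isn't shown):

```sql
-- Composite index so the planner can find the "first"/latest B row
-- per a_id via a single index probe, instead of sorting all matches.
-- Names are the thread's placeholders, not a real schema.
CREATE INDEX b_a_id_date_idx ON b (a_id, date DESC);
```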
OP obviously was at the *very beginning* of his PG journey. As much as I'd *love* to give every newbie coming here a 4-day beginners' course, *especially* on backup & recovery, I just don't have the time to do so.
I do link my 50-min talk here on a regular basis (it seems), which to this day I consider a nice primer.
Let's just settle on that we won't overwhelm newbies with bold claims like "pg_dump is not a backup tool" in the future, eh?
I have used pg_dump -n many times, and not once has it made my database unrecoverable.
"how to do [very trivial stuff they don't understand]"
"do [direct simple answer]"
"aight the output looks fine sounds good 👍 (the database will be unrecoverable)"
Hello gurus, any expert willing to help set up a Postgres Patroni cluster? Willing to pay for the expertise. Will need to show failover, failback, recovery, HA, etc. Please let me know the total fees. Admins, please remove this message if this is not the place or if this is not allowed.
Excel, of all tools, is not a reference for checking the sanity of a CSV...
I'm quite certain you'll have to adjust your quoting and/or delimiter settings (COPY expects tabs, Excel expects semicolons (in COMMA-separated values, oh the irony))
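For instance, COPY's CSV mode lets you spell those settings out explicitly; table and file names here are hypothetical:

```sql
-- Default COPY (text format) expects tab-separated input.
-- For an Excel-style semicolon-separated file, switch to CSV
-- format and state the delimiter, quote char, and header row:
COPY my_table FROM '/path/to/data.csv'
WITH (FORMAT csv, DELIMITER ';', QUOTE '"', HEADER true);
```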
It means one of the rows has more columns than expected. This usually happens, for example, when you have unquoted text.
For example, this CSV file:
id,sentence
1,Hello
2,I'm fine thanks
3,I like bananas, you?
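The fix is to quote any field that contains the delimiter, so the comma in "I like bananas, you?" stays inside one column. A quick Python sketch (the rows are the example above; this is an illustration, not the asker's actual file) showing how a standard CSV writer quotes that third row:

```python
import csv
import io

rows = [
    ["id", "sentence"],
    ["1", "Hello"],
    ["2", "I'm fine thanks"],
    ["3", "I like bananas, you?"],  # contains the delimiter -> must be quoted
]

# QUOTE_MINIMAL quotes only fields that need it (delimiter, quote char, newline)
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
print(buf.getvalue())
```

With the third sentence wrapped in double quotes, COPY ... WITH (FORMAT csv) parses the row as exactly two columns.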
hello, please can anyone help?
I tried to import a CSV file into my PostgreSQL 14 server and it gives me this error:
ERROR: extra data after last expected column
the problem is the join is based on the M-M relation; the index for the FK is there, but the date to sort by is in the B table
Would have thought lateral would be the way to go. Is there an index on the column determining the 'first' row?
Uh well, these days the optimizer will produce the same plan for a CTE and a classic JOIN anyway.
*Do* you have an index on that a_id column?
the CTE is a recent improvement; the 'in use' version doesn't have it, it's just a left join. I tried the lateral join and the query time increased
But you are VP & Chief Database Scientist @ EnterpriseDB, PostgreSQL Major Contributor and Committer
TBTH, a classic JOIN (without the CTE) will probably do better, i.e., assuming you have an index on a_id.
If you need those row numbers, you'd have to improve your OVER() part, of course.
You may also want to look into JOIN LATERAL; depending on how many records you pull out of that JOIN, that might be what you're looking for.
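A hedged sketch of what that JOIN LATERAL could look like, again using the thread's placeholder names (tables a and b, columns a_id and date) rather than the asker's real schema:

```sql
-- For each row of a, fetch only its latest b row. With a composite
-- index on (a_id, date DESC) this is one index probe per a row,
-- instead of sorting every matching b row.
SELECT a.*, latest.*
FROM a
CROSS JOIN LATERAL (
    SELECT b.*
    FROM b
    WHERE b.a_id = a.id
    ORDER BY b.date DESC
    LIMIT 1
) AS latest;
```

Whether this beats a plain join plus OVER() depends on how many rows come out of the join, which is exactly the caveat above.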
I don't think that my answer is unhelpful.
It's just plainly my recommendation for most pg setups: use physical backup tools for backups.
No separate databases/schemas/tables. Just the cluster as a whole. Don't mess with that (barring exceptional circumstances). If you designed your workflow around schema backups, forget it: there is no such thing as a schema BACKUP. Design something else. Your approach was wrong; just do it another way.