English-speaking PostgreSQL public chat. This group is for discussions of PostgreSQL-related topics and strives to provide best-effort support as well.
Yes, I copied it. Archiving is working well on the secondary, but it keeps WAL files in pg_wal.
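If archiving really is working, the usual remaining suspect is a stale replication slot pinning WAL on the standby. A quick way to check from psql, as a sketch using only standard system views (nothing installation-specific):

SELECT slot_name, active, restart_lsn, wal_status FROM pg_replication_slots;
SELECT archived_count, failed_count, last_failed_wal, last_failed_time FROM pg_stat_archiver;

An inactive slot with an old restart_lsn, or a failed_count that keeps growing, would explain WAL piling up in pg_wal.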
Hello,
I'm encountering an issue with a PostgreSQL 15 setup. The pg_wal directory on the secondary server is filling up, retaining WAL files from the past four months. The primary server is fine: its pg_wal is only 1.5GB, whereas the secondary's has grown to 28GB.
I confirmed that the configuration parameters are identical on both servers, so I'm unsure why this is happening only on the secondary.
Could you suggest anything?
While the statement may be a bit exaggerated, it's definitely true for those who can't find the schema option in man pg_dump.
Oh, come on, I'd love @tzirechnoy to defend that hill 😏
Well, nothing that only exports one schema is a "backup" in the sense that it can be used to recover an entire cluster, so I assume that's not what the OP was looking for. "Backup" has both specific and general meanings. 😊
pg_dump can limit what it takes a backup of by schema. The option you want to look for is -n or --schema: https://www.postgresql.org/docs/current/app-pgdump.html
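For example, to dump just one schema (database and schema names here are made up):

pg_dump -n sales -F c -f sales_schema.dump mydb

-F c produces a custom-format archive for pg_restore; leave it off if you want plain SQL output.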
Well, you can't create an FTS index on an OID column, since that is simply a number referencing the large object.
I suspect you'll need to split your very large row within your application.
If the whole table is only 8GiB and a single value is 1GiB while the others are not, it would probably be sensible to ask why that one value is so much bigger than the rest.
But it's hard to suggest anything else without knowing what type of data you have in that single value for it to be so large. Bioinformatics?
Let me explain what we are doing
We are migrating data from Oracle to PostgreSQL using the ora2pg tool. One of the tables on the Oracle side is approximately 8GB and contains BLOB objects. When attempting to import the .SQL file generated by ora2pg into PostgreSQL, I encountered the following error:
ERROR: invalid memory allocation request size 1073741834
I discovered that a single row in the table is larger than 1GB. To address this, we decided to use the lo_import method in ora2pg, which splits the BLOB data into manageable chunks. This approach requires using the OID data type instead of bytea.
We are now able to import the data successfully, but we're facing an issue creating an FTS index on the OID column (whereas in Oracle it's an Oracle Text index on the BLOB column).
But those are not TEXT documents.
And it's generally unsuitable to put FTS on images...
Maybe they're scanned documents including some OCR metadata in text form. Apparently the OP ran into the issue, so he *has* documents > 1 GB 🤷
Absolutely, but let's just assume that it wasn't Jaisankar who initially designed this system... ;-)
So, to your actual problem:
FTS is Full Text Search, i.e. it handles TEXT. TEXT is limited to 1GB in PostgreSQL, so if you really want to do FTS on documents > 1GB, you'll have to deal with this yourself.
The most reasonable approach IMHO would be to split your documents into chunks of at most 1GB and index those. If what you're searching for doesn't sit exactly on a chunk boundary and you somehow connect your chunks logically, that will probably get you good results; see the sketch below.
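A minimal sketch of the chunking idea, with invented names; note that in practice the chunks have to be far smaller than 1GB anyway, since a single tsvector is limited to about 1MB:

CREATE TABLE doc_chunks (
    doc_id   bigint  NOT NULL,
    chunk_no integer NOT NULL,
    body     text    NOT NULL,  -- one chunk of extracted text
    body_tsv tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED,
    PRIMARY KEY (doc_id, chunk_no)
);
CREATE INDEX ON doc_chunks USING gin (body_tsv);

-- search across chunks, then collapse hits back to whole documents:
SELECT DISTINCT doc_id FROM doc_chunks WHERE body_tsv @@ to_tsquery('english', 'some & phrase');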
Another approach could be to create the tsvector column you want to index outside of PostgreSQL, i.e. at the application level. The nature of those vectors is that they're far smaller than the actual documents, so the space limitations shouldn't apply then. Something like the sketch below.
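As a sketch, with invented names (the application extracts the text from the large object, computes the vector, and stores only the vector plus the OID reference):

CREATE TABLE documents (
    id       bigint PRIMARY KEY,
    blob_oid oid,       -- reference to the large object holding the raw data
    body_tsv tsvector   -- precomputed by the application
);
CREATE INDEX ON documents USING gin (body_tsv);

-- the application then inserts a ready-made vector; 16403 is a placeholder OID:
INSERT INTO documents (id, blob_oid, body_tsv)
VALUES (1, 16403, 'example:1 search:2 term:3'::tsvector);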
What's the internal format of your blobs?
How is it even possible that a single piece of text grew to more than a gigabyte in size?
Do you mean that you have an OID column referencing a large object and you want to index the pointed-to large objects? I don't know of a way to do that.
Hello,
How can I create an FTS index on an OID column in Postgres?
Did you copy over the postgresql.conf file and perhaps forget to remove the archive_command, so that it's now trying to archive and failing continuously?
Ah, c'mon! That's a bit weak... if people started reading man pages, this channel would lose about half its traffic 🤪
Good luck importing your logical backup in the middle of an emergency
I guess he meant it's not a backup you can rely on, especially if you have a high workload and a large database.
There is no such command.
The standard backup tools take backups of the entire database cluster.
Hi all, can anyone send me the command to take a backup of a particular schema?
Well, I guess Jaisankar will have to provide some more details. Or he/she is happy already and will stick to this obvious antipattern... 🤷
In fact, I doubt there could really be a single TEXT of 1GB in size.
The full collected works of Lenin in 55 volumes come to roughly a 70MB .doc.
Even assuming those 70MB are LZ-compressed at a very good 80% ratio, that would be less than 350MB of uncompressed text (70MB / 0.2 = 350MB).
And considering that there are A LOT of images and formatting (tables, etc.), the actual extracted text would be more like 100MB or so.
It's doubtful that this should be put into a database as a whole rather than per volume in any case.
It seems there simply shouldn't be any coherent TEXT of that size; no person could read and understand it anyway.
Documents bigger than 1GB sound like the issue lies somewhere else, though.
And here I stand, having dismissed the idea that we're discussing LOBs simply because "who'd do THAT???" 😏
In the ora2pg migration, the equivalent data type for BLOB in PostgreSQL is BYTEA. However, when handling BLOBs larger than 1GB, PostgreSQL runs into out-of-memory issues. To address this, we decided to use the lo_import method in ora2pg, which changes the data type to OID. The BLOB data is then uploaded and stored separately, with the table only referencing the object ID.
However, Full Text Search (FTS) indexes cannot be directly created on OID data types. How can this issue be resolved?
Why in the world would you want to do that?!? Why do you have OID columns to start with?
Starting a new PostgreSQL + AWS DBA v17 batch-42 (for those who are looking for a career transition).
Ping me for more details (content and timings)