English-speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
I mean, its storage could pack the data by uri grouping rather effectively.
Also, some of the partitioning improvements in the TimescaleDB code may be good for you.
Ideally, a hash index seems good (seems!) for a big performance improvement on huge data.
Just curious: is the index on uri a btree, a hash, or another type? If not hash, could a hash index improve performance without resorting to partitioning?
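For reference, this is what a hash index looks like in PostgreSQL (hash indexes are crash-safe and WAL-logged since Postgres 10; the table and column names here are assumed from the thread, not confirmed):

```sql
-- Hash indexes only support equality (=) lookups, which matches
-- a workload of exact-match queries on uri.
CREATE INDEX CONCURRENTLY snapshots_uri_hash_idx
    ON snapshots USING hash (uri);

-- Verify which index the planner actually chooses:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM snapshots WHERE uri = 'https://example.com/page';
```

A hash index on a long text column can also be considerably smaller than a btree, since it stores only a 4-byte hash per entry rather than the full key.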
Just curious 😅
yes, that's a client-side error, but the server has plenty of memory
You're aware that that's psql running into issues, right? shared_buffers don't matter...
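Since the out-of-memory happens client-side (psql buffers the entire result set before printing), the usual workaround is to have psql fetch through a cursor in batches. A sketch, with the view name assumed:

```sql
-- In psql: fetch rows in batches of 1000 via a cursor instead of
-- materializing all 5 GB of results in client memory.
\set FETCH_COUNT 1000
SELECT * FROM my_matview;

-- Optionally redirect output to a file instead of the terminal/pager:
\o /tmp/result.txt
SELECT * FROM my_matview;
\o
```

`FETCH_COUNT` is a built-in psql variable; it changes only how the client retrieves rows, not the query plan.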
hi everyone, I have a 5 GB materialized view; when querying it via psql I get "out of memory for query result". shared_buffers is 16 GB.
Each function as a separate SQL file in Git, plus a bash script to build the schema update file.
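One way such a build can be sketched: the top-level file is itself a psql script that includes each function file, so the "build" step is just concatenation or a single psql run (file names here are hypothetical):

```sql
-- schema_update.sql: run with  psql -1 -f schema_update.sql
-- (-1 wraps the whole update in a single transaction)
\i functions/get_user.sql
\i functions/update_totals.sql
```

If each function file uses `CREATE OR REPLACE FUNCTION`, the update script stays idempotent and can be rerun safely.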
There's obviously some garbage in your source data, or you misconfigured either the export or the import. My guess is that at some place a NULL gets converted to a binary 0, and that's in turn sent as text.
Yes exactly. Do you know, if I used schema-based sharding, it would divide the database into different schemas spread across the worker nodes. Now, if I want to implement an index, will it still be effective when the physical data is scattered? A portion of the index data might be on worker node 1 and another portion on node 2, so it won't make sense, right?
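For context, in Citus (the common sharding extension for PostgreSQL) an index on a distributed table is created per-shard, so each worker only indexes its own rows and equality lookups on the distribution column are routed to a single worker. A sketch, with hypothetical table and column names:

```sql
-- Requires the Citus extension, run on the coordinator node.
CREATE EXTENSION IF NOT EXISTS citus;

CREATE TABLE events (user_id bigint, payload jsonb);
SELECT create_distributed_table('events', 'user_id');

-- This creates the index on every shard; a query filtering on
-- user_id is pruned to the one worker holding that shard.
CREATE INDEX ON events (user_id);
```

So a "global" index spanning nodes isn't needed for distribution-key lookups; queries that don't filter on the distribution column, however, must fan out to all workers.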
Is it possible to apply database sharding to PostgreSQL? Having two clusters working as primaries and dividing the workload between them?
Hi everyone. I am doing a migration using the Ora2pg tool and I'm facing an issue; the error is as follows:
-------------------------------------------------------------
DBD::pg:: db pg_putcopyend failed :ERROR: invalid byte sequence for encoding "UTF8": 0x00vg : 11 recs/sec
Context: copy content, line 1 at /usr/local/share/perl5/Ora2pg.pm line 16849
FATAL : ERROR : invalid byte sequence for encoding "UTF8": 0x00
-------------------------------------------------------------
Do you have any commands for that that would be useful?
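Since PostgreSQL text types cannot contain the 0x00 byte, one hedged approach is to strip NUL characters on the Oracle side before export. A sketch to run against the Oracle source (table and column names are hypothetical):

```sql
-- Oracle-side cleanup: remove embedded CHR(0) bytes that
-- PostgreSQL rejects as "invalid byte sequence for encoding UTF8".
UPDATE content_table
   SET body = REPLACE(body, CHR(0))
 WHERE INSTR(body, CHR(0)) > 0;
COMMIT;
```

If modifying the source data isn't an option, check Ora2pg's configuration documentation for a data-transformation directive to replace the zero character during export instead.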
Doh! It seems PostgreSQL still prefers the btree for equality comparison. Temporarily removing the btree index (is it a composite or partial index?) to see what happens could be a viable approach, if it doesn't impact your system.
The fact is that no matter what I do, PostgreSQL seems to just skip the hash indexes.
I don't have a 2TB url table to test with, but googling I found this article. I'm not sure it fits your data, but generally urls are long.
https://evgeniydemin.medium.com/postgresql-indexes-hash-vs-b-tree-84b4f6aa6d61
I have a table full of webpage snapshots, with the following columns:
- crawl_id = unique id of the webpage snapshot
- uri = url of the webpage (not unique)
- taken_on = timestamp of the snapshot
99% of the lookups are done by exact matching on uri. The table is quite big (2 TB) and I would like to partition it. Is hash really the only option here for a sensible number of partitions, given my memory?
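Declarative hash partitioning (available since Postgres 11) is one way to split such a table so that an exact-match lookup on uri is pruned to a single partition at plan time. A sketch assuming the columns described above (table name and partition count are illustrative):

```sql
CREATE TABLE snapshots (
    crawl_id bigint,
    uri      text,
    taken_on timestamptz
) PARTITION BY HASH (uri);

-- One partition per remainder; exact-match WHERE uri = ...
-- queries touch only one of the 16 partitions.
CREATE TABLE snapshots_p0 PARTITION OF snapshots
    FOR VALUES WITH (MODULUS 16, REMAINDER 0);
-- ...repeat for REMAINDER 1 through 15.
```

Each partition then carries its own (much smaller) index on uri, which keeps the working set of any single lookup modest.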
Hello and welcome to this chat. Here we discuss PostgreSQL and provide best-effort voluntary support on PostgreSQL issues.
If you have any question, try asking kindly, including log files, error descriptions and perhaps what you're trying to do.
- 🚫 photos and screenshots
- 🚫 spam and offtopic
- ✅ questions with log files and clear explanation of what's gone wrong
AFAICT, Citus gets told what to distribute and how, incl. indexes etc.
Anyway, if you already know you'll have data distributed in a bad way, maybe sharding is the wrong approach altogether?
Also, using partitioning and postgres_fdw seems to be a feasible option nowadays. As you need to spend time on thinking about how to split your data anyway, consider this as well.
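A sketch of that postgres_fdw route: a locally partitioned table whose partitions are foreign tables living on other nodes (server names, hosts, and the `measurements` table are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Declare a remote node and credentials for it.
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.example.com', dbname 'app');
CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
    OPTIONS (user 'app', password 'secret');

-- A partition of a local partitioned table that physically
-- resides on the remote server (Postgres 11+ syntax).
CREATE FOREIGN TABLE measurements_p0
    PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 0)
    SERVER shard1;
```

Queries against the parent table are then routed to the right local or remote partition by the planner, which is effectively do-it-yourself sharding without an extra extension.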
How do you guys store/manage your procedures?
Using Git or something else?
RDS manages binary replication for you, and it sets up WAL archiving as part of it. You don't have any real control over the precise configuration of binary replication: you get what RDS sets up for you.