English-speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
I've encountered something that's not being optimized and I'm wondering why. I used PARTITION BY RANGE (sub_id)
on a table, then created 50 different partitions, spanning one million rows each (from 0 to 1M, from 1M to 2M and so on...).
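For reference, a minimal sketch of that kind of setup; the table name and column types here are illustrative, only sub_id comes from the message above:

CREATE TABLE events (
    sub_id  bigint NOT NULL,
    payload text
) PARTITION BY RANGE (sub_id);

CREATE TABLE events_p00 PARTITION OF events FOR VALUES FROM (0) TO (1000000);
CREATE TABLE events_p01 PARTITION OF events FOR VALUES FROM (1000000) TO (2000000);
-- ... and so on, up to the 50th partition (49000000 to 50000000)

Whether the planner can skip partitions at all depends on the query actually filtering on sub_id (partition pruning, on by default via enable_partition_pruning).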
You can encrypt as much as you want, but while the database is running you still have to make sure your attack surface is minimal
That is assuming you are the owner of your own infrastructure. The idea of confidential containers/VMs is that you can host in the cloud but still have total confidence that your hosting/bundespolizei will not be able to see your data
To get around a LUKS mount, you need to be root. Or postgres. Both give you access to the DB.
IMNSHO the whole TDE/encryption-at-rest thing is good for one - and only one - situation: someone manages to steal your hard drive (or you dispose of it incorrectly). LUKS will cover that for you, no TDE necessary. Probably even the built-in encryption that binds your HDD/SSD to your controller will suffice...
It will almost certainly be much easier to just hack the broken J2EE or PHP app in front and carry off all your data. Because "only a database with full permissions for the app user is supported" and other calamities in our industry.
If you go that far, yes. Otherwise it is just a regular container.
You should be able to use SEV and have the container "boot" from a secure signed image
What Postgres version is it? The latest version has some improvements regarding low-level LWLocks.
Have you considered caching that table somewhere like Valkey, since it's small? The optimizer will prefer a seq scan if your queries fetch a large fraction of the data from that table; that's just logical, so I would also look into the business logic to see if it can be improved.
Hi All
Can anyone help me with configuring Postgres database backups on standbys using pgBackRest?
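Not a full answer, but a minimal pgBackRest sketch for taking backups from a standby; the stanza name, hosts and paths below are illustrative, so adapt them to your setup:

# /etc/pgbackrest/pgbackrest.conf on the repository host
[global]
repo1-path=/var/lib/pgbackrest
backup-standby=y

[demo]
pg1-host=primary.example.com
pg1-path=/var/lib/postgresql/14/main
pg2-host=standby.example.com
pg2-path=/var/lib/postgresql/14/main

# create the stanza, then take a backup
pgbackrest --stanza=demo stanza-create
pgbackrest --stanza=demo --type=full backup

Note that even with backup-standby=y, pgBackRest still needs a connection to the primary to start/stop the backup, and the primary's archive_command has to push WAL to the repository (pgbackrest archive-push).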
That's essentially what I said.
If you want to encrypt the data in your database, encrypt it somewhere outside. E2E, ideally.
Still, all the trust now shifts to you being able to properly secure your database and accompanying services
Anyone who can access the database can read the data, even with TDE. It all depends on what attack vectors you wish to guard against.
You're basically shielded by the TEE; even the hypervisor cannot see what's inside
How does that block someone from entering the container?
Still, anyone with access to the OS - or the container, or the jail - can read the data.
I wonder if you can just create a jail with postgresql and nothing else
Two different problems. In transit it's TLS. At rest is currently not supported in PG; several companies offer TDE though.
Start with defining indicators (queries per second, average query response time, slow query threshold and limits, etc.) and their target values.
Then make a good test stand to measure those indicators (or something close to them, as this would probably not be a production database).
Thereafter you would be able to say something precise about your query performance.
Then you could take a maximum-options EXPLAIN (probably with auto_explain) and work out what could be optimised. Or talk with us to get some new ideas about optimisation.
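If you go the auto_explain route, a minimal configuration sketch; the threshold is illustrative, and log_analyze adds per-query overhead, so try it on the test stand first:

# postgresql.conf -- changing shared_preload_libraries needs a restart
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'
auto_explain.log_analyze = on
auto_explain.log_buffers = on
auto_explain.log_nested_statements = on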
It is PG 14; we have plans to move to 17, but it will take some time...
Yes, surely this depends on data selectivity.
Unfortunately moving it to Valkey is not an option due to business logic: it performs a permission lookup while other DB operations run asynchronously in the same call.
👋 hey everyone,
LWLock:BufferMapping waits.
We are encountering spikes in these locks at a few different times.
The culprit SQL is a lightweight SELECT using prefix matches on arrays, but it ends up preferring a seq scan.
Forcing an index is not proving optimal either. The table is very small (~500k records) & stays in shared_buffers for sure.
This is a very large and highly transactional DB (multi-TB in size) with ~80-100k executions per minute.
So far, I have tried:
1. different index optimisations with no luck; the optimiser prefers a seq scan.
2. increased shared_buffers (from 30% to 40%), which is ~300+ GB of memory. We do use huge pages, and are planning to increase the page size from 2 MB to 1 GB.
3. no bloat (read-heavy table); vacuum + analyse done.
I am keen to know if there is anything else we can try out, other than distributing load across different instances (e.g. a sync replica, or a sharded DB with a copy of this small reference table maintained elsewhere)?
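For what it's worth, a quick way to confirm during a spike that the waits really are LWLock:BufferMapping, and that the table really does sit in shared_buffers, is something like this (the table name is a placeholder; the second query needs CREATE EXTENSION pg_buffercache):

-- sample wait events while the spike is happening
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY 1, 2
ORDER BY count(*) DESC;

-- how many buffers of the reference table are cached right now
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
WHERE c.relname = 'your_reference_table'
GROUP BY c.relname;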
Cause that could have turned up
https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-DISK