English-speaking PostgreSQL public chat. This group is for discussions of PostgreSQL-related topics and strives to provide best-effort support as well.
If you don't have enough money, you do docker compose up.
If you have enough money, you do something complex on your own servers.
service, so no need to manage/administer it (especially at scale and/or with something like Greenplum)
I think people really go out of their way and disable some built-in checks, or maybe their AI is doing it for them
90% chance of not leaking keys if using an LLM, very convenient
11% of vibe-coded apps are leaking Supabase keys
Article, Comments
Creating Postgres patches using AI – pros and cons? (and as usual, we'll create something) – PostgresTV hacking sessions with Andrey, Kirk, and Nik – LIVE, join! https://www.youtube.com/watch?v=4KVaeJfWPas
Hello everyone
My team is using pgvector image for a postgres db pod in openshift
And our cluster admins have announced the removal of cgroups v1 from the cluster; only cgroups v2 will remain
Can anyone please tell me whether this pod would be compatible with the change, without any OOM kills or other issues?
Some more details are as below:
image : pgvector/pgvector:0.8.0-pg16
containers:
  - resources:
      limits:
        cpu: '80'
        memory: 200Gi
      requests:
        cpu: '20'
        memory: 50Gi
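Not a full answer to the OOM question, but as a quick sanity check you can see which cgroup version a node is actually running. This is a generic Linux check (assumes `/sys/fs/cgroup` is mounted, as it is on OpenShift nodes), not something specific to the pgvector image:

```shell
# Print the filesystem type mounted at the cgroup root:
#   cgroup2fs -> cgroups v2 (unified hierarchy)
#   tmpfs     -> cgroups v1
stat -fc %T /sys/fs/cgroup/
```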
By then they will have enslaved humanity (Q1156970) and can construct either a simulated human brain (Q7576855) or a simulated reality in which further discoveries can be made (Q83495). Indeed, some propose that the latter has already occurred (Q2742884).
It feels like it does that on occasion already with the hallucinations. It's quite frustrating when an LLM tells you methodX exists, and then you try to use it or look it up in the docs only to find it doesn't exist at all
Yeah except that LLMs will eventually run out of new stuff to crawl
Self-hosted scales better than cloud. Cloud generally has poor IOPS and throughput compared to bare metal. You can't run petabyte scale in the cloud at all.
I have a chat app
My messages table has rls disabled
Still, to send a message it requires your session ID, so I mean, how can people do this?
From the CEO (in comments):
Fwiw, the new secret keys are automatically revoked if they are pushed to GitHub, and GitHub is progressively rolling out push protection - to prevent them getting pushed in the first place. Of course, not everyone uses GitHub
People disabling RLS, or making RLS a simple pass-through, is a battle we are constantly fighting. We have made good strides here over the past 12 months:
https://supabase.com/blog/supabase-security-2025-retro
- event triggers to enforce RLS on all tables
- lints to scan for insecure rules
- ai to write secure policies (if they are too lazy or confused to do it themselves)
- big red labels when a table is exposed
- weekly emails with security alerts
- dashboard alerts and security advisors
- contractually requiring Vibe coding platforms to expose our Security Advisors if they are integrating with us
- red teaming customers that have egregious issues (this has been surprisingly effective, just harder to scale up)
I appreciate you creating this tool - as you can see we are also “tooling up” as much as we can. If there are any other things that you think we are missing let me know and we will prioritize it
We will be introducing new AuthZ patterns this year so I’m hoping that will also help
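As a sketch of the kind of policy those lints and advisors push for (the `messages` table and `user_id` column are hypothetical; `auth.uid()` is Supabase's helper for the current user's ID):

```sql
-- Enable RLS so the table is no longer a pass-through.
ALTER TABLE public.messages ENABLE ROW LEVEL SECURITY;

-- Users can read only their own messages.
CREATE POLICY "read own messages" ON public.messages
  FOR SELECT USING (auth.uid() = user_id);

-- Users can insert messages only as themselves.
CREATE POLICY "insert own messages" ON public.messages
  FOR INSERT WITH CHECK (auth.uid() = user_id);
```

With RLS enabled and no matching policy, requests through the anon key return no rows rather than the whole table.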
I mean
Just put the key in a .env file for Node.js servers
And on the frontend shouldn't you just use the anonKey? That shouldn't be a problem
It doesn't help in the long run. If you need PG in your project, just buy PG; if you need a data abstraction, buy something like Convex. Supabase just adds unnecessary complexity that could be replaced with tooling
Not saying this applies here, but whenever the word "vibe" is applied to coding or databases it makes me nervous.
I think you still use your resource requests and limits https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-2
Percona documentation website has several documents
Of course, if they're working on the former, they may have the DRAM crisis to deal with first, as their proposed solution initially involves 8640 GB scaling to 138420 TB. https://blocksandfiles.com/2025/06/06/sandia-turns-on-brain-like-storage-free-supercomputer/
How will the LLMs learn anything new 5 years from now, when no one is creating human content and no more questions are being properly answered?
LLMs are so powerful because they are trained on human knowledge
Once all text is replaced with LLM output, the machine will start feeding itself and overfit on nonsense
It's a shame because I think chat rooms, forums and places like StackOverflow will be a thing of the past now due to the easy access to LLMs. I must admit I can't remember the last time I posted on StackOverflow
2-3 word questions deserve an LLM response
"Supabase yes or no?"