English-speaking PostgreSQL public chat. This group is for discussions on PostgreSQL-related topics and strives to provide best-effort support as well.
PostgreSQL Person of the Week interview with: Doug Ortiz
https://postgresql.life/post/doug_ortiz/
In fact, I don't want to make suggestions before seeing the table structure and the "fast" plan, because there are drastically different possibilities. It may be easy to select all that by some other index, and it may be easy to get 5 random records, yet hard to get anything usable from the table.
The optimizer is forced to use a backward index scan just because, after filtering, it sorts the data descending, so yeah, logically it takes a minute or maybe even more.
However, when I remove the ORDER BY clause, the query runs as expected and in no time.
My question is: which helps more here, a partial or a composite index?
For a composite index: my existing filtered index already includes the date and account number columns, but it doesn't contain the trxnrecordstatus column....
So I guess if I add that column too, arrange the columns so the first one is the account number, i.e. (account number, date, trxnrecordstatus), and also put the DESC keyword in the index itself, like this: (account number, date DESC, trxnrecordstatus)
So my take on this: I guess I won't use the ORDER BY clause, so I guess the composite index is the most relevant option here, I guess....
What do you think?
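For what it's worth, the two options being weighed could be sketched as below. This is only a sketch: the index names are made up, and the table/column names (stcpay.statements_merchants, accountnumber, transacalendardatetime, txnrecordstatus) are taken from the query posted elsewhere in this thread.

```sql
-- Hypothetical composite index: equality column first, then the
-- range/sort column with DESC matching the ORDER BY ... DESC,
-- with the status column carried along for the filter.
CREATE INDEX CONCURRENTLY statements_merchants_acct_time_status_idx
    ON stcpay.statements_merchants (accountnumber, transacalendardatetime DESC)
    INCLUDE (txnrecordstatus);

-- Hypothetical partial index, if only these status values are queried:
CREATE INDEX CONCURRENTLY statements_merchants_acct_time_partial_idx
    ON stcpay.statements_merchants (accountnumber, transacalendardatetime DESC)
    WHERE txnrecordstatus IS NULL OR txnrecordstatus IN ('INAU', 'REVE');
```

Note that a plain ascending (accountnumber, transacalendardatetime) index would also serve the DESC ordering via a backward scan; the important part is that accountnumber comes first, so the scan is confined to one account.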
And while I'm waiting for the table definition, it would also be beneficial to show a correct
EXPLAIN (ANALYZE, VERBOSE, BUFFERS, COSTS, SETTINGS, TIMING, WAL) <query>;
What kind of failure, btw? It shouldn't fail unless you specified some limit on query time.
This may indicate serious database corruption.
I'm confused about the use of ORDER BY x versus ORDER BY x + 0. Does x + 0 mean the table is already sorted, since it is used when no additional sorting is needed?
It's a very common performance issue that occurs in several RDBMSs.
Read this
https://pganalyze.com/blog/5mins-postgres-planner-order-by-limit
1) PostgreSQL version: 13
2) query text:
SELECT *
FROM stcpay.statements_merchants merchants0
WHERE merchants0.accountnumber = '1217796648'
AND merchants0.transacalendardatetime >= 1709977884
AND merchants0.transacalendardatetime <= 1741399845
AND (merchants0.txnrecordstatus IS NULL
OR merchants0.txnrecordstatus IN ('INAU', 'REVE'))
ORDER BY merchants0.transacalendardatetime DESC
LIMIT 5;
3) query plan using plain EXPLAIN, since EXPLAIN ANALYZE took too much time and failed at the end:
Limit (cost=0.57..1982.10 rows=5 width=1025)
-> Index Scan Backward using statements_merchants_temp_transacalendardatetime_accountnum_idx on statements_merchants merchants0
(cost=0.57..1371217.92 rows=3460 width=1025)
Index Cond: ((transacalendardatetime >= 1709977884::numeric)
AND (transacalendardatetime <= 1741399845::numeric)
AND (accountnumber = '1217796648'))
Filter: ((txnrecordstatus IS NULL) OR ((txnrecordstatus)::text = ANY (ARRAY['INAU','REVE']::text[])))
To start a smalltalk about query performance, please provide the following:
1) The PostgreSQL version. Ideally, you can fetch it through:
SELECT version();
2) The query text.
3) The execution plan. Ideally:
EXPLAIN (ANALYZE, VERBOSE, BUFFERS, COSTS, SETTINGS, TIMING, WAL) <query>;
If the query takes too long with ANALYZE and the other options of EXPLAIN, please omit them. In that case, it would be helpful to run simplified or partial queries to better understand the data volumes and execution speeds.
4) The table definition. In psql, the following command will work: \d+ <table>
hello,
I have an issue with a query that takes maybe a minute to finish.
The query is simple, and it uses an index scan on a btree index over two columns (date and account number), as I'm searching using both.
The cost of the query on dalibo is crazy (780,000), and it suggests the problem is in my query's index scan.
What procedure or steps should I take next, please?
Or check out the pg.eu system; we literally just voted there a few days ago.
One trick I have used to work around this issue is to split the query into 2 (without the sort and limit):
WITH qry AS MATERIALIZED (<query>)
SELECT * FROM qry ORDER BY x DESC LIMIT 5;
This will (should) materialize the fast query, and then sort and limit just the result set.
Without the MATERIALIZED keyword, the planner would likely rewrite (inline) the query and recreate the original problem.
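Applied to the query posted in this thread, the split might look like the sketch below (AS MATERIALIZED requires PostgreSQL 12 or later, so it is available on the version 13 mentioned here):

```sql
-- Sketch: the MATERIALIZED fence stops the planner from inlining the
-- CTE, so it can use the filtering index for the inner query and only
-- sort the small materialized result set afterwards.
WITH qry AS MATERIALIZED (
    SELECT *
    FROM stcpay.statements_merchants merchants0
    WHERE merchants0.accountnumber = '1217796648'
      AND merchants0.transacalendardatetime >= 1709977884
      AND merchants0.transacalendardatetime <= 1741399845
      AND (merchants0.txnrecordstatus IS NULL
           OR merchants0.txnrecordstatus IN ('INAU', 'REVE'))
)
SELECT * FROM qry
ORDER BY transacalendardatetime DESC
LIMIT 5;
```

The caveat is that the CTE materializes every matching row before the LIMIT applies, so this only pays off when the filtered result set is reasonably small.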
A composite index may work, and adding zero to the ORDER BY may work, but you have not posted the table structure (\d+) nor EXPLAIN with ANALYZE + BUFFERS, so any of these options may not work as well.
/channel/pg_sql/93157
(But you could experiment, of course. And look at the explains! This is practically the only introspection into the real algorithms of this system.)
Often it is easy to force the planner not to use specific indexes, and therefore get it to use the needed ones.
However, your mileage may vary.
No, there is no corruption, but when I saw it was taking some time, I just used plain EXPLAIN instead.
The planner gets confused by the presence of ORDER BY and LIMIT together. It happens in Oracle too, which also uses a cost-based optimizer (CBO). ORDER BY x + 0 means any index that is best for sorting is taken out of the equation during planning, so the CBO can move on to an index holding the filter-criteria columns.
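Applied to the query in this thread, the trick could be sketched as follows. The + 0 makes the ORDER BY expression no longer match any index's sort order (transacalendardatetime is numeric here, per the plan's ::numeric casts, so + 0 is valid), which stops the planner from picking an index purely for the sort:

```sql
-- Sketch: ORDER BY <column> + 0 defeats index-based ordering, so the
-- planner chooses an index for filtering and adds an explicit sort.
SELECT *
FROM stcpay.statements_merchants merchants0
WHERE merchants0.accountnumber = '1217796648'
  AND merchants0.transacalendardatetime >= 1709977884
  AND merchants0.transacalendardatetime <= 1741399845
  AND (merchants0.txnrecordstatus IS NULL
       OR merchants0.txnrecordstatus IN ('INAU', 'REVE'))
ORDER BY merchants0.transacalendardatetime + 0 DESC
LIMIT 5;
```

The trade-off is that the rows matching the filter are sorted explicitly, so this wins only when the filter is selective enough that the sort is cheap.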
The documentation says that when filtering, it would be better to have the primary key included in the filtering WHERE clause condition.... but in my case I have the account number, which is not the primary key here.
It's taking a different index than the one starting with account number. You didn't send \d+, so it's just shooting in the dark.
note: the query runs fine if I omit the ORDER BY section
Can I send a screenshot of the query and the plan here instead, please?
The first thing to look at in such a case is usually "Rows Removed by Filter", assuming you have an additional condition in your query.
Start with an "EXPLAIN ANALYZE" on https://explain.dalibo.com/, then we can see what's happening.
https://github.com/pgeu/pgeu-system?tab=readme-ov-file#elections