DR-3389
See also Database | Postgresql | Index
These are my notes on the Clever Roster Service and DR-3389.
Vacuum
Temp Table Creation
xxx
analyze_table
Function public.analyzetable ( schemaname.tablename )
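The body of this function isn't captured in these notes. A minimal sketch of what such a helper might look like, assuming it simply runs ANALYZE on the qualified table name it is given (the signature and behavior are assumptions, not the real implementation):

```sql
-- Assumed implementation; the real public.analyzetable may differ.
CREATE OR REPLACE FUNCTION public.analyzetable(tbl text)
RETURNS void AS $$
BEGIN
    -- tbl is expected to be a trusted "schemaname.tablename" string;
    -- no identifier quoting is attempted here.
    EXECUTE 'ANALYZE ' || tbl;
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT public.analyzetable('public.roster');
```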
==== Hot Tables ====
=== Search Results ===
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, it is a good idea to create appropriate indices and issue an ANALYZE on the temp table after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows, and this may result in poor performance should the temp table actually contain millions of rows.
answered Feb 17 '09 at 5:48 by vladr
http://stackoverflow.com/questions/486154/postgresql-temporary-tables
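The populate-then-index-then-ANALYZE pattern from that answer can be sketched as follows (table and column names are illustrative, not from the Clever Roster schema):

```sql
-- Illustrative names; sketch of the advice above.
CREATE TEMP TABLE tmp_roster AS
SELECT id, name
FROM   roster
WHERE  active;

-- Index the column(s) the later queries will filter on.
CREATE INDEX tmp_roster_name_idx ON tmp_roster (name);

-- Refresh statistics so the planner sees the real row count
-- instead of its ~1000-row default estimate for a new table.
ANALYZE tmp_roster;
```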
Re: temporary tables, indexes, and query plans
http://postgresql.nabble.com/temporary-tables-indexes-and-query-plans-td3239148.html
http://www.postgresql.org/docs/9.1/static/release-9-1.html - Release Date: 2011-09-12
http://postgresql.nabble.com/do-temporary-tables-have-hint-bits-td3263548.html
http://www.postgresql.org/message-id/00ae01c255f5$25c65aa0$0b01a8c0@johnpark.net
http://www.postgresql.org/message-id/AANLkTin5Z3ie1XBCNs=sjDL=nsbXXERVF1xVnxcE_108@mail.gmail.com
'HEAP-ONLY TUPLES'
Introducing HOT for non-developers
Of course, for HOT to work properly, PostgreSQL now has to follow each HOT chain when SELECTing tuples via an index, but the same number of tuple versions had to be read before HOT too. The difference is that with HOT the new versions of HOT-updated tuples are no longer reachable directly via the index, so PostgreSQL has to follow the chain when reading the heap.
http://pgsql.tapoueh.org/site/html/misc/hot.html
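The preconditions for HOT can be demonstrated directly: an UPDATE qualifies only if no indexed column changes and the page has room for the new tuple version. A sketch, with illustrative names (the fillfactor value is an assumption, not a recommendation from the article):

```sql
-- Hypothetical table; 'checked' is deliberately left unindexed.
CREATE TABLE roster (
    id      integer PRIMARY KEY,
    name    text,
    checked timestamptz
);

-- Leaving free space on each page lets the new tuple version
-- stay on the same page, one of the HOT prerequisites.
ALTER TABLE roster SET (fillfactor = 80);

-- Updates only the unindexed column, so this is a HOT candidate:
UPDATE roster SET checked = now() WHERE id = 42;

-- n_tup_hot_upd counts updates that took the HOT path.
SELECT n_tup_upd, n_tup_hot_upd
FROM   pg_stat_user_tables
WHERE  relname = 'roster';
```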
Why did Postgres UPDATE take 39 hours?
I had something similar happen recently with a table of 3.5 million rows. My update would never finish. After a lot of experimenting and frustration, I finally found the culprit. It turned out to be the indexes on the table being updated.
The solution was to drop all indexes on the table being updated before running the update statement. Once I did that, the update finished in a few minutes. Once the update completed, I re-created the indexes and was back in business. This probably won't help you at this point, but it may help someone else looking for answers.
I'd keep the indexes on the table you are pulling the data from. That one won't have to keep updating any indexes and should help with finding the data you want to update. It ran fine on a slow laptop.
answered Dec 8 '14 at 22:34
I am switching the best answer to you. Since I posted this, I have encountered other situations where indexes are the problem, even if the column being updated already has a value and has no index (!). It seems that Postgres has a problem with how it manages indexes on other columns. There's no reason for these other indexes to balloon an update's query time when the only alteration to the table is updating an unindexed column and you're not increasing the allocated space for any row of that column.
Aren Cambre Dec 9 '14 at 2:04
http://dba.stackexchange.com/questions/15720/why-did-postgres-update-take-39-hours
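The drop/update/recreate workaround from that thread can be sketched as follows (index and table names are hypothetical; this assumes the table can tolerate running without its secondary indexes for the duration):

```sql
-- Hypothetical names; sketch of the pattern described above.
DROP INDEX IF EXISTS roster_name_idx;

-- The bulk update no longer has to maintain the secondary index.
UPDATE roster
SET    checked = now()
WHERE  checked IS NULL;

-- Rebuild the index once the bulk update is done,
-- then refresh planner statistics after the large change.
CREATE INDEX roster_name_idx ON roster (name);
ANALYZE roster;
```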
==== References ====
Postgres VACUUM documentation: http://www.postgresql.org/docs/9.1/static/sql-vacuum.html
Postgres ANALYZE documentation: http://www.postgresql.org/docs/9.1/static/sql-analyze.html
Postgres explicit locking documentation: http://www.postgresql.org/docs/9.1/static/explicit-locking.html
