====== DR-3389 ======

page was renamed from dr-3389

See also [[database:

These are my notes on the Clever Roster Service and [[https://
==== Vacuum ====

https://
==== Temp Table Creation ====

xxx
==== analyze_table ====

Function public.analyze_table ( '
==== Hot Tables ====

=== Search Results ===
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, it is a good idea to create appropriate indices and issue an ANALYZE on the temp table in question after you're done inserting into it. By default, the cost-based optimizer will assume that a newly created temp table has ~1000 rows, and this may result in poor performance should the temp table actually contain millions of rows.
http://

Re: temporary tables, indexes, and query plans

http://

http://

http://

http://

http://
Introducing HOT for non-developers

Of course, for HOT to work properly, PostgreSQL now has to follow each HOT chain when SELECT'ing rows.
http://

Why did Postgres UPDATE take 39 hours?
I had something similar happen recently with a table of 3.5 million rows. My update would never finish. After a lot of experimenting and frustration, I finally found the culprit: the indexes on the table being updated.

The solution was to drop all indexes on the table being updated before running the update statement. Once I did that, the update finished in a few minutes. Once the update completed, I re-created the indexes and was back in business. This probably won't help you at this point, but it may help someone else looking for answers.

I'd keep the indexes on the table you are pulling the data from. That table won't have to keep updating any indexes, and they should help with finding the data you want to update. It ran fine on a slow laptop.
I am switching the best answer to you. Since I posted this, I have encountered other situations where indexes are the problem, even if the column being updated already has a value and has no index (!). It seems that Postgres has a problem with how it manages indexes on other columns. There'

(This is consistent with the HOT behavior above: when an update cannot go HOT, Postgres has to add new entries to every index on the table, including indexes on columns that did not change.)

http://
==== References ====

Postgres Vacuum documentation:

Postgres Analyze documentation:

Postgres Locking documentation: