Discussion: trouble with upgrade from 9.0 (many schemas and tables)
Groshev Andrey
2013-01-30 05:23:22 UTC
Hello!
I am upgrading our databases to version 9.1.
Today I ran into a new problem.
The database itself is not very big (60 GB), but it has ~164,000 tables in 1,260 schemas.

I have tried pg_upgrade, pg_dumpall, and pg_dump.
But they all run for a very, very long time.
So long that I ran out of patience.
One pg_dump run lasted almost a day and then failed with "out of memory"
(a colleague ran that attempt, so I cannot vouch for the exact details).

I also tried a schema-only dump, but even that had not finished after more than 17 hours, and I ran out of patience.

If I am not mistaken, all of these programs follow roughly the same algorithm:
first they lock every table in every schema, then the indexes, and so on.

So I do not see a way to significantly speed up the process.

The only idea I have come up with is to dump each schema separately.
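Something along these lines (just a rough sketch; "mydb" stands in for the real database name):

    # One pg_dump run per schema; the schema list comes from pg_namespace.
    psql -At -c "SELECT nspname FROM pg_namespace
                 WHERE nspname NOT LIKE 'pg_%'
                   AND nspname <> 'information_schema'" mydb |
    while read -r schema; do
        pg_dump -n "$schema" -Fc -f "dump_${schema}.dump" mydb
    done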
But would that be the right approach? Can anyone suggest what to do?
Jeff Janes
2013-01-30 14:47:38 UTC
Post by Groshev Andrey
Hello!
I am upgrading our databases to version 9.1.
Today I ran into a new problem.
The database itself is not very big (60 GB), but it has ~164,000 tables in 1,260 schemas.
Are the schemas all about the same size (in number of tables)?

There are several places in the code which are quadratic in the number
of tables being dumped and restored. There have been many fixes
recently in 9.2 and especially (future) 9.3, but unfortunately you
have to upgrade in order to get those benefits.
Post by Groshev Andrey
I have tried pg_upgrade, pg_dumpall, and pg_dump.
But they all run for a very, very long time.
So long that I ran out of patience.
One pg_dump run lasted almost a day and then failed with "out of memory"
Was it exactly that, or was it about shared memory for holding locks?
...
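If it was the lock table (the error then usually reads "out of shared
memory", with a hint to increase max_locks_per_transaction), raising that
setting in postgresql.conf should get you past it, since pg_dump takes an
ACCESS SHARE lock on every table it dumps. An illustrative value, not a
tuned recommendation (a server restart is required):

    # postgresql.conf
    # The shared lock table has room for roughly
    #   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # locks, and dumping ~164,000 tables needs that many locks at once.
    max_locks_per_transaction = 3000    # default is 64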
Post by Groshev Andrey
The only idea I have come up with is to dump each schema separately.
But would that be the right approach?
Unfortunately, that might be your best option to get around the
quadratic behavior.

You would probably want to use the pg_dump from 9.2, as there are
improvements in that version of pg_dump to speed up partial dumps.
You can use pg_dump from 9.2 against server 9.0 and still get the
improvements. But that means you should be upgrading to 9.2 rather
than 9.1. (Which you should probably do anyway unless you have a
specific reason not to.)
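A rough sketch of that, assuming the 9.2 binaries are installed under
/usr/pgsql-9.2/bin and "oldhost"/"newhost"/"mydb" stand in for your real
connection details:

    # Dump one schema at a time from the old 9.0 server with 9.2's pg_dump...
    /usr/pgsql-9.2/bin/pg_dump -h oldhost -n "$schema" -Fc \
        -f "dump_${schema}.dump" mydb
    # ...then restore into the new cluster; with the custom format,
    # pg_restore can load each archive using several parallel jobs.
    /usr/pgsql-9.2/bin/pg_restore -h newhost -d mydb -j 4 "dump_${schema}.dump"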

Cheers,

Jeff
Groshev Andrey
2013-01-31 06:47:44 UTC
Post by Jeff Janes
 Hello!
 I am upgrading our databases to version 9.1.
 Today I ran into a new problem.
 The database itself is not very big (60 GB), but it has ~164,000 tables in 1,260 schemas.
Are the schemas all about the same size (in number of tables)?
Yes: about 130 tables, 120 triggers, and 700 functions in each.
Only a few schemas differ from that.
Post by Jeff Janes
There are several places in the code which are quadratic in the number
of tables being dumped and restored.  There have been many fixes
recently in 9.2 and especially (future) 9.3, but unfortunately you
have to upgrade in order to get those benefits.
 I have tried pg_upgrade, pg_dumpall, and pg_dump.
 But they all run for a very, very long time.
 So long that I ran out of patience.
 One pg_dump run lasted almost a day and then failed with "out of memory"
Was it exactly that, or was it about shared memory for holding locks?
...
Unfortunately, I cannot say anything more about the memory problem;
that run was my colleague's.
My own attempt never got that far: I gave up after waiting 17 hours.
Post by Jeff Janes
 The only idea I have come up with is to dump each schema separately.
 But would that be the right approach?
Unfortunately, that might be your best option to get around the
quadratic behavior.
Yes, a few lines in a shell script.
That is how this problem was solved here before me as well.
Post by Jeff Janes
You would probably want to use the pg_dump from 9.2, as there are
improvements in that version of pg_dump to speed up partial dumps.
You can use pg_dump from 9.2  against server 9.0 and still get the
improvements.  But that means you should be upgrading to 9.2 rather
than 9.1.  (Which you should probably do anyway unless you have a
specific reason not to.)
Cheers,
Jeff
I will think it over now. Should I try going 9.0 -> 9.2 -> 9.1?

Thank you!
Jeff Janes
2013-02-02 01:21:54 UTC
Post by Groshev Andrey
Post by Jeff Janes
You would probably want to use the pg_dump from 9.2, as there are
improvements in that version of pg_dump to speed up partial dumps.
You can use pg_dump from 9.2 against server 9.0 and still get the
improvements. But that means you should be upgrading to 9.2 rather
than 9.1. (Which you should probably do anyway unless you have a
specific reason not to.)
I will think it over now. Should I try going 9.0 -> 9.2 -> 9.1?
Downgrading is generally not supported. Why not just stick with 9.2 as the
end target? If you really need 9.1, I would not entertain the thought of
using 9.2's pg_dump and then trying to go backwards, at least not until I
tried 9.1's dump and proved that it was a serious bottleneck.

Cheers,

Jeff

Tom Lane
2013-01-30 14:54:20 UTC
Post by Groshev Andrey
I am upgrading our databases to version 9.1.
Today I ran into a new problem.
The database itself is not very big (60 GB), but it has ~164,000 tables in 1,260 schemas.
I have tried pg_upgrade, pg_dumpall, and pg_dump.
But they all run for a very, very long time.
This isn't very clear. Are you saying that you successfully updated to
9.1 and now you have a pg_dump performance problem you didn't have before?
Or are you still on 9.0 and you are just now finding out that you can't
pg_dump (that is, you don't have any routine backup policy)?

Also, 9.0.what and 9.1.what? We've fixed a number of pg_dump
performance issues with huge schemas in the last year or two, but it's
impossible to guess whether you have those fixes.
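If you are not sure, asking each server directly will show the exact
version:

    psql -c "SELECT version();"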

regards, tom lane