We have an ArcSDE v10 / PostgreSQL v8.4 geodatabase that has been performing badly for the last few weeks when using long transactions for web editing. If we switch to short transactions (unregister as versioned), the system runs with good performance. In any case, the system can serve up to approximately 20 concurrent web editors without any hiccups. Despite no feature datasets being registered as versioned, the system is still producing state lineages. How come?
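To verify that states really are being created, we have been watching the repository tables directly. A minimal sketch, assuming a default ArcSDE-on-PostgreSQL install where the repository tables live in the sde schema (the connection string is a placeholder for our environment):

    import psycopg2

    # Connection string is a placeholder.
    conn = psycopg2.connect("dbname=sdegdb user=sde host=gisdb")
    cur = conn.cursor()

    # In a default ArcSDE 10 / PostgreSQL install the repository tables
    # live in the 'sde' schema. If these counts keep growing between
    # compresses, something is still creating states.
    cur.execute("SELECT count(*), max(state_id) FROM sde.sde_states")
    print("states: count=%s, max state_id=%s" % cur.fetchone())

    cur.execute("SELECT count(*) FROM sde.sde_state_lineages")
    print("lineage rows: %s" % cur.fetchone()[0])

    cur.close()
    conn.close()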
The database is compressed twice a day, and autovacuum runs with its default settings. A year ago we experienced performance problems and were advised to run VACUUM FULL with ANALYZE. It was very helpful: the database performed well again, so we have done this on a weekly basis ever since. However, we have noticed that the PostgreSQL documentation states that this method can degrade performance over time because indexes are not handled properly.
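Our weekly job boils down to something like the following sketch (the table list is illustrative, as the real job loops over the whole schema; the REINDEX step is included because on 8.4 rebuilding indexes after VACUUM FULL is what addresses the index degradation the documentation warns about):

    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=sdegdb user=postgres host=gisdb")
    # VACUUM cannot run inside a transaction block, so use autocommit.
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()

    # Illustrative table list; in practice we iterate over every table.
    for table in ("sde.sde_states", "sde.sde_state_lineages"):
        cur.execute("VACUUM FULL ANALYZE " + table)
        # On 8.4, VACUUM FULL can bloat indexes; rebuild them afterwards.
        cur.execute("REINDEX TABLE " + table)

    cur.close()
    conn.close()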
The SDE geodatabase has been part of a two-way replica with another SDE geodatabase for almost half a year. Data was synchronized daily by an automated process (Python scripts). One month ago the replica was unregistered in Replica Manager by mistake, without a final synchronize; replication was no longer needed anyway. Data is exported every day, so no data was lost.
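Because a replica that is unregistered without a final synchronize can leave orphaned versions behind, and any version still referencing an old state pins that state's whole lineage, we also list the remaining versions. A sketch; the column names are assumptions based on the default sde schema:

    import psycopg2

    conn = psycopg2.connect("dbname=sdegdb user=sde host=gisdb")
    cur = conn.cursor()

    # Any version other than DEFAULT that still points at an old state
    # pins that state's entire lineage and blocks a full compress.
    cur.execute("SELECT name, owner, state_id FROM sde.sde_versions ORDER BY name")
    for name, owner, state_id in cur.fetchall():
        print("%s.%s -> state %s" % (owner, name, state_id))

    cur.close()
    conn.close()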
Although it is possible to do a full compress to state 0, the sde_state_lineages table is still rather large: about 8 GB even after the SDE database has been vacuumed. It is not possible to shrink it below that size. With no datasets versioned anymore and no replica versions left, can that be right? Could there be hidden system tables containing stale data? If so, is it possible to clean the system up? And if the system tables are not the problem, what is the most likely cause? Any suggestions are welcome.
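To tell live rows from dead space that plain VACUUM cannot give back to the operating system, we compare the row count with the on-disk footprint (a sketch; pg_total_relation_size and pg_size_pretty are standard PostgreSQL functions available in 8.4):

    import psycopg2

    conn = psycopg2.connect("dbname=sdegdb user=sde host=gisdb")
    cur = conn.cursor()

    # Live rows vs. on-disk footprint: a small row count with a
    # multi-GB footprint means bloat (dead space), not hidden data.
    cur.execute("SELECT count(*) FROM sde.sde_state_lineages")
    rows = cur.fetchone()[0]

    cur.execute("SELECT pg_size_pretty("
                "pg_total_relation_size('sde.sde_state_lineages'))")
    size = cur.fetchone()[0]

    print("sde_state_lineages: %s live rows, %s on disk" % (rows, size))

    cur.close()
    conn.close()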
We have tried to export/import the data in the DB. No effect.
Features in the SDE geodatabase: approx. 800,000 polygons.