Problem with too much data in the database because of history records


Right now we are using CMDBuild version 2.5.1 and we are running into the problem of simple insert/update queries taking too long. Every week we insert around 50,000 new records and also delete around 50,000 records, and every week it takes longer. Right now we have a class with 2 domains; the class and both domains have 8 million rows each, and the history tables of each class have 3 million rows.

When I run an EXPLAIN ANALYZE in PostgreSQL I can tell that the triggers are taking most of the time. Especially when deleting, the trigger "CascadeDeleteOnRelations" takes a very long time. I have tried to tune the PostgreSQL configuration, but this did not have any impact.

I'm assuming it's a combination of the large number of rows in the tables and the triggers that need to execute every time a row gets inserted or updated. Is there any way for me to delete all the history rows and the rows that don't have the status 'A'? Or is there any other solution I should be looking at when storing this much data?
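To be concrete, something like the following is what I had in mind. Table and column names here are only an assumption based on the usual CMDBuild 2.x layout (a class table "MyClass", a "MyClass_history" table, relation tables under "Map", and logically deleted rows kept with Status 'N') — check your own schema first, take a backup, and test on a copy, since CMDBuild expects to manage these rows through its triggers:

```sql
-- ASSUMED names: "MyClass", "MyClass_history", "Map_MyDomain".
-- Run against a restored copy of the database first.
BEGIN;

-- Drop all historical versions of the class records.
TRUNCATE TABLE "MyClass_history";

-- Optionally keep CMDBuild's triggers out of the way while purging:
-- ALTER TABLE "MyClass" DISABLE TRIGGER USER;

-- Remove logically deleted rows (everything not in the active state 'A').
DELETE FROM "MyClass" WHERE "Status" <> 'A';

-- Same idea for the relation (domain) tables of this class.
DELETE FROM "Map_MyDomain" WHERE "Status" <> 'A';

-- ALTER TABLE "MyClass" ENABLE TRIGGER USER;
COMMIT;

-- Reclaim dead space and refresh planner statistics afterwards.
VACUUM (ANALYZE) "MyClass";
VACUUM (ANALYZE) "Map_MyDomain";
```

The VACUUM at the end matters: after deleting millions of rows the tables stay bloated until PostgreSQL reclaims the space, and stale statistics can keep the planner choosing bad plans.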
Kind regards,


Apparently, when updating a row to 'N', the trigger CascadeDeleteOnRelations fires. During this trigger it updates all rows with Status 'A' that have a relationship with the updated record, setting them to Status 'N'. But it checks everything under the table "Map", which is the superclass of every relation table in the database, therefore scanning ALL of the Map tables for rows to update. I made new, similar triggers in which I specified only the "Map" tables that actually needed updating (so only the Map tables that have relations with the class I was updating). This reduced my query time from 0.8 ms to 0.030 ms, which is a lot of time saved when updating almost 50,000 rows a week :slight_smile:.
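For anyone hitting the same problem, here is a rough sketch of the change. This is illustrative, not the exact CMDBuild source — the real trigger function is generated by CMDBuild, and the function, domain table, and column names below ("Map_MyDomain1", "IdObj1", etc.) are assumptions, so compare against your own trigger definition first:

```sql
-- Original behaviour (simplified): the cascade goes through the "Map"
-- superclass, so PostgreSQL table inheritance makes it scan EVERY
-- "Map_*" child table in the database:
--   UPDATE "Map" SET "Status" = 'N'
--     WHERE ("IdObj1" = OLD."Id" OR "IdObj2" = OLD."Id")
--       AND "Status" = 'A';

-- Replacement sketch: touch only the domain tables that can actually
-- hold relations for this class, and use ONLY so PostgreSQL does not
-- descend into further child tables.
CREATE OR REPLACE FUNCTION cascade_delete_myclass_relations()
RETURNS trigger AS $$
BEGIN
  IF NEW."Status" = 'N' THEN
    UPDATE ONLY "Map_MyDomain1" SET "Status" = 'N'
      WHERE ("IdObj1" = NEW."Id" OR "IdObj2" = NEW."Id")
        AND "Status" = 'A';
    UPDATE ONLY "Map_MyDomain2" SET "Status" = 'N'
      WHERE ("IdObj1" = NEW."Id" OR "IdObj2" = NEW."Id")
        AND "Status" = 'A';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Indexes on the object-id columns of those Map tables (if they are not indexed already) should help the remaining UPDATEs as well.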