Hi,
We have a new 'large' table that will contain 20 to 30 million rows in the future.
From time to time we will have to clean up the data selectively using the statement "DELETE FROM zct_epr_exch WHERE acjaar = gv_acad", where ACJAAR is indexed. This statement is followed immediately by a "COMMIT WORK" statement.
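For clarity, the relevant piece of ABAP looks roughly like this (all names are the ones mentioned above; gv_acad holds the academic year to be removed):

  " Selective clean-up: remove all rows for one academic year
  DELETE FROM zct_epr_exch WHERE acjaar = gv_acad.
  COMMIT WORK.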
I have some questions about it.
Deleting approximately 15 million records from the table this way takes quite some time. We cannot use a program like RADCVBTC to delete the complete content. Is there a faster 'trick' than the statement above to speed up this deletion?
At this moment the deletion is a complete clean-up - there is only one ACJAAR available, since we are just starting larger tests. After deleting all lines from the table (15 million), it still takes 4 to 5 minutes to read anything from it (the result is always zero, since the table is "empty").
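For example, even a simple count (just an illustration, the actual reads vary) behaves like this:

  " Check on the now-"empty" table - lv_count is always 0,
  " but the SELECT itself still runs for 4 to 5 minutes
  DATA lv_count TYPE i.
  SELECT COUNT( * ) FROM zct_epr_exch INTO lv_count
    WHERE acjaar = gv_acad.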
When checking DBACOCKPIT we see that all indexes still exist on the empty table - both the primary and the secondary indexes still contain (a lot of) data!
Are we using the wrong technique to do this deletion?
Is there a way to avoid this problem and to make sure the index entries are removed as well? Of course we could reorganize the table manually, but that should not be necessary for this application.
Thanks for any help.
Kris