Hello Friends,
I have a query. The scenario is that I want to process a large database table, say the BUT000 table, and change some non-key field values.
A simple approach would be to:
- execute a SELECT statement and retrieve the records into an internal table,
- change the values in the internal table,
- then update the database table from the internal table (a minimal sketch follows below).
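In code, the naive version would look roughly like this (BU_SORT1 is just an example of a non-key field to change):

DATA lt_but000 TYPE STANDARD TABLE OF but000.
FIELD-SYMBOLS <ls_but000> TYPE but000.

" read everything, change in memory, write everything back
SELECT * FROM but000 INTO TABLE lt_but000.

LOOP AT lt_but000 ASSIGNING <ls_but000>.
  <ls_but000>-bu_sort1 = 'NEWVALUE'.
ENDLOOP.

UPDATE but000 FROM TABLE lt_but000.
COMMIT WORK.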
However, for large tables, performance issues will occur and the program might time out.
So one solution would be to run reports in the background in batches, each report processing 100 entries of the table.
The algorithm would be somewhat like this (simplified; the SUBMIT part is sketched below):

DATA: c1     TYPE cursor,
      lt_tab TYPE STANDARD TABLE OF but000.

OPEN CURSOR c1 FOR SELECT * FROM but000.
DO.
  FETCH NEXT CURSOR c1 INTO TABLE lt_tab PACKAGE SIZE 100.
  IF sy-subrc <> 0. EXIT. ENDIF.        " all records processed
  SUBMIT change_report ... AND RETURN.  " run as a background job, see below
ENDDO.
CLOSE CURSOR c1.
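Scheduling "in background mode" would, as far as I know, be done with the standard JOB_OPEN / JOB_CLOSE function modules around the SUBMIT; a minimal sketch (the job name is just an example):

DATA: lv_jobname  TYPE btcjob VALUE 'CHANGE_BUT000',
      lv_jobcount TYPE btcjobcnt.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

SUBMIT change_report VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.   " start the job immediately

The open question is then how change_report gets its 100-row package, which is what the three options below are about.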
- One way to achieve this is to use EXPORT/IMPORT TO/FROM MEMORY in ABAP. However, for this to work, all the SUBMITted reports would need to run in the same session as the scheduling program, because ABAP memory is bound to the current session. But we know that reports submitted as background jobs run in separate sessions on (in general) different work processes. So this is not the solution.
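For illustration, this is the ABAP MEMORY variant I mean; it only works when change_report runs in the same session, e.g. via a plain SUBMIT ... AND RETURN in dialog:

" scheduler
EXPORT lt_tab FROM lt_tab TO MEMORY ID 'ZPACKAGE'.
SUBMIT change_report AND RETURN.   " same session, so the memory ID is visible

" inside change_report
IMPORT lt_tab TO lt_tab FROM MEMORY ID 'ZPACKAGE'.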
- Another way is to use EXPORT/IMPORT TO/FROM SHARED MEMORY or SHARED BUFFER. However, this stores the internal tables in the application buffer of the application server. So in this case, all the background reports would need to run on the same application server in order to have access to the shared data.
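A sketch of that variant (the area 'zp' and the ID are arbitrary examples):

" scheduler, running on application server A
EXPORT lt_tab FROM lt_tab TO SHARED BUFFER indx(zp) ID 'ZPACKAGE'.

" change_report finds the data only if it also runs on server A
IMPORT lt_tab TO lt_tab FROM SHARED BUFFER indx(zp) ID 'ZPACKAGE'.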
- One more solution is to EXPORT/IMPORT the data to a database cluster table such as INDX:
EXPORT itab FROM itab TO DATABASE indx(ar) CLIENT sy-mandt ID job_number.
IMPORT itab TO itab FROM DATABASE indx(ar) CLIENT sy-mandt ID job_number.
However, I am not sure whether this works across application servers. My guess is that it should, because the data is stored in an ordinary database table (INDX) that every application server of the system can reach, so it should hold even if the background reports run on different application servers.
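For completeness, this is how I picture the full flow for point 3 (the area 'zp' and the ID pattern ZPACK_nnnn are just examples I made up):

DATA: lt_tab       TYPE STANDARD TABLE OF but000,
      lv_packno(4) TYPE n,
      lv_id        TYPE indx-srtfd.

" scheduler: one cluster record per package of 100 rows
CONCATENATE 'ZPACK_' lv_packno INTO lv_id.   " e.g. ZPACK_0001
EXPORT lt_tab FROM lt_tab TO DATABASE indx(zp) CLIENT sy-mandt ID lv_id.
COMMIT WORK.   " make the record visible before the job starts

" change_report: works from ANY application server, since INDX is in the database
IMPORT lt_tab TO lt_tab FROM DATABASE indx(zp) CLIENT sy-mandt ID lv_id.
" ... change the records and UPDATE the database table ...
DELETE FROM DATABASE indx(zp) CLIENT sy-mandt ID lv_id.   " clean up afterwards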
Could anyone tell me whether my understanding of the above 3 points is correct? Any comments on point 3?
Are there any more solutions to this scenario?
With Kind regards,
Sameer.