Hi All,
I am facing a problem when I run my DTP with a package size of, say, 20,000.
I load data from a DSO into a Cube. In the Start Routine I also compare SOURCE_PACKAGE against the Employee master data and append records for any employees that are missing from the DSO (a simplified sketch of this logic follows below).
So if I run for, say, 50,000 records in the DSO and the Employee master has some 70,000 records, the Cube should contain the 50,000 plus the difference of 20,000, i.e. 70,000 in total.
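For clarity, here is a minimal sketch of what my Start Routine does. It is not my actual code: the master data table /BI0/PEMPLOYEE (the P table of 0EMPLOYEE) and the field name EMPLOYEE in the source structure are assumptions for illustration, and _ty_s_SC_1 is the usual generated source-structure type in a BW 7.x transformation.

* Start Routine (BW 7.x transformation) - simplified sketch.
* Assumption: the source structure has a field EMPLOYEE and the
* employee master data sits in the 0EMPLOYEE P table /BI0/PEMPLOYEE.
  TYPES: BEGIN OF ty_emp,
           employee TYPE /bi0/oiemployee,
         END OF ty_emp.

  DATA: lt_employee TYPE STANDARD TABLE OF ty_emp,
        ls_employee TYPE ty_emp,
        ls_source   TYPE _ty_s_sc_1.

* Read all active employees from master data.
  SELECT employee FROM /bi0/pemployee
         INTO TABLE lt_employee
         WHERE objvers = 'A'.

* Append a record for every employee not present in SOURCE_PACKAGE.
  LOOP AT lt_employee INTO ls_employee.
    READ TABLE source_package TRANSPORTING NO FIELDS
         WITH KEY employee = ls_employee-employee.
    IF sy-subrc <> 0.
      CLEAR ls_source.
      ls_source-employee = ls_employee-employee.
      APPEND ls_source TO source_package.
    ENDIF.
  ENDLOOP.

Note that a Start Routine executes once per data package, which is exactly why the package split matters for this logic.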
I know I could change the design to read directly from the Employee master data, but that option is ruled out for business reasons.
Hence my question about parallel processing: when my job runs with 20,000 records per package, will the DSO data be divided into packages of that size, and will the system try to process all the packages in parallel? Please explain in some detail.
I appreciate your thoughts. Also, please let me know which routine is better performance-wise, the Start Routine or the End Routine.
I understand that when the data modified in the Start Routine needs to be used anywhere in the field-level routines, we usually write the code in the Start Routine.
But are there any thoughts specific to this scenario of moving data from a DSO to a Cube and adding new records based on Employee master data when they do not exist in the DSO? Can anyone elaborate, please?
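For reference, the End Routine variant I am weighing would be roughly the sketch below, working on RESULT_PACKAGE instead of SOURCE_PACKAGE. Again, the field name EMPLOYEE and the generated target-structure type _ty_s_TG_1 are assumptions for illustration:

* End Routine variant - same idea, but against RESULT_PACKAGE.
  TYPES: BEGIN OF ty_emp,
           employee TYPE /bi0/oiemployee,
         END OF ty_emp.

  DATA: lt_employee TYPE STANDARD TABLE OF ty_emp,
        ls_employee TYPE ty_emp,
        ls_result   TYPE _ty_s_tg_1.

* Read all active employees from master data.
  SELECT employee FROM /bi0/pemployee
         INTO TABLE lt_employee
         WHERE objvers = 'A'.

* Append a record for every employee not present in RESULT_PACKAGE.
  LOOP AT lt_employee INTO ls_employee.
    READ TABLE result_package TRANSPORTING NO FIELDS
         WITH KEY employee = ls_employee-employee.
    IF sy-subrc <> 0.
      CLEAR ls_result.
      ls_result-employee = ls_employee-employee.
      APPEND ls_result TO result_package.
    ENDIF.
  ENDLOOP.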
Kind Regards,
Krishna