Monday, February 15, 2010

Data Loads and Loading Process

Loading process:

1. Select the source data target (in your case, X) and, from the context menu, click Create Export DataSource.
A DataSource (InfoSource) named 8(name of data target) will be generated.

2. In the Modelling menu, click Source Systems, select the logical source system of your BW server, and from the context menu click Replicate DataSources.

3. In Data Modelling, click InfoSources and search for the InfoSource 8(name of data target). If it is not found, refresh and search again. If it is still not found, click InfoSources in Data Modelling, select InfoSources again in the right-hand window, and from the context menu click Insert Lost Nodes.
Search again and the InfoSource will be found.

4. Now go to the receiving data targets (in your case Y1, Y2, Y3) and create update rules.
In the next screen, select the InfoCube radio button and enter the name of the source data target (in your case, X). Click the Next Screen button (Shift+F7), select the Addition radio button, then select the Source Key Field radio button and map the key fields from the source cube to the target cube.

5. In Data Modelling, click InfoSources, select the InfoSource you replicated earlier, and create an InfoPackage to load the data.

Tuesday, January 26, 2010

SAPBW interview questions

What are the extractor types?
Application Specific
o BW Content Extractors: FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors: LIS, FI-SL, CO-PA
Cross-Application (Generic Extractors)
o DB View, InfoSet, Function Module

What are the steps involved in LO Extraction?
The steps are:
o RSA5 Select the DataSources
o LBWE Maintain DataSources and Activate Extract Structures
o LBWG Delete Setup Tables
o OLI*BW Fill the setup tables
o RSA3 Check extraction and the data in Setup tables
o LBWQ Check the extraction queue
o LBWF Log for LO Extract Structures
o RSA7 BW Delta Queue Monitor

How to create a connection with LIS InfoStructures?
LBW0 Connecting LIS InfoStructures to BW

What is the difference between ODS and InfoCube and MultiProvider?
• ODS: Provides granular data, allows overwrites, and stores data in transparent tables; ideal for drilldown and RRI.
• CUBE: Follows the star schema; we can only append data; ideal for primary reporting.
• MultiProvider: Does not contain physical data. It allows access to data from different InfoProviders (Cube, ODS, InfoObject) and is also preferred for reporting.

What are Start routines, Transfer routines and Update routines?
Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
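As a minimal sketch, this is roughly the skeleton BW 3.x generates for an update-rule start routine; the communication structure /BIC/CS8ZSALES and the field /BIC/ZSTATUS are illustrative names, not from a real model:

* Start routine skeleton (BW 3.x update rules); names are illustrative.
FORM startup
  TABLES   MONITOR       STRUCTURE RSMONITOR    "user-defined monitoring
           MONITOR_RECNO STRUCTURE RSMONITORS   "monitoring per record
           DATA_PACKAGE  STRUCTURE /BIC/CS8ZSALES
  USING    RECORD_ALL    LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING ABORT         LIKE SY-SUBRC.         "ABORT <> 0 cancels the update

* Preliminary calculation over the whole package, e.g. dropping records
* that should never reach the update rules.
  DELETE DATA_PACKAGE WHERE /BIC/ZSTATUS = 'X'.

* Results needed by the other routines are stored in global data
* declared in the routine's global area, e.g. a lookup table.
ENDFORM.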
Transfer / Update Routines: They are defined at the InfoObject level. Like the start routine, they are independent of the DataSource, and we can use them to define global data and global checks.

What is the difference between start routine and update routine, when, how and why are they called?
The start routine can be used to access the entire data package of an InfoPackage, while update routines are executed record by record while the data targets are being updated.

What is the table that is used in start routines?
The table structure always follows the structure of the target ODS or InfoCube. For example, if the target is an ODS, the structure of its active table is used.

Explain how you used Start routines in your project?
Start routines are used for mass processing of records. In the start routine, all records of the DataPackage are available for processing, so we can process them together. In one scenario, we wanted to apply a size % to the forecast data. For example, if material M1 is forecast at 100 units in May, then after applying the size % (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single record coming in the data package. This is achieved in the start routine.
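A hedged sketch of that start-routine logic, assuming illustrative field names /BIC/ZSIZE and /BIC/ZFCQTY (this is not the actual project code):

* Split each incoming forecast record into four size records.
  DATA: lt_out LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE,
        lv_qty LIKE DATA_PACKAGE-/BIC/ZFCQTY.

  LOOP AT DATA_PACKAGE.
    lv_qty = DATA_PACKAGE-/BIC/ZFCQTY.          "original forecast quantity
    lt_out = DATA_PACKAGE.

    lt_out-/BIC/ZSIZE = 'S'.  lt_out-/BIC/ZFCQTY = lv_qty * '0.20'. APPEND lt_out.
    lt_out-/BIC/ZSIZE = 'M'.  lt_out-/BIC/ZFCQTY = lv_qty * '0.40'. APPEND lt_out.
    lt_out-/BIC/ZSIZE = 'L'.  lt_out-/BIC/ZFCQTY = lv_qty * '0.20'. APPEND lt_out.
    lt_out-/BIC/ZSIZE = 'XL'. lt_out-/BIC/ZFCQTY = lv_qty * '0.20'. APPEND lt_out.
  ENDLOOP.

* Replace the incoming package with the size-split records.
  DATA_PACKAGE[] = lt_out[].

A single forecast of 100 for M1 thus becomes four records of 20, 40, 20 and 20.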

What are Return Tables?
When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can get the expense per employee.
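A minimal sketch of an update routine with the return table option, assuming a hypothetical employee table ZEMPLOYEE and illustrative structure and field names (/BIC/VZCOSTT, /BIC/CS8ZFI, COSTCENTER, /BIC/ZEMPLOYEE, /BIC/ZEXPENSE); the generated signature may differ in detail:

FORM compute_key_figure
  TABLES   MONITOR        STRUCTURE RSMONITOR
           RESULT_TABLE   STRUCTURE /BIC/VZCOSTT  "one row per result record
  USING    COMM_STRUCTURE LIKE /BIC/CS8ZFI
           RECORD_NO      LIKE SY-TABIX
           RECORD_ALL     LIKE SY-TABIX
           SOURCE_SYSTEM  LIKE RSUPDSIMULH-LOGSYS
  CHANGING RETURNCODE     LIKE SY-SUBRC
           ABORT          LIKE SY-SUBRC.

  DATA: lt_emp   TYPE TABLE OF zemployee,         "hypothetical table
        ls_emp   TYPE zemployee,
        lv_count TYPE i.

* Employees of the cost center in the current record.
  SELECT * FROM zemployee INTO TABLE lt_emp
    WHERE costcenter = COMM_STRUCTURE-costcenter.
  DESCRIBE TABLE lt_emp LINES lv_count.
  CHECK lv_count > 0.

* Return one record per employee with an equal share of the expense.
  LOOP AT lt_emp INTO ls_emp.
    CLEAR RESULT_TABLE.
    RESULT_TABLE-costcenter     = COMM_STRUCTURE-costcenter.
    RESULT_TABLE-/BIC/ZEMPLOYEE = ls_emp-empid.
    RESULT_TABLE-/BIC/ZEXPENSE  = COMM_STRUCTURE-/BIC/ZEXPENSE / lv_count.
    APPEND RESULT_TABLE.
  ENDLOOP.
ENDFORM.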

How do start routine and return table synchronize with each other?
The return table is used to return values after the start routine has been executed; data prepared in the start routine (for example, in global structures) can be used when filling the return table.

What is the difference between V1, V2 and V3 updates?
V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
V1 & V2 don't need scheduling.
Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.

What is compression?
• It is a process that deletes the Request IDs by moving the data from the F fact table into the compressed E fact table; this saves space.

What is Rollup?
• This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.

What is table partitioning and what are the benefits of partitioning in an InfoCube?
• It is a method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster because data is read only from the relevant partitions, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.

How many extra partitions are created and why?
• Two extra partitions are created: one for dates before the begin date and one for dates after the end date.

What are the options available in transfer rule?
• InfoObject
• Constant
• Routine (see the sketch below)
• Formula
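For the Routine option, BW generates a FORM routine per target field. A simplified sketch (the real generated signature carries additional monitoring parameters; the target field ZREGION, the source field WERKS and the derivation are illustrative):

* Simplified transfer-rule routine: derive a region from the plant.
FORM compute_zregion
  USING    RECORD_NO      LIKE SY-TABIX
           TRAN_STRUCTURE TYPE TRANSFER_STRUCTURE
  CHANGING RESULT         TYPE /BIC/OIZREGION
           RETURNCODE     LIKE SY-SUBRC.
* Illustrative rule: the first two characters of the plant give the region.
  RESULT = TRAN_STRUCTURE-werks(2).
  RETURNCODE = 0.
ENDFORM.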

How would you optimize the dimensions?
We should define as many dimensions as possible, taking care that no single dimension table exceeds 20% of the fact table size.

What are Conversion Routines for units and currencies in the update rule?
Using this option we can write ABAP code for unit or currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units from pounds to kilos.
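A hedged sketch of the routine body with this flag enabled; RESULT and UNIT stand for the generated parameters, and /BIC/ZWEIGHT and the fixed factor (1 lb = 0.45359237 kg) are illustrative:

* Key-figure routine body with the additional UNIT parameter.
  IF COMM_STRUCTURE-unit = 'LB'.
    RESULT = COMM_STRUCTURE-/BIC/ZWEIGHT * '0.45359237'.
    UNIT   = 'KG'.                        "convert pounds to kilograms
  ELSE.
    RESULT = COMM_STRUCTURE-/BIC/ZWEIGHT.
    UNIT   = COMM_STRUCTURE-unit.         "pass other units through
  ENDIF.
  RETURNCODE = 0.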

Can an InfoObject be an InfoProvider, how and why?
Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select "Insert characteristic as data target". For example, we can make 0CUSTOMER as an InfoProvider and report on it.

What is Open Hub Service?
The Open Hub Service enables us to distribute data from an SAP BW system into external data marts, analytical applications, and other applications, ensuring controlled distribution across several systems. The central object for exporting data is the InfoSpoke, in which we define the source and the target object for the data. BW thus becomes the hub of an enterprise data warehouse, and central monitoring of the distribution status in the BW system keeps the distribution of data transparent.

How do you transform Open Hub Data?
Using BADI we can transform Open Hub Data according to the destination requirement.
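A rough sketch of such a BAdI implementation. The BAdI used for InfoSpokes is OPENHUB_TRANSFORM, but the method and parameter names below are assumptions for illustration only; check the actual definition in SE18/SE19 before implementing:

* Illustrative Open Hub transformation; method and parameter names are
* assumptions, verify them against the BAdI definition.
METHOD if_ex_openhub_transform~transform.
  DATA ls_data LIKE LINE OF e_t_data_out.
  LOOP AT i_t_data_in INTO ls_data.
    " Example destination requirement: upper-case a text field.
    TRANSLATE ls_data-/BIC/ZTEXT TO UPPER CASE.
    APPEND ls_data TO e_t_data_out.
  ENDLOOP.
ENDMETHOD.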

What is ODS?
An ODS (Operational Data Store) is used for detailed storage of data. We can overwrite data in the ODS, and the data is stored in transparent tables.

What are BW Statistics and what is its use?
They are a group of Business Content InfoCubes used to measure performance of queries and data loads. They also show the usage of aggregates, the OLAP engine and warehouse management.

What are the steps to extract data from R/3?
Replicate DataSources
Assign InfoSources
Maintain Communication Structure and Transfer rules
Create an InfoPackage
Load Data

What are the delta options available when you load from flat file?
The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)

Under which menu path is the Test Workbench to be found, including in earlier Releases?
The menu path is: Tools - ABAP Workbench - Test - Test Workbench.

Errors while monitoring process chains.
Errors can occur during data loading. Apart from those, a process chain contains many other process types. For example, after loading data into an InfoCube, you roll up the data into aggregates; this rollup is a process type placed after the load step, and it might fail. Similarly, after loading data into an ODS, you activate the ODS data (another process type), which might also fail.

In the Monitor: Details (Header/Status/Details) → Under Processing (data packet): Everything OK → Context menu of Data Package 1 (1 Records): Everything OK → Simulate update. (Here we can debug update rules or transfer rules.)
SM50 → Program/Mode → Program → Debugging, and debug this work process.

Can we make a DataSource support delta?
If it is a custom (user-defined) DataSource, you can make it delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, there is a Generic Delta button at the top of the next screen. For more details, there is a chapter on this in the Extraction book, towards the last pages.

Generic delta services:
Supports delta extraction for generic extractors according to one of:
o Time stamp
o Calendar day
o Numeric pointer, such as a document number or counter
Only one of these attributes can be set as the delta attribute.
Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules. The delta queue (RSA7) allows you to monitor the current status of the delta attribute.
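As an illustration of what a timestamp-based generic delta does conceptually: the service remembers the last pointer value and, on the next delta request, selects only records changed since then, widened by a safety interval so that late postings are not lost. Table ZSDOC, field AEDTM and the pointer handling below are illustrative:

* Conceptual selection behind a timestamp-based generic delta.
DATA: lv_low           TYPE timestamp,
      lv_high          TYPE timestamp,
      gv_last_delta_ts TYPE timestamp,          "pointer stored by the service
      lt_delta         TYPE TABLE OF zsdoc.     "illustrative table

* Lower limit: last transferred pointer minus a lower safety interval
* (here 120 seconds), so late-posted records are picked up again.
lv_low = cl_abap_tstmp=>subtractsecs( tstmp = gv_last_delta_ts
                                      secs  = 120 ).
GET TIME STAMP FIELD lv_high.                   "upper limit: now

SELECT * FROM zsdoc INTO TABLE lt_delta
  WHERE aedtm > lv_low
    AND aedtm <= lv_high.

* After a successful load, lv_high becomes the new stored pointer.
gv_last_delta_ts = lv_high.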