
Project: Next Generation Operational Business Intelligence using the example of the bake-off process

Team: Alexander Gossmann (Research Assistant), Tobias Klumpp, Robert Hoppe, Jakob Huber, Benjamin Scheffold, Natalia Polenok, Manuel Hauptmann, Niklas Dürr, Fabian Frick, Sebastian Thomes, Mahei Li, Tim Kochanek, Subhajit Mukherjee

Research institution: University of Mannheim (Research Group Information Systems)

Abstract: Business decisions today are driven by the need for a holistic view of the whole value chain across the strategic, tactical, and operational levels. Reducing the processing time between these organizational levels is becoming an increasingly important competitive factor. State-of-the-art data warehouse approaches use an iterative process to extract, transform, and load data bottom-up into the aggregation level appropriate to the desired information (cf. ETL process). This leads to a highly complex and time-consuming replication of data to serve different business scenarios. In this context, the standard ETL process is not sufficient for analytical or operational purposes.

This project investigates a use case in the field of fast-moving goods at a large discount food retail organization. Specifically, the so-called bake-off environment is considered, since here the trade-off between product availability and loss is particularly pronounced. Bake-off units reside in each store and are stocked with pre-baked pastries based on the expected demand. For the given use case, an observation period of two years is considered. The underlying population consists of fine-grained data for thousands of bake-off units, providing all facts regarding the bakery process. On the one hand, placing orders in day-to-day business requires on-the-fly data processing to increase the quality of the demand forecast. For order recommendations in particular, a certain amount of historical data has to be taken into account to support the appropriate statistical calculations on time series. Additionally, location-related and environmental information increases the accuracy of the forecasting model. On the other hand, strategic decision makers need a flexible way to 'navigate' through the data to reach fast time-to-market decisions. For accurate decisions it is vitally important to be able to drill down to the line level in order to identify peaks and their origins.
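The order-recommendation idea described above (a baseline derived from recent sales history, adjusted by seasonal and contextual factors) can be sketched in a few lines. This is a minimal illustration, not the project's actual forecasting model: the function name, the weekday factors, and the safety margin are all assumptions introduced here for clarity.

```python
from statistics import mean

def recommend_order(history, weekday, weekday_factors, safety_margin=0.1):
    """Recommend a charge quantity for one bake-off unit and product.

    history         -- daily sales quantities, most recent last (synthetic input)
    weekday         -- 0 = Monday .. 6 = Sunday
    weekday_factors -- illustrative multiplicative seasonality per weekday
    safety_margin   -- assumed extra share to reduce stock-out risk
    """
    base = mean(history[-28:])                  # level from the last four weeks
    seasonal = base * weekday_factors[weekday]  # apply weekday seasonality
    return round(seasonal * (1 + safety_margin))

# Four weeks of synthetic sales with a Saturday peak
history = [80, 85, 82, 90, 95, 130, 60] * 4
factors = [0.9, 0.95, 0.9, 1.0, 1.05, 1.4, 0.7]
print(recommend_order(history, weekday=5, weekday_factors=factors))  # → 137
```

In the project itself, this role is filled by statistical time-series methods in R, enriched with location-related and environmental variables rather than a fixed factor table.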

These requirements are best met by using SAP HANA as a single version of the truth, without replicating data into redundant data structures in order to serve different business views. All application features will rely on one single data foundation, exploiting the capabilities of data views and procedural programming for calculations. For demand forecasting on correlated data, the R environment, as supported by SAP HANA, will be used as the statistical engine.
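The "one data foundation, many views" principle can be illustrated with a small sketch, here using SQLite purely as a stand-in for SAP HANA: one line-level fact table serves the operational drill-down directly, while a strategic summary is defined as a view that aggregates on the fly, with no data replicated. The table and view names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bakeoff_facts (            -- hypothetical line-level fact table
    store_id   INTEGER,
    product_id INTEGER,
    sale_date  TEXT,
    qty_sold   INTEGER,
    qty_loss   INTEGER
);
-- Strategic view: aggregated on the fly, no replicated data structure
CREATE VIEW store_day_summary AS
SELECT store_id, sale_date,
       SUM(qty_sold) AS sold, SUM(qty_loss) AS loss
FROM bakeoff_facts
GROUP BY store_id, sale_date;
""")
conn.executemany(
    "INSERT INTO bakeoff_facts VALUES (?,?,?,?,?)",
    [(1, 10, "2013-04-01", 95, 5),
     (1, 11, "2013-04-01", 40, 2),
     (2, 10, "2013-04-01", 60, 8)],
)
# Operational drill-down reads the same line-level facts ...
detail = conn.execute(
    "SELECT qty_sold FROM bakeoff_facts WHERE store_id=1 AND product_id=10"
).fetchone()
# ... while the strategic view aggregates them without replication
summary = conn.execute(
    "SELECT sold, loss FROM store_day_summary WHERE store_id=1"
).fetchone()
print(detail, summary)  # → (95,) (135, 7)
```

In SAP HANA, the view would typically be a calculation view and the aggregation would run in-memory over the full two-year fact base; the mechanism, a single fact store queried at different granularities, is the same.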

https://www.hpi.uni-potsdam.de/future_soc_lab/wiki/raw-attachment/wiki/public/20122003/Achitecture.png

Last modified on Apr 16, 2013 3:44:49 PM