Wednesday, February 15, 2012

DWBI Interview Questions (Ref: blog.sqlauthority.com)

1. What is Data Warehousing?
It is a repository of integrated information, available for queries and analysis. A data warehouse is the main repository of an organization’s historical data; it contains the data for management’s decision support system.
2. What is Business Intelligence (BI)?
Business Intelligence (BI) refers to technologies, applications and practices for the collection, integration, analysis, and presentation of business information and sometimes to the information itself. The purpose of BI is to support better business decision making.
3. What is the Difference between Data Warehousing and Business Intelligence?
Data warehousing deals with all aspects of managing the development, implementation and operation of a data warehouse or data mart, including meta data management, data acquisition, data cleansing, data transformation, storage management, data distribution, data archiving, operational reporting, analytical reporting, security management and backup/recovery planning. Business intelligence, on the other hand, is a set of software tools that enable an organization to analyze measurable aspects of their business such as sales performance, profitability, operational efficiency, effectiveness of marketing campaigns, market penetration among certain customer groups, cost trends, anomalies and exceptions. Typically, the term ’business intelligence’ is used to encompass OLAP, data visualization, data mining and query/reporting tools.
4. What is a Dimension Table?
A dimension table is a collection of hierarchies, categories, and logic that a user can use to traverse hierarchical nodes.
5. What is Dimensional Modeling?
The dimensional data model involves two types of tables and is different from the third normal form. It uses a fact table, which contains the measurements of the business, and dimension tables, which contain the context (the dimensions) of those measurements.
6. What is a Fact Table?
A fact table contains the measurements of a business process and the foreign keys to the related dimension tables.
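A minimal sketch of a fact table with its dimension tables, using Python's built-in sqlite3 module; the table and column names (dim_product, dim_date, fact_sales, and so on) are illustrative assumptions, not part of the original answer:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Dimension tables hold the descriptive context of the business.
    cur.execute("""CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        product_name TEXT,
        category TEXT)""")
    cur.execute("""CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        full_date TEXT,
        month INTEGER,
        year INTEGER)""")

    # The fact table holds the measurements plus one foreign key per dimension.
    cur.execute("""CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        quantity_sold INTEGER,
        sales_amount REAL)""")
    conn.commit()

Each fact row records the numeric measurements at a chosen grain, with foreign keys pointing to the surrounding dimension tables.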
7. What are the Fundamental Stages of Data Warehousing?
There are four different fundamental stages of Data Warehousing.
Offline Operational Databases: Data warehouses in this initial stage are developed by simply copying the database of an operational system to an off-line server where the processing load of reporting does not impact the operational system’s performance.
Offline Data Warehouse: Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems, and the data is stored in an integrated, reporting-oriented data structure.
Real Time Data Warehouse: Data warehouses at this stage are updated on a transaction or event basis, every time an operational system performs a transaction (e.g. an order, a delivery or a booking).
Integrated Data Warehouse: Data warehouses at this stage are used to generate activity or transactions that are passed back into the operational systems for use in the daily activity of the organization.

8. What are the Different Methods of Loading Dimension tables?
There are two different ways to load data in dimension tables.
Conventional (Slow): All the constraints and keys are validated against the data before it is loaded; this way data integrity is maintained.
Direct (Fast): All the constraints and keys are disabled before the data is loaded. Once the data is loaded, it is validated against all the constraints and keys. If data is found invalid or dirty, it is not included in the index, and all future processes on this data are skipped (the two-phase idea is sketched below).
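The conventional/direct distinction above comes from bulk loaders such as Oracle SQL*Loader; as a hedged sketch only, the same two-phase idea can be expressed with Python and SQLite, switching constraint checking off for the bulk insert and identifying violations afterwards (table names and data are hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("""CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        sales_amount REAL)""")
    cur.execute("INSERT INTO dim_product VALUES (1, 'Widget')")

    # "Direct" style: disable constraint checking, bulk-load, validate afterwards.
    cur.execute("PRAGMA foreign_keys = OFF")
    cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                    [(1, 100.0), (99, 250.0)])   # key 99 has no matching dimension row
    conn.commit()

    # Post-load validation: list the rows that violate the foreign keys.
    print(cur.execute("PRAGMA foreign_key_check").fetchall())
    cur.execute("PRAGMA foreign_keys = ON")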
9. What is Data Mining?
Data Mining is the process of analyzing data from different perspectives and summarizing it into useful information.
10. What is OLTP?
OLTP is the abbreviation of On-Line Transaction Processing. This system is an application that modifies data at the very instant it is received and has a large number of concurrent users.
11. What is OLAP?
OLAP is abbreviation of Online Analytical Processing. This system is an application that collects, manages, processes and presents multidimensional data for analysis and management purposes.
12. What is the Difference between OLTP and OLAP?

Data Source

OLTP: Operational data, taken from the original source of the data.
OLAP: Consolidated data, taken from various sources.

Process Goal

OLTP: Snapshot of the business processes that perform the fundamental business tasks.
OLAP: Multi-dimensional views of business activities for planning and decision making.

Queries and Process Scripts

OLTP: Simple, quick-running queries run by users.
OLAP: Complex, long-running queries run by the system to update the aggregated data (the two query styles are contrasted in the sketch after this comparison).

Database Design

OLTP: Normalized, small database. Speed is not an issue because the database is small, and normalization does not degrade performance. This adopts the entity relationship (ER) model and an application-oriented database design.
OLAP: De-normalized, large database. Speed is an issue because the database is large, and de-normalizing improves performance as there are fewer tables to scan while performing tasks. This adopts star, snowflake or fact constellation models and a subject-oriented database design.

Back up and System Administration

OLTP: Regular Database backup and system administration can do the job.
OLAP: Reloading the OLTP data is considered as a good backup option.
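As a hedged illustration of the query-style difference, here are two hypothetical queries held in a short Python snippet; the table and column names are assumptions for the example only:

    # OLTP: a short, key-based query touching one row, run by many concurrent users.
    oltp_query = "SELECT balance FROM account WHERE account_id = ?"

    # OLAP: a long-running aggregation scanning a large fact table for analysis.
    olap_query = """
        SELECT region, strftime('%Y-%m', sale_date) AS month, SUM(amount) AS total
        FROM fact_sales
        GROUP BY region, month
    """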

13. What is ODS?
ODS is the abbreviation of Operational Data Store ‑ a database structure that is a repository for near real-time operational data rather than long-term trend data.
14. What is ER Diagram?
Entity Relationship (ER) Diagrams are a major data modeling tool and help organize the data in your project into entities and define the relationships between the entities. This process helps the analyst produce a good database structure so that the data can be stored and retrieved in the most efficient manner.
An entity-relationship (ER) diagram is a specialized graphic that illustrates the interrelationships between entities in a database; it is used in data modeling for relational databases. These diagrams show the structure of each table and the links between tables.
15. What is ETL?
ETL is the abbreviation of extract, transform, and load. ETL is software that enables businesses to consolidate their disparate data while moving it from place to place, regardless of whether that data is in different forms or formats. The data can come from any source; ETL is powerful enough to handle such data disparities. First, the extract function reads data from a specified source database and extracts a desired subset of data. Next, the transform function works with the acquired data – using rules or lookup tables, or creating combinations with other data – to convert it to the desired state. Finally, the load function is used to write the resulting data to a target database.
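A minimal sketch of the three ETL steps in Python, assuming a hypothetical CSV source file named daily_sales.csv and a simple target table; the file layout and names are illustrative:

    import csv
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

    # Extract: read rows from the specified source (a flat file here).
    with open("daily_sales.csv", newline="") as src:
        rows = list(csv.DictReader(src))

    # Transform: apply rules to convert the acquired data to the desired state.
    cleaned = [(r["region"].strip().upper(), float(r["amount"])) for r in rows]

    # Load: write the resulting data to the target database.
    conn.executemany("INSERT INTO sales (region, amount) VALUES (?, ?)", cleaned)
    conn.commit()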
16. What is VLDB?
VLDB is abbreviation of Very Large Database. For instance, a one-terabyte database can be considered as a VLDB. Typically, these are decision support systems or transaction processing applications serving a large number of users.
17. Is the OLTP Database Design Optimal for a Data Warehouse?
No. OLTP database tables are normalized, and this adds additional time for queries to return results. Additionally, the OLTP database is small; it does not contain data from a long period (many years), which needs to be analyzed. An OLTP system is basically an ER model and not a dimensional model. If a complex query is executed on an OLTP system, it may lead to heavy overhead on the OLTP server that will affect the normal business processes.
18. What are Lookup Tables?
A lookup table is a table used against the target table, matched on the primary key of the target; during loads it is used to apply only modified (new or updated) records based on the lookup condition.
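A minimal sketch of that idea in plain Python, comparing incoming rows against a lookup keyed on the target's primary key so that only new or changed records are applied (all data is hypothetical):

    # Target table contents, keyed on the primary key.
    target = {101: "Alice", 102: "Bob"}
    # Staged source rows arriving in the load.
    incoming = {101: "Alice", 102: "Robert", 103: "Carol"}

    for key, value in incoming.items():
        if key not in target:
            print("insert", key, value)    # new record
        elif target[key] != value:
            print("update", key, value)    # modified record
        # unchanged records are skipped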
19. What are Aggregate Tables?
An aggregate table contains a summary of existing warehouse data, grouped to certain levels of the dimensions. It is faster to retrieve data from an aggregate table than from the original table, which may have millions of records. Aggregate tables reduce the load on the database server and improve query performance, returning results quickly.
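A small sketch, using Python's sqlite3 module, of building an aggregate table as a pre-summarized copy of a hypothetical fact table (names and data are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fact_sales (year INTEGER, product TEXT, amount REAL)")
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                     [(2011, "Widget", 10.0), (2011, "Widget", 15.0), (2011, "Gadget", 7.5)])

    # The aggregate table stores the data grouped to a coarser level of the dimensions,
    # so queries read a few summary rows instead of millions of detail rows.
    conn.execute("""CREATE TABLE agg_sales_by_year_product AS
                    SELECT year, product, SUM(amount) AS total_amount
                    FROM fact_sales
                    GROUP BY year, product""")
    print(conn.execute("SELECT * FROM agg_sales_by_year_product").fetchall())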
20. What is Real-Time Data-Warehousing?
Data warehousing captures business activity data. Real-time data warehousing captures business activity data as it occurs. As soon as the business activity is complete and there is data about it, the completed activity data flows into the data warehouse and becomes available instantly.
21. What are Conformed Dimensions?
Conformed dimensions are dimensions which can be used across multiple data marts in combination with multiple fact tables accordingly.
22. How do you Load the Time Dimension?
Time dimensions are usually loaded by a program that loops through all possible dates that may appear in the data. 100 years may be represented in a time dimension, with one row per day.
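A minimal sketch of such a loading program in Python; the date range and the columns chosen here are assumptions for illustration:

    from datetime import date, timedelta

    start, end = date(2000, 1, 1), date(2012, 12, 31)
    rows = []
    current = start
    while current <= end:
        rows.append((
            int(current.strftime("%Y%m%d")),   # date key, e.g. 20120215
            current.isoformat(),               # full date
            current.year,
            current.month,
            current.strftime("%A"),            # day name
        ))
        current += timedelta(days=1)

    print(len(rows), "time dimension rows generated")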

23. What is a Level of Granularity of a Fact Table?
Level of granularity means the level of detail that you put into the fact table in a data warehouse; in other words, it is the level of detail you are willing to record for each transactional fact.
24. What are Non-Additive Facts?
Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table (ratios and percentages are typical examples). However, they are not useless; if there are changes in the dimensions, the same facts can still be useful.
25. What is a Factless Fact Table?
A fact table that does not contain numeric fact columns is called a factless fact table.
26. What are Slowly Changing Dimensions (SCD)?
SCD is the abbreviation of slowly changing dimensions. SCD applies to cases where the attribute for a record varies over time. There are three different types of SCD (a Type 2 loading sketch follows the list).
  1. SCD1: The new record replaces the original record. Only one record exists in the database – the current data.
  2. SCD2: A new record is added into the customer dimension table. Two records exist in the database – the current data and the previous history data.
  3. SCD3: The original data is modified to include the new data. One record exists in the database – the new information is attached to the old information in the same row.
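A minimal sketch of Type 2 behaviour in plain Python, assuming a hypothetical customer dimension keyed by a natural customer_id and tracked with validity dates:

    from datetime import date

    # Each row: [surrogate_key, customer_id, city, valid_from, valid_to, is_current]
    dim_customer = [
        [1, "C100", "Pune", date(2010, 1, 1), None, True],
    ]

    def apply_scd2(rows, customer_id, new_city, change_date, next_key):
        """Expire the current row and add a new version (Type 2 behaviour)."""
        for row in rows:
            if row[1] == customer_id and row[5]:
                if row[2] == new_city:
                    return rows                      # nothing changed
                row[4], row[5] = change_date, False  # close out the old version
        rows.append([next_key, customer_id, new_city, change_date, None, True])
        return rows

    apply_scd2(dim_customer, "C100", "Mumbai", date(2012, 2, 15), next_key=2)
    print(dim_customer)   # two rows: the expired Pune row and the current Mumbai row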
27. What is Hybrid Slowly Changing Dimension?
Hybrid SCDs are a combination of both SCD 1 and SCD 2. It may happen that in a table some columns are important and we need to track changes for them, i.e. capture their historical data, whereas for other columns, even if the data changes, we do not care.
28. What is MDS?
Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time.
29. What is a Data Mart?
A data mart (DM) is a specialized version of a data warehouse (DW). Like data warehouses, data marts contain a snapshot of operational data that helps business people to strategize based on analyses of past trends and experiences. The key difference is that the creation of a data mart is predicated on a specific, predefined need for a certain grouping and configuration of select data.
30. What is a Surrogate Key?
A surrogate key is a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it should be unique for each row in the table. It is useful because the natural primary key can change, and this makes updates more difficult. Surrogate keys are always integer or numeric.
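A small sketch of the idea with Python's sqlite3 module: the surrogate key is an auto-generated integer, while the natural key (an illustrative customer_code) carries the business meaning and can change without breaking references:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY AUTOINCREMENT,   -- surrogate key
        customer_code TEXT,                               -- natural/business key
        customer_name TEXT)""")
    conn.execute("INSERT INTO dim_customer (customer_code, customer_name) VALUES ('C100', 'Acme')")
    print(conn.execute("SELECT customer_key, customer_code FROM dim_customer").fetchall())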
31. What is Junk Dimension?
A number of very small dimensions may get lumped together to form a single dimension, i.e. a junk dimension; the attributes are not closely related. Grouping random flags and text attributes from a dimension and moving them to a separate sub-dimension is known as a junk dimension.
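A small sketch of pre-building a junk dimension in Python as the cross product of a few unrelated low-cardinality flags; the flag names are illustrative:

    from itertools import product

    payment_types = ["cash", "card"]
    gift_wrap_flags = ["Y", "N"]
    rush_order_flags = ["Y", "N"]

    # One row per combination; the fact table then stores only the single junk key.
    junk_dim = [
        (key, pay, gift, rush)
        for key, (pay, gift, rush) in enumerate(
            product(payment_types, gift_wrap_flags, rush_order_flags), start=1)
    ]
    print(junk_dim)   # 8 rows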
32. What is Degenerate Dimension Table?
If a table contains values that are neither dimensions nor measures (for example, an invoice or order number stored in the fact table), it is called a degenerate dimension table.
33. Why is Data Modeling Important?
Data modeling is probably the most labor intensive and time consuming part of the development process. The goal of the data model is to make sure that all the data objects required by the database are completely and accurately represented. Because the data model uses easily understood notations and natural language, it can be reviewed and verified as correct by the end users.
In computer science, data modeling is the process of creating a data model by applying a data model theory to create a data model instance. A data model theory is a formal data model description. In data modeling, we are structuring and organizing data. These data structures are then typically implemented in a database management system. In addition to defining and organizing the data, data modeling will impose (implicitly or explicitly) constraints or limitations on the data placed within the structure.
34. What is Difference between ER Modeling and Dimensional Modeling?
ER modeling is used for normalizing the OLTP database design. Dimensional modeling is used for de-normalizing the ROLAP/MOLAP design.
36. What is BUS Schema?
A BUS schema consists of a master suite of conformed dimensions and standardized definitions of facts.
37. What is a Star Schema?
A star schema is a way of organizing the tables such that results can be retrieved from the database quickly in the warehouse environment.
38. What is a Snowflake Schema?
In a snowflake schema, each dimension has a primary dimension table, to which one or more additional dimension tables can join. The primary dimension table is the only table that can join to the fact table.
39. Differences between the Star and Snowflake Schema?
Star schema: A single fact table with N dimensions; all dimensions are linked directly to the fact table. This schema is de-normalized, resulting in simple joins, less complex queries, and faster results.
Snowflake schema: Any dimension with extended (normalized) dimension tables is known as a snowflake schema; dimensions may be interlinked or have one-to-many relationships with other tables. This schema is normalized, resulting in complex joins and very complex queries (as well as slower results); both query styles are contrasted in the sketch below.
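A hedged illustration of the join difference, written as two hypothetical queries held in a Python snippet (all table and column names are assumptions):

    # Star schema: every dimension joins directly to the fact table.
    star_query = """
        SELECT d.year, p.category, SUM(f.sales_amount) AS total_sales
        FROM fact_sales f
        JOIN dim_date d    ON f.date_key = d.date_key
        JOIN dim_product p ON f.product_key = p.product_key
        GROUP BY d.year, p.category
    """

    # Snowflake schema: the product dimension is normalized, so the same question
    # needs an extra join through dim_category.
    snowflake_query = """
        SELECT d.year, c.category_name, SUM(f.sales_amount) AS total_sales
        FROM fact_sales f
        JOIN dim_date d     ON f.date_key = d.date_key
        JOIN dim_product p  ON f.product_key = p.product_key
        JOIN dim_category c ON p.category_key = c.category_key
        GROUP BY d.year, c.category_name
    """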










               
