DBMS, data warehousing, and OLAP

Data Mining
Lecture 2: DBMS, DW, OLAP,
and Data Preprocessing
Contrasting Database and File Systems
An Example of a Simple Relational Database
The Relational Schema for the SaleCo Database
The Entity Relationship Model
The Development of Data Models
The Relational Schema for the TinyCollege Database
The Database System Environment
Data Warehouse
Data Life Cycle Process Continued
The result - generating knowledge
Methods for Collecting Raw Data
The task of data collection is fairly complex and can create data-quality
problems that require validation and cleansing of the data.
• Collection can take place
  – in the field
  – from individuals
  – via manual methods
    • time studies
    • surveys
    • observations
    • contributions from experts
  – using instruments and sensors
  – via transaction processing systems (TPS)
  – via electronic transfer
  – from a web site (clickstream)
The Need for Data Analysis
• Managers must be able to track daily
transactions to evaluate how the business is
performing
• By tapping into the operational database,
management can develop strategies to meet
organizational goals
• Data analysis can provide information about
short-term tactical evaluations and strategies
Transforming Operational Data Into Decision Support Data
The Data Warehouse
A data warehouse is a repository of subject-oriented historical data that is
organized to be accessible in a form readily acceptable for analytical
processing activities (such as data mining, decision support, querying, and other
applications).
• Benefits of a data warehouse are:
– The ability to reach data quickly, since they are located in one place
– The ability to reach data easily and frequently by end users with Web
browsers.
The Data Warehouse Continued
• Characteristics of data warehousing are:
– Time variant. The data are kept for many years so they can
be used for trends, forecasting, and comparisons over time.
– Nonvolatile. Once entered into the warehouse, data are not
updated.
– Relational. Typically the data warehouse uses a relational
structure.
– Client/server. The data warehouse uses the client/server
architecture mainly to provide the end user an easy access
to its data.
– Web-based. Data warehouses are designed to provide an
efficient computing environment for Web-based applications
The Data Warehouse Continued
Conceptual Modeling of Data Warehouses
• Modeling data warehouses: dimensions & measures
– Star schema: A fact table in the middle connected to a set of
dimension tables
– Snowflake schema: A refinement of star schema where some
dimensional hierarchy is normalized into a set of smaller
dimension tables, forming a shape similar to snowflake
– Fact constellations: Multiple fact tables share dimension tables,
viewed as a collection of stars, therefore called galaxy schema or
fact constellation
Example of Star Schema

Sales Fact Table (center): time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Dimension tables:
  – time: time_key, day, day_of_the_week, month, quarter, year
  – item: item_key, item_name, brand, type, supplier_type
  – branch: branch_key, branch_name, branch_type
  – location: location_key, street, city, province_or_state, country
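As an illustration only (not part of the original slides), the star join implied by the schema above can be sketched in pandas; the tiny time, item, and sales tables below are hypothetical stand-ins for the dimension and fact tables.

    import pandas as pd

    # Hypothetical slices of two dimension tables
    time = pd.DataFrame({"time_key": [10, 11], "quarter": ["Q1", "Q2"]})
    item = pd.DataFrame({"item_key": [1, 2], "brand": ["Acme", "Zen"]})

    # Fact table: foreign keys into the dimensions plus the numeric measures
    sales = pd.DataFrame({
        "time_key":     [10, 10, 11],
        "item_key":     [1, 2, 1],
        "units_sold":   [5, 3, 7],
        "dollars_sold": [50.0, 45.0, 70.0],
    })

    # Star join: attach dimension attributes to the facts, then aggregate the measures
    report = (sales.merge(item, on="item_key")
                   .merge(time, on="time_key")
                   .groupby(["brand", "quarter"])[["units_sold", "dollars_sold"]]
                   .sum())
    print(report)

The fact table stays narrow because it carries only foreign keys and numeric measures; all descriptive attributes live in the dimension tables.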
Example of Snowflake Schema

Sales Fact Table (center): time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Dimension tables (partially normalized):
  – time: time_key, day, day_of_the_week, month, quarter, year
  – item: item_key, item_name, brand, type, supplier_key
  – supplier: supplier_key, supplier_type
  – branch: branch_key, branch_name, branch_type
  – location: location_key, street, city_key
  – city: city_key, city, province_or_state, country
Example of Fact Constellation

Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Shipping Fact Table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped
Shared dimension tables:
  – time: time_key, day, day_of_the_week, month, quarter, year
  – item: item_key, item_name, brand, type, supplier_type
  – branch: branch_key, branch_name, branch_type
  – location: location_key, street, city, province_or_state, country
  – shipper: shipper_key, shipper_name, location_key, shipper_type
The Data Cube
Multidimensional databases (sometimes called OLAP databases) are specialized data
stores that organize facts by dimensions, such as geographical region,
product line, salesperson, and time. The data in these databases are usually
preprocessed and stored in data cubes.
• One intersection might be the quantities of a product
sold by specific retail locations during certain time
periods.
• Another matrix might be Sales volume by department,
by day, by month, by year for a specific region
• Cubes provide faster:
  – Queries
  – Slices and dices of the information
  – Rollups
  – Drill downs
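A rough sketch of these operations on a hypothetical detail-level sales table, using pandas group-bys as a stand-in for a real OLAP engine:

    import pandas as pd

    # Hypothetical detail-level sales records (dimensions: region, product, month)
    df = pd.DataFrame({
        "region":  ["East", "East", "West", "West"],
        "product": ["A", "B", "A", "B"],
        "month":   ["Jan", "Jan", "Jan", "Feb"],
        "units":   [10, 4, 7, 12],
    })

    # Base cuboid: units by region x product x month
    cube = df.groupby(["region", "product", "month"])["units"].sum()

    # Slice: fix one dimension (month = "Jan")
    print(cube.xs("Jan", level="month"))

    # Roll-up: aggregate the month dimension away
    print(df.groupby(["region", "product"])["units"].sum())

Drill-down is the reverse of the roll-up shown: grouping by an extra, finer-grained dimension (e.g., adding day below month).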
Three-Dimensional View of Sales
Cube: A Lattice of Cuboids

0-D (apex) cuboid: all
1-D cuboids: time; item; location; supplier
2-D cuboids: time,item; time,location; time,supplier; item,location; item,supplier; location,supplier
3-D cuboids: time,item,location; time,item,supplier; time,location,supplier; item,location,supplier
4-D (base) cuboid: time,item,location,supplier
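Since every cuboid is just a group-by over some subset of the dimensions, the full lattice for the four dimensions above can be enumerated with a few lines of Python (illustrative sketch only):

    from itertools import combinations

    # The four dimensions of the lattice above
    dims = ["time", "item", "location", "supplier"]

    # One cuboid per subset of the dimensions, from the 0-D apex ("all")
    # up to the 4-D base cuboid: 2^4 = 16 cuboids in total
    for k in range(len(dims) + 1):
        for cuboid in combinations(dims, k):
            print(f"{k}-D cuboid:", ", ".join(cuboid) if cuboid else "all")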
Operational vs. Multidimensional View of Sales
Creating a Data Warehouse
OLTP and OLAP
Transactional vs. Analytical Data Processing
Transactional processing takes place in operational systems (TPS) that
provide the organization with the capability to perform business
transactions and produce transaction reports. The data are organized
mainly in a hierarchical structure and are centrally processed. This is done
primarily for fast and efficient processing of routine, repetitive data.
A supplementary activity to transaction processing is called analytical
processing, which involves the analysis of accumulated data. Analytical
processing, sometimes referred to as business intelligence, includes data
mining, decision support systems (DSS), querying, and other analysis
activities. These analyses place strategic information in the hands of
decision makers to enhance productivity and make better decisions,
leading to greater competitive advantage.
OLTP vs. OLAP
                      OLTP                                   OLAP
users                 clerk, IT professional                 knowledge worker
function              day-to-day operations                  decision support
DB design             application-oriented                   subject-oriented
data                  current, up-to-date, detailed,         historical, summarized,
                      flat relational, isolated              multidimensional, integrated, consolidated
usage                 repetitive                             ad-hoc
access                read/write, index/hash on prim. key    lots of scans
unit of work          short, simple transaction              complex query
# records accessed    tens                                   millions
# users               thousands                              hundreds
DB size               100MB-GB                               100GB-TB
metric                transaction throughput                 query throughput, response
OLAP Client/Server Architecture
OLAP Server Arrangement
OLAP Server with Multidimensional Data Store Arrangement
OLAP Server with Local Mini Data Marts
Data Mining: Extraction of Knowledge From Data
Review: Data-Mining Phases
Data Preprocessing
• Why preprocess the data?
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
Why Data Preprocessing?
• Data in the real world is a mess
– incomplete: lacking attribute values, lacking certain
attributes of interest, or containing only aggregate
data
– noisy: containing errors or outliers
– inconsistent: containing discrepancies in codes or
names
• No quality data, no quality mining results
– Quality decisions must be based on quality data
– Data warehouse needs consistent integration of
quality data
Cont’d
• Just as manufacturing and refining are
about transformation of raw materials into
finished products, so too with data to be
used for data mining
• ECTL – extraction, clean, transform, load –
is the process/methodology for preparing
data for data mining
• The goal: ideal DM environment
Data Types
• Variable Measures
  – Categorical variables (e.g., CA, AZ, UT, ...)
  – Ordered variables (e.g., course grades)
  – Interval variables (e.g., temperatures)
  – True numeric variables (e.g., money)
• Dates & Times
• Fixed-Length Character Strings (e.g., Zip Codes)
• IDs and Keys – used for linkage to other data in other tables
• Names (e.g., Company Names)
• Addresses
• Free Text (e.g., annotations, comments, memos, email)
• Binary Data (e.g., audio, images)
Multi-Dimensional Measure of Data Quality
• A well-accepted multidimensional view:
  – Accuracy
  – Completeness
  – Consistency
  – Timeliness
  – Believability
  – Value added
  – Interpretability
  – Accessibility
• Broad categories:
  – intrinsic, contextual, representational, and accessibility
Major Tasks in Data Preprocessing
• Data cleaning
– Fill in missing values, smooth noisy data, identify or remove outliers,
and resolve inconsistencies
• Data integration
– Integration of multiple databases, data cubes, or files
• Data transformation
– Normalization and aggregation
• Data reduction
– Obtains reduced representation in volume but produces the same or
similar analytical results
• Data discretization
– Part of data reduction but with particular importance, especially for
numerical data
Forms of data preprocessing
What the Data Should Look Like
• All data mining algorithms want their input
in tabular form – rows & columns as in a
spreadsheet or database table
e.g., a sample data file in SPSS
What the Data Should Look Like
• Customer Signature
– Continuous “snapshot” of customer behavior
  – Each row represents the customer and whatever might be useful for data mining
What the Data Should Look Like
• The columns
– Contain data that describe aspects of the
customer (e.g., sales $ and quantity for each
of product A, B, C)
– Contain the results of calculations referred to
as derived variables (e.g., total sales $)
What the Data Should Look Like
1. Columns with One Value - often not very useful
2. Columns with Almost Only One Value
3. Columns with Unique Values
4. Columns Correlated with Target Variable (synonyms with the target variable)
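A minimal screening function along these lines (pandas-based; the dominance threshold and the synonym test are illustrative assumptions, not a prescribed method):

    import pandas as pd

    def screen_columns(df: pd.DataFrame, target: str, dominance: float = 0.95) -> None:
        """Flag columns of the four kinds listed above (heuristic sketch)."""
        for col in df.columns:
            if col == target:
                continue
            n_unique = df[col].nunique(dropna=False)
            top_share = df[col].value_counts(normalize=True, dropna=False).iloc[0]
            if n_unique == 1:
                print(col, "- only one value")
            elif top_share >= dominance:
                print(col, f"- almost only one value ({top_share:.0%})")
            elif n_unique == len(df):
                print(col, "- unique per row (likely an ID or key)")
            elif df.groupby(col)[target].nunique().max() == 1:
                print(col, "- perfectly predicts the target (possible synonym)")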
Data Cleaning
• Data cleaning tasks
– Fill in missing values
– Identify outliers and smooth out noisy data
– Correct inconsistent data
Missing Data
• Data is not always available
– E.g., many tuples have no recorded value for several attributes,
such as customer income in sales data
• Missing data may be due to
– equipment malfunction
– inconsistent with other recorded data and thus deleted
– data not entered due to misunderstanding
– certain data may not be considered important at the time of
entry
– history or changes of the data were not registered
• Missing data may need to be inferred.
How to Handle Missing Data?
• Ignore the tuple: usually done when the class label is missing (assuming the
task is classification); not effective when the percentage of missing values
per attribute varies considerably
• Fill in the missing value manually: tedious + infeasible?
• Use a global constant to fill in the missing value: e.g., "unknown", a new
class?!
• Use the attribute mean to fill in the missing value
• Use the attribute mean for all samples belonging to the same class to fill in
the missing value: smarter
• Use the most probable value to fill in the missing value: inference-based,
such as a Bayesian formula or decision tree
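A small pandas sketch (not from the original slides) of the constant, mean, and class-conditional mean strategies on a hypothetical table with one missing income value:

    import numpy as np
    import pandas as pd

    # Hypothetical customer records with one missing income value
    df = pd.DataFrame({
        "class":  ["gold", "gold", "silver", "silver"],
        "income": [50_000, np.nan, 30_000, 32_000],
    })

    # Global constant (a sentinel standing in for "unknown")
    df["income_const"] = df["income"].fillna(-1)

    # Attribute mean over all samples
    df["income_mean"] = df["income"].fillna(df["income"].mean())

    # Attribute mean within the same class (the "smarter" variant above)
    df["income_class_mean"] = df["income"].fillna(
        df.groupby("class")["income"].transform("mean"))

    print(df)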
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
  – faulty data collection instruments
  – data entry problems
  – data transmission problems
  – technology limitations
  – inconsistency in naming conventions
• Other data problems that require data cleaning
  – duplicate records
  – incomplete data
  – inconsistent data
How to Handle Noisy Data?
• Binning method:
– first sort data and partition into (equi-depth) bins
– then one can smooth by bin means, smooth by bin
median, smooth by bin boundaries, etc.
• Clustering
– detect and remove outliers
• Combined computer and human inspection
– detect suspicious values and check by human
• Regression
– smooth by fitting the data into regression functions
Simple Discretization Methods: Binning
• Equal-width (distance) partitioning:
  – Divides the range into N intervals of equal size: a uniform grid
  – If A and B are the lowest and highest values of the attribute, the width of
    the intervals will be W = (B - A) / N
  – The most straightforward approach
  – But outliers may dominate the presentation
  – Skewed data is not handled well
• Equal-depth (frequency) partitioning:
  – Divides the range into N intervals, each containing approximately the same
    number of samples
  – Good data scaling
  – Managing categorical attributes can be tricky
Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
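The same worked example can be reproduced with a short Python/NumPy sketch (equi-depth split, then smoothing by bin means and by bin boundaries); this is illustrative code, not part of the original slides:

    import numpy as np

    prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

    # Equi-depth partitioning: three bins of four values each
    bins = [list(map(int, b)) for b in np.array_split(prices, 3)]

    # Smoothing by bin means: every value replaced by its bin's mean
    by_means = [[int(round(sum(b) / len(b)))] * len(b) for b in bins]

    # Smoothing by bin boundaries: every value replaced by the nearer boundary
    by_bounds = [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

    print(bins)       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
    print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
    print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]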
Cluster Analysis
Regression

[Figure: data points with the fitted regression line y = x + 1; an observed value Y1 at X1 is smoothed to the value Y1' on the line]
Data Integration
• Data integration:
– combines data from multiple sources into a coherent
store
• Schema integration
– integrate metadata from different sources
– Entity identification problem: identify real-world entities
from multiple data sources, e.g., A.cust-id ≡ B.cust-#
• Detecting and resolving data value conflicts
– for the same real world entity, attribute values from
different sources are different
– possible reasons: different representations, different
scales, e.g., metric vs. British units
Handling Redundant Data in Data Integration
• Redundant data often occur when multiple databases are integrated
– The same attribute may have different names in different
databases
– One attribute may be a “derived” attribute in another table, e.g.,
annual revenue
• Redundant data may be able to be detected by
correlational analysis
• Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
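As a rough illustration of correlation analysis, a correlation matrix over a hypothetical integrated table makes a derived annual_revenue attribute stand out as redundant (correlation ≈ 1 with monthly_revenue):

    import pandas as pd

    # Hypothetical integrated table: annual_revenue is derived from monthly_revenue
    df = pd.DataFrame({
        "monthly_revenue": [10, 20, 30, 40],
        "annual_revenue":  [120, 240, 360, 480],
        "employees":       [3, 9, 4, 12],
    })

    # Pairwise Pearson correlations: coefficients near +/-1 flag likely redundancy
    print(df.corr())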
Data Transformation
• Smoothing: remove noise from data
• Aggregation: summarization, data cube construction
• Generalization: concept hierarchy climbing
• Normalization: scaled to fall within a small, specified
range
– min-max normalization
– z-score normalization
– normalization by decimal scaling
• Attribute/feature construction
– New attributes constructed from the given ones
Data Transformation: Normalization
• min-max normalization:
  v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A
• z-score normalization:
  v' = (v - mean_A) / stand_dev_A
• normalization by decimal scaling:
  v' = v / 10^j, where j is the smallest integer such that Max(|v'|) < 1
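A NumPy sketch of the three normalizations applied to a hypothetical attribute vector v (illustration only; the target range and values are assumptions):

    import numpy as np

    v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])   # hypothetical attribute values

    # Min-max normalization into a new range [new_min, new_max]
    new_min, new_max = 0.0, 1.0
    minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

    # Z-score normalization (population standard deviation)
    zscore = (v - v.mean()) / v.std()

    # Decimal scaling: divide by 10^j for the smallest j with max(|v'|) < 1
    j = 0
    while np.abs(v / 10 ** j).max() >= 1:
        j += 1
    decimal = v / 10 ** j

    print(minmax, zscore, decimal, sep="\n")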
Principal Component Analysis
• Given N data vectors from k-dimensions, find c
<= k orthogonal vectors that can be best used
to represent data
– The original data set is reduced to one consisting of N
data vectors on c principal components (reduced
dimensions)
• Each data vector is a linear combination of the c
principal component vectors
• Works for numeric data only
• Used when the number of dimensions is large
Principal Component Analysis

[Figure: data plotted in the original dimensions X1 and X2, with the principal component vectors Y1 and Y2]
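An illustrative NumPy sketch of the reduction described above, via eigen-decomposition of the covariance matrix of randomly generated data (in practice a library routine such as scikit-learn's PCA would usually be used):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))            # N = 100 data vectors in k = 5 dimensions

    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # k x k covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

    c = 2                                    # keep c <= k principal components
    components = eigvecs[:, ::-1][:, :c]     # eigenvectors of the c largest eigenvalues
    X_reduced = Xc @ components              # each vector expressed on the c components
    print(X_reduced.shape)                   # (100, 2)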
Regression and Log-Linear Models
• Linear regression: Data are modeled to fit a straight line
– Often uses the least-square method to fit the line
• Multiple regression: allows a response variable Y to be
modeled as a linear function of multidimensional feature
vector
• Log-linear model: approximates discrete
multidimensional probability distributions
Regression Analysis and Log-Linear Models
• Linear regression: Y = α + β X
  – The two parameters α and β specify the line and are to be estimated by
    using the data at hand
  – They are estimated by applying the least-squares criterion to the known
    values of Y1, Y2, ..., X1, X2, ...
• Multiple regression: Y = b0 + b1 X1 + b2 X2
  – Many nonlinear functions can be transformed into the above
• Log-linear models:
  – The multi-way table of joint probabilities is approximated by a product of
    lower-order tables
  – Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd
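A minimal NumPy sketch of estimating α and β by least squares on hypothetical (X, Y) values:

    import numpy as np

    # Hypothetical known values of X and Y
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.9, 4.1, 6.2, 7.8, 10.1])

    # Closed-form least-squares estimates of the slope (beta) and intercept (alpha)
    beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha = y.mean() - beta * x.mean()
    print(f"Y = {alpha:.2f} + {beta:.2f} X")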
Sampling
• Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
• Choose a representative subset of the data
– Simple random sampling may have very poor performance in the
presence of skew
• Develop adaptive sampling methods
– Stratified sampling:
• Approximate the percentage of each class (or subpopulation of
interest) in the overall database
• Used in conjunction with skewed data
• Sampling may not reduce database I/Os (page at a time).
Sampling

[Figures: drawing a simple random sample from the raw data, and drawing a cluster/stratified sample from the raw data]
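A short pandas sketch of stratified sampling on a hypothetical, skewed two-class table, keeping the same 20% fraction within each class so the rare class retains its share:

    import pandas as pd

    # Hypothetical skewed data: the "rare" class is a small subpopulation
    df = pd.DataFrame({
        "cls":   ["common"] * 90 + ["rare"] * 10,
        "value": range(100),
    })

    # Stratified sample: draw the same fraction from each class
    sample = df.groupby("cls").sample(frac=0.2, random_state=0)
    print(sample["cls"].value_counts())      # common: 18, rare: 2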
References
• Rob (2005), Design and Implementation of Database Systems
• Michael J. A. Berry and Gordon S. Linoff (2004), Data Mining Techniques for Marketing, Sales, and Customer Relationship Management, 2nd ed., Wiley
• Introduction to Data Mining and Knowledge Discovery, Third Edition, ISBN: 1-892095-02-5 (can be downloaded for free via the website)
• Tan, P., Steinbach, M., and Kumar, V. (2006), Introduction to Data Mining, 1st edition, Addison-Wesley, ISBN: 0-321-32136-7
• Vasant Dhar and Roger Stein (1997), Seven Methods for Transforming Corporate Data Into Business Intelligence, Prentice-Hall
• I. H. Witten and E. Frank (2005), Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition, Morgan Kaufmann, ISBN: 0-12-088407-0 (closely tied to the WEKA software)
• Ethem Alpaydin (2004), Introduction to Machine Learning, The MIT Press, ISBN: 0-262-01211-1
• J. Han and M. Kamber (2000), Data Mining: Concepts and Techniques, Morgan Kaufmann (database oriented)