
Web Mining
G.Anuradha
References from Dunham
Objective
• What is web mining?
• Taxonomy of web mining
• Web content mining
• Web structure mining
• Web usage mining
What is web mining?
• Mining of data related to the WWW
– Data present in Web pages or data related to Web activity
• Web data is classified into
– Content of Web pages
– Intra-page structure, which includes the code and the actual linkage
– Usage data – how the pages are used by visitors
– User profiles
Taxonomy of Web Mining
Web Content Mining
• Extension of basic search engines
• Search engines are keyword-based
• Traditional search engines use crawlers
– to search the Web and gather information
– indexing techniques to store the gathered information
– query processing to provide fast and accurate results to users
Taxonomy of Web content mining
Web content mining takes two approaches:
• Agent-based approach: uses software systems (agents) to perform the content mining, e.g. search engines
• Database approach: views Web data as belonging to a database; the Web is a multilevel database, and query languages are used for querying the data
Content mining is a type of text mining.
Text mining hierarchy
(Ordered from simple to complex:)
• Keyword, term
• Association
• Similarity search
• Classification and clustering
• Natural language processing
Crawlers
How do crawlers work?
• A robot, spider, or crawler is a program that traverses the hypertext structure of the Web
• The page at which the crawler starts is referred to as the seed URL
• All links from that page are recorded and saved in a queue
• The new pages are in turn searched and their links are saved
• The crawler collects information about each page, extracts keywords, and stores indices for users (a minimal sketch follows below)
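The following is a minimal, illustrative sketch of the breadth-first crawling loop just described, using only the Python standard library. The regex-based link and keyword extraction and the page limit are assumptions for illustration; a real crawler would also honor robots.txt, throttle requests, deduplicate URLs more carefully, and parse HTML robustly.

# Minimal breadth-first crawler sketch (illustrative only).
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

def crawl(seed_url, max_pages=20):
    """Traverse the hypertext structure of the Web starting from a seed URL."""
    queue = deque([seed_url])        # links recorded and waiting to be visited
    visited = set()
    index = {}                       # page URL -> extracted "keywords"

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                 # skip unreachable or non-text pages
        # Crude keyword extraction: distinct words of four or more letters.
        words = re.findall(r"[a-zA-Z]{4,}", html.lower())
        index[url] = sorted(set(words))[:10]
        # Record all links from the page; they are searched in later rounds.
        for link in re.findall(r'href="([^"]+)"', html):
            queue.append(urljoin(url, link))
    return index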
Types of crawlers
• Periodic crawlers: activated periodically; every
time it is activated it replaces the existing
index
• Incremental crawler: updates the index
incrementally instead of replacing it
• Focused crawler: visits pages related to topics
of interest
Focused crawling
Architecture of focused crawler
• Has 3 components:
– Crawler: Performs the actual crawling on the Web. It visits pages according to a priority-based structure, with priorities assigned to pages by the classifier and the distiller
– Classifier: Associates a relevance score for each
document with respect to the crawl topic.
Determines the resource rating
– Distiller: Determines which pages contain links to
many relevant pages. These are called hub pages.
Harvest Rate
• Harvest rate is the performance objective for a focused crawler
• The seed documents are used to begin the focused
crawling
• The relevant documents are found using
– Hard focus: Follows links if there is an ancestor of that
node which is marked as good
– Soft focus: identifies relevant pages with a probability; the relevance of a page d is computed as
R(d) = Σ_good(c) P(c | d)
where good(c) indicates that c is marked as relevant (good) to the crawl topic
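As a small illustration of the soft-focus score above, the sketch below sums P(c|d) over the categories marked as good; the classifier output and the set of good categories are hypothetical.

# Soft-focus relevance R(d): sum of P(c|d) over the c's marked as good.
# class_probs would come from a text classifier; the values here are made up.
def relevance(class_probs, good):
    return sum(p for c, p in class_probs.items() if c in good)

R = relevance({"sports": 0.60, "cricket": 0.25, "finance": 0.15},
              good={"sports", "cricket"})    # R = 0.85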
Context focused crawler
• Crawling takes place in two phases
– Training phase: context graphs and classifiers are constructed using a set of seed documents as the training set
– Crawling phase: the classifiers are used to guide the crawl, and the context graphs are updated
• The context focused crawler overcomes two problems of the focused crawler:
– It can follow links from pages that point to relevant pages but are not themselves relevant
– It helps in backward crawling
Context graph
• A rooted graph in which the root represents a seed document and the nodes at each level represent pages that have links to nodes at the level above
• The context graphs created for all seed documents are merged to create a merged context graph
Harvest system
• Based on the use of caching, indexing, and crawling
• Harvest is centered around the use of
– Gatherers: obtain information for indexing from an Internet service provider
– Brokers: provide the index and the query interface
– Brokers may interface with gatherers directly or indirectly
Virtual Web View
• Large amounts of unstructured data can be handled using a multiple layered database (MLDB) built on top of the Web data
• Every layer of this database is more generalized than the preceding layer
• The upper layers are structured and can be accessed using SQL
• A view of the MLDB is called a Virtual Web View (VWV)
WebML
• Query language which supports data mining
operations on MLDB
• Four primitive operations in WebML are
– COVERS
– COVERED BY
– LIKE
– CLOSE TO
SELECT *
FROM document IN "www.engr.smu.edu"
WHERE ONE OF keywords COVERS "cat"
Personalization
• Contents of a web page are modified to fit the
desires of the user
• Advertisements are sent to a potential customer based on specific knowledge about that customer
• Personalization is performed on target web page
• Targeting is different from personalization
– In targeting businesses display advertisements at
other sites visited by their users
– In personalization when a person visits a Web site, the
advertising can be designed specifically for that
person
Personalization Contd….
• Personalization is a combination of clustering,
classification and prediction
• Types of personalization are
– Manual techniques – user registration details
– Collaborative filtering
– Content-based filtering (a small sketch follows below)
• E.g., My Yahoo!
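The sketch below is one way content-based filtering could work: rank pages by the cosine similarity between a user's keyword profile and each page's keyword vector. The profile, page vectors, and file names are all hypothetical illustration data, not from the slides.

# Content-based filtering sketch (all data hypothetical).
import math

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

profile = {"cricket": 5, "score": 3, "weather": 1}            # built from past visits
pages = {
    "sports.html": {"cricket": 4, "score": 2, "team": 1},
    "finance.html": {"stock": 5, "market": 3},
}
ranked = sorted(pages, key=lambda p: cosine(profile, pages[p]), reverse=True)
print(ranked)    # pages closest to the user's interests come first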
Web Structure Mining
• Creating a model of the web organization
• Used to classify Web pages or to create
similarity measures between documents
Page Rank
• Designed to increase the effectiveness of
search engines and improve their efficiency
• Used to
– Measure the importance of a page
– Prioritize the pages returned from a traditional
search engine using keyword searching
• Page Rank is calculated based on the number
of pages that point to it
Page Rank Contd…
PR(p) = c · Σ_{q ∈ Bp} PR(q) / Nq

where c is a constant between 0 and 1 used for normalization;
Bp = set of pages that point to p;
Fp = set of links out of p;
Nq = |Fq|
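Below is a minimal iterative sketch of the formula above. It also adds a uniform term standing in for the cE(v) "artificial link" correction discussed on the next slide, so cyclic references do not trap all of the rank; the example graph, constant c, and iteration count are assumptions for illustration.

# Iterative sketch of PR(p) = c * sum over q in Bp of PR(q)/Nq.
def pagerank(out_links, c=0.85, iters=50):
    pages = list(out_links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}                 # start from a uniform ranking
    for _ in range(iters):
        new = {p: (1 - c) / n for p in pages}        # uniform stand-in for the cE(v) term
        for q, links in out_links.items():
            if links:                                # Nq = |Fq|, the out-degree of q
                share = pr[q] / len(links)
                for p in links:
                    new[p] += c * share              # q belongs to Bp for every p it links to
        pr = new
    return pr

print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))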
Rank Sink
• When there is a cyclic reference a rank sink
problem occurs
• Eliminated by adding an additional term cE(v) to the PageRank formula
• E(v) is a vector that adds artificial links
Hyperlink-induced topic search(HITS)
• Finds hubs and authoritative pages
• HITS has two components
– Based on a given set of keywords relevant pages
are found
– Hubs and authority measures are associated with
these pages. Pages with highest values are
returned
Authorities and hubs
• The algorithm produces two types of pages:
- Authority: pages that provide important, trustworthy information on a given topic
- Hub: pages that contain links to authorities
• Authorities and hubs exhibit a mutually
reinforcing relationship: a better hub points to
many good authorities, and a better authority
is pointed to by many good hubs
Authorities and hubs (2)
Example (graph figure omitted): node 1 is pointed to by nodes 2, 3, and 4 and points to nodes 5, 6, and 7, so
a(1) = h(2) + h(3) + h(4)
h(1) = a(5) + a(6) + a(7)
Definitions
• Authority: pages that provide important, trustworthy information on a given topic
• Hubs: pages that contain links to authorities
• Indegree: number of incoming links to a given node, used
to measure the authoritativeness
• Outdegree: number of outgoing links from a given node,
here it is used to measure the hubness
HITS Algorithm
• Hubs point to lots of authorities.
• Authorities are pointed to by lots of hubs.
• Together they form a bipartite graph:
(Figure omitted: hubs on one side of the bipartite graph, authorities on the other.)
Step By Step HITS-1
• Determine a base set S
• Let the set of documents returned by a standard search engine be called the root set R
• Initialize S to R
Step By Step HITS - 2
• Add to S all pages pointed to by any page in R
• Add to S all pages that point to any page in R
• Maintain for each page p in S:
– Authority score: ap (vector a)
– Hub score: hp (vector h)
Step By Step HITS - 3
• For each node, initialize ap and hp to 1/n
• In each iteration, calculate the authority weight for each node p in S:
ap = sum of hq over all pages q in S that link to p
Step By Step HITS - 4
• In each iteration, calculate the hub weight for each node p in S:
hp = sum of aq over all pages q in S that p links to
• Note: The hub weights are computed from the current authority weights, which were computed from the previous hub weights.
Step By Step HITS - 5
• After new weights are computed for all nodes, the weights are normalized (e.g., so that the squares of the authority weights sum to 1, and likewise for the hub weights).
The Pseudocode of HITS
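The pseudocode figure from this slide is not reproduced here (the formal HITS pseudocode from Dunham appears later in this section). Below is a minimal Python sketch of the iterative updates described in the preceding steps, run on a small graph consistent with the earlier example (nodes 2, 3, 4 point to node 1, and node 1 points to nodes 5, 6, 7); the graph and iteration count are assumptions for illustration.

# Iterative HITS sketch: a(p) = sum of hub scores of pages that link to p,
# h(p) = sum of authority scores of pages that p links to, then normalize
# so that the squared weights sum to 1.
import math

def hits(out_links, iters=20):
    pages = set(out_links) | {p for links in out_links.values() for p in links}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iters):
        # Authority weights from the current hub weights.
        auth = {p: sum(hub[q] for q, links in out_links.items() if p in links)
                for p in pages}
        # Hub weights from the just-computed authority weights.
        hub = {p: sum(auth[q] for q in out_links.get(p, [])) for p in pages}
        # Normalization step.
        na = math.sqrt(sum(a * a for a in auth.values())) or 1.0
        nh = math.sqrt(sum(h * h for h in hub.values())) or 1.0
        auth = {p: a / na for p, a in auth.items()}
        hub = {p: h / nh for p, h in hub.items()}
    return auth, hub

# Small graph matching the earlier example: 2, 3, 4 point to 1; 1 points to 5, 6, 7.
auth, hub = hits({"1": ["5", "6", "7"], "2": ["1"], "3": ["1"], "4": ["1"]})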
HITS Example
• Root Set R {1,2,3,4}
• Extend it to form the base set S
HITS Example Results
(Chart omitted: the authority and hubness weights computed for nodes 1–15 of the example graph.)
HITS vs PageRank
• HITS emphasizes mutual reinforcement between authority
and hub webpages, while PageRank does not attempt to
capture the distinction between hubs and authorities. It ranks
pages just by authority.
• HITS is applied to the local neighborhood of pages
surrounding the results of a query whereas PageRank is
applied to the entire web
• HITS is query dependent but PageRank is query-independent
HITS vs PageRank (2)
• Both HITS and PageRank correspond to matrix
computations.
• Both can be unstable: changing a few links can
lead to quite different rankings.
• PageRank doesn't handle pages with no
outedges very well, because they decrease the
PageRank overall
Conclusion
• HITS is a general algorithm used for calculating
the authority and hubs in order to rank the
retrieved data
• The basic aim of the algorithm is to induce a subgraph of the Web by finding a set of pages relevant to a search on a given topic (query).
Input:
W // WWW viewed as a directed graph
q // query
s // support
Output:
A // set of authority pages
H // set of hub pages

HITS algorithm:
R = SE(W, q); // a search engine SE is used to find a small root set R
B = R ∪ {pages linked to from R} ∪ {pages that link to pages in R};
G(B, L) = subgraph of W induced by B; // B: vertices (pages) in G, L: links
G(B, L1) = delete links in G within the same site;
xp = Σ(q,p)∈L1 yq; // authority weights
yp = Σ(p,q)∈L1 xq; // hub weights
A = {p | p has one of the highest xp};
H = {p | p has one of the highest yp};
Web usage mining
• Mining on web usage data, or web logs
• Web log is a listing of page reference data
(clickstream data)
• Logs are examined from either a client or a server perspective
– Server perspective: mining uncovers information about the sites where the server resides
– Client perspective: information about a user is detected
• Aids in personalization
Web usage mining applications
• Personalization for a user
• The frequent access behavior of users can be used to improve overall performance
• Caching of frequently accessed pages
• The linkage structure can be modified based on common access behavior
• Gather business intelligence to improve sales
and advertisements
Issues related with web log
• The exact user cannot be identified from the log
• Because of Web client caching, the sequence of pages a user visits is difficult to uncover from the server side
• Legal, privacy, and security issues need to be resolved
Preprocessing
• The preprocessing phase includes
– Cleansing
– User identification
– Session identification
– Path completion
– Formatting
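The sketch below illustrates the user- and session-identification steps on cleansed log records. Each record is assumed to be a (user, page, timestamp in seconds) triple, and a new session is started after a 30-minute gap; that timeout is a common convention, not something stated on the slides.

# User and session identification sketch.
from collections import defaultdict

def sessionize(log, timeout=1800):
    by_user = defaultdict(list)
    for user, page, t in sorted(log, key=lambda r: (r[0], r[2])):
        by_user[user].append((page, t))             # user identification

    sessions = []
    for user, refs in by_user.items():              # session identification
        current = [refs[0]]
        for (page, t), (_, prev_t) in zip(refs[1:], refs):
            if t - prev_t > timeout:                # long gap -> start a new session
                sessions.append((user, current))
                current = []
            current.append((page, t))
        sessions.append((user, current))
    return sessions

print(sessionize([("u1", "p1", 0), ("u1", "p2", 60), ("u1", "p3", 4000)]))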
What is log?
• Log = {(u1, p1, t1), ..., (un, pn, tn)}
• ui ∈ U, pi ∈ P, ti is a timestamp
• P is the set of pages; U is the set of users
What is session?
• An ordered list of pages accessed by a user: S = {<p1, t1>, <p2, t2>, ..., <pn, tn>}
• Each session has a unique identifier called a session ID
• The length of a session is the number of pages in it, denoted by len(S)
• Let D be a database holding all sessions; the length of D is the sum of len(S) over all sessions it contains
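A tiny illustration of these definitions, with hypothetical session data:

# len(S) is the number of pages in a session; the length of the database D is
# the sum of len(S) over all sessions it holds (example data is made up).
D = {
    "S1": [("p1", 1), ("p2", 4), ("p3", 9)],   # session ID -> ordered <page, time> pairs
    "S2": [("p2", 12), ("p4", 15)],
}
len_S1 = len(D["S1"])                          # len(S1) = 3
len_D = sum(len(s) for s in D.values())        # length of D = 5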
Recap of networking
• What is an ISP? An Internet Service Provider.
• What are cookies? Cookies are used to identify a single user regardless of the machine used to access the Web.
Trie
• A data structure that is used to keep track of patterns during Web usage mining
• A path from the root to a leaf represents a sequence
• Tries are used to store strings for pattern-matching applications
• Each character in the string is stored on the edge to the node, and common prefixes of strings are shared
Sample tries
(Figure omitted: a sample trie and a sample suffix trie; each edge is labeled with a single character and "$" marks the end of a string.)
Characteristics of suffix trie
• Each internal node except the root has at least two children
• Each edge represents a nonempty subsequence
• The subsequences on edges leaving the same node begin with different symbols
• A suffix trie built for multiple sessions is called a generalized suffix tree (GST)
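The following is a minimal sketch of building a trie, and a generalized suffix trie by inserting every suffix of every session; the nested-dictionary representation and "$" end marker follow the conventions above but are otherwise an implementation assumption.

# Trie sketch: each edge holds one symbol and "$" marks the end of a sequence.
def insert(trie, sequence):
    node = trie
    for symbol in sequence:
        node = node.setdefault(symbol, {})     # shared prefixes reuse existing nodes
    node["$"] = {}                             # end-of-sequence marker

def build_gst(sessions):
    trie = {}
    for session in sessions:
        for i in range(len(session)):          # insert every suffix of the session
            insert(trie, session[i:])
    return trie

gst = build_gst([["A", "B", "C"], ["B", "C", "A"]])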
Pattern Discovery
• For clickstream data the common data mining technique is uncovering traversal patterns
• A traversal pattern is a set of pages visited by a user in a session
• Different types of traversal patterns are distinguished by the following features:
– Whether duplicate page references are allowed
– Whether the page references must be contiguous or need only occur in the same session
– Whether the pattern must be maximal
• A frequent pattern is maximal if it has no superpattern that is also frequent
Association rules
• Can be used to find what pages are accessed
together
• In this case a page is regarded as an item and
a session is regarded as a transaction with
duplicates and ordering ignored
• Support = (number of sessions containing the itemset) / (total number of transactions or sessions)
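A small sketch of the support computation above, treating each session as a transaction with duplicates and ordering ignored; the session data is made up for illustration.

# Support of a set of pages over sessions.
def support(pages, sessions):
    pages = set(pages)
    containing = sum(1 for s in sessions if pages <= set(s))
    return containing / len(sessions)

sessions = [["p1", "p2", "p3"], ["p2", "p3"], ["p1", "p4"]]
print(support({"p2", "p3"}, sessions))   # 2 of 3 sessions -> 0.666...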
Sequential Patterns
• Sequential pattern is an ordered set of pages
that satisfies a given support and is maximal
• Support is the percentage of customers who
have the pattern
• Users can span many sessions, hence
sequential patterns can also span many
sessions
Algorithm to find sequential patterns
Input:
D = {S1, S2, ..., Sk} // database of sessions
s // support
Output: sequential patterns

Sequential pattern algorithm:
D = sort D on user ID and time of the first page reference in each session;
find L1 in D;
L = AprioriAll(D, s, L1);
find maximal reference sequences from L;
The Apriori Property of Sequential Patterns
• A basic property: Apriori (Agrawal & Srikant '94)
– If a sequence S is not frequent, then none of the super-sequences of S is frequent
– E.g., if <hb> is infrequent, then so are <hab> and <(ah)b>
Example sequence database (support threshold min_sup = 2):
Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>
GSP—Generalized Sequential Pattern Mining
• GSP (Generalized Sequential Pattern) mining
algorithm
• Outline of the method
– Initially, every item in DB is a candidate of length-1
– for each level (i.e., sequences of length-k) do
• scan database to collect support count for each candidate
sequence
• generate candidate length-(k+1) sequences from length-k frequent
sequences using Apriori
– repeat until no frequent sequence or no candidate can be
found
• Major strength: Candidate pruning by Apriori
Finding Length-1 Sequential Patterns
• Initial candidates:
– <a>, <b>, <c>, <d>, <e>, <f>, <g>, <h>
• Scan database once, count support for
candidates
min_sup = 2

Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>

Cand.   Sup
<a>     3
<b>     5
<c>     4
<d>     3
<e>     3
<f>     2
<g>     1
<h>     1
Generating Length-2 Candidates
• From the 6 frequent length-1 patterns (<a> through <f>) there are 51 length-2 candidates:
– 36 candidates of the form <xy> (one item after another): <aa>, <ab>, ..., <af>, <ba>, ..., <ff>
– 15 candidates of the form <(xy)> (two items in the same element): <(ab)>, <(ac)>, ..., <(ef)>
• Without the Apriori property there would be 8*8 + 8*7/2 = 92 candidates; Apriori prunes 44.57% of the candidates
Finding Length-2 Sequential Patterns
• Scan database one more time, collect support count
for each length-2 candidate
• There are 19 length-2 candidates which pass the
minimum support threshold
– They are length-2 sequential patterns
The GSP Mining Process
Example database (min_sup = 2):
Seq. ID   Sequence
10        <(bd)cb(ac)>
20        <(bf)(ce)b(fg)>
30        <(ah)(bf)abf>
40        <(be)(ce)d>
50        <a(bd)bcb(ade)>

• 1st scan: 8 candidates (<a> <b> <c> <d> <e> <f> <g> <h>), 6 length-1 sequential patterns
• 2nd scan: 51 candidates (<aa> <ab> ... <af> <ba> <bb> ... <ff> <(ab)> ... <(ef)>), 19 length-2 sequential patterns; 10 candidates not in the DB at all
• 3rd scan: 46 candidates (<abb> <aab> <aba> <baa> <bab> ...), 19 length-3 sequential patterns; 20 candidates not in the DB at all
• 4th scan: 8 candidates (e.g., <abba>, which cannot pass the support threshold, and <(bd)bc>, which is not in the DB at all), 6 length-4 sequential patterns
• 5th scan: 1 candidate, 1 length-5 sequential pattern: <(bd)cba>
The GSP Algorithm
• Take sequences in form of <x> as length-1 candidates
• Scan database once, find F1, the set of length-1
sequential patterns
• Let k=1; while Fk is not empty do
– Form Ck+1, the set of length-(k+1) candidates from Fk;
– If Ck+1 is not empty, scan database once, find Fk+1, the set of
length-(k+1) sequential patterns
– Let k=k+1;
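Below is a level-wise sketch in the spirit of the GSP outline above, restricted to sequences of single items (no itemset elements, taxonomies, or time constraints), so it is not the full GSP algorithm. The example database flattens the slides' itemset elements into plain item sequences, so its counts will not match the walkthrough exactly.

# Level-wise sequential pattern sketch (simplified GSP-style mining).
def is_subseq(pat, seq):
    it = iter(seq)
    return all(any(x == y for y in it) for x in pat)

def gsp_simple(db, min_sup):
    items = sorted({x for seq in db for x in seq})
    frequent, candidates = [], [(x,) for x in items]        # length-1 candidates
    while candidates:
        # Scan the database once per level; keep candidates meeting min_sup.
        level = [p for p in candidates if sum(is_subseq(p, s) for s in db) >= min_sup]
        frequent.extend(level)
        # Join step: p joins with a frequent pattern q when p without its first
        # item equals q without its last item; the candidate is p plus q's last item.
        level_set = set(level)
        candidates = [p + (x,) for p in level for x in items
                      if p[1:] + (x,) in level_set]
    return frequent

db = [list("bdcbac"), list("bfcebfg"), list("ahbfabf"), list("beced"), list("abdbcbade")]
print(gsp_simple(db, min_sup=2))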
The GSP Algorithm
• Benefits from the Apriori pruning
– Reduces search space
• Bottlenecks
– Scans the database multiple times
– Generates a huge set of candidate sequences
There is a need for more
efficient mining methods