Slide Transcript

Socialbots and Their Implications on Online
Social Networks
Md Abdul Alim, Xiang Li and Tianyi Pan
Group 18
Outline
 Overview of socialbots
 How socialbots spread dangers
 Impacts of socialbots
 Infiltration mechanism: a case study
 Socialbot detection
2
Overview
 A socialbot is a piece of software that controls a user
account in an online social network and passes itself off
as a human being
3
The dangers of socialbots
 Harvest private user data
 Socialbots can be used to collect organizational data
 Online surveillance
 Profiling
 Data commoditization
4
Contd.
 Spread misinformation
 OSNs are an attractive medium for abusive content,
and socialbots take advantage of it
 Propagate propaganda
 Political astroturfing
 Bias public opinion
 Influence user perception
5
Contd.
 Malware infection
 Infect computers and use them for DDoS attacks
 Social spamming
 Fraudulent activities
6
Impact of socialbots
 OSNs are a growing source of income for advertisers,
investors, and developers
 Inaccurate representation of actual users in OSNs
severely impacts the revenue of dependent businesses
Boshmaf et al. (2011) showed that Facebook can be
infiltrated by socialbots sending friend requests. The
average reported acceptance rate was 35.7%, rising to
80% depending on how many mutual friends the
socialbots had with the infiltrated users.
7
Impact of socialbots (contd.)
8
Socialbots: a case study
 Elyashar et al. (2013) performed a study on infiltrating
specific users in targeted organizations using socialbots
 Technology-oriented organizations were chosen to
emphasize the vulnerability of users in OSNs
 Employees of these organizations should be more aware
of the dangers of exposing private information
 An infiltration is defined as accepting a socialbot's friend
request. Upon accepting the request, users unknowingly
expose information about themselves and their
workplace, which can lead to a security compromise
9
Socialbot: infiltration mechanism
 OSN: Facebook
 Target organizations: 3 [selected by the authors, not
disclosed]
 Targeted users: 10
 Socialbot: one socialbot per organization
 The idea is to send friend requests to all of the targeted
users' mutual friends who work or worked in the same
targeted organization. The rationale is to gain as many
mutual friends as possible and thereby increase the
probability that the socialbot's friend requests will be
accepted by the targeted users.
10
Steps: infiltration mechanism
1. Step 1:
Crawl the targeted organizations to gather public information
regarding their employees who have a Facebook user account
and declared that they work or worked in the targeted
organizations
2. Step 2:
Choose 10 users at random as targets for infiltration
3. Step 3:
Increase the credibility of the socialbot: send friend requests to
random users, each of whom has more than 1,000 friends,
regardless of organization
4. Step 4:
Once the socialbot has 50 friends, send friend requests to the
targeted users' mutual friends
11
Algorithm: infiltration mechanism
12
Result of the study
 Socialbot 1 in Organization 1 succeeded in infiltrating
50% of the targeted users
 Socialbot 2 in Organization 2 succeeded in infiltrating
70% of the targeted users
Results for two organizations
How to detect the socialbots?
13
Socialbot Detection
14
Existing Detection Methods
 Feature-based detection
15
Feature-based Detection
 Relies on user-level activities and account details
 Uses machine learning techniques to classify accounts
as fake or real (see the sketch below)
 For the attacker: relatively easy to circumvent
 Mimic real users!
 Only 20% of fake accounts are detected by this
method (Boshmaf et al. 2011)
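A minimal sketch of a feature-based classifier of this kind, assuming synthetic account-level features and labels (the feature names below are illustrative, not the ones used in the cited work):

```python
# Minimal sketch: classify accounts as fake or real from per-account features.
# Features and labels here are synthetic placeholders, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy features per account: [friend_count, posts_per_day, profile_completeness]
X = rng.random((1000, 3))
y = rng.integers(0, 2, size=1000)          # 1 = fake, 0 = real (random labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```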
16
Existing Detection Methods
 Feature-based detection
 Graph-based detection
17
Graph-based Detection
 Rank nodes based on the landing probability of short
random walks started from trusted nodes
18
Graph-based Detection
 Perform a cut based on the node ranking (see the sketch below)
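A minimal sketch of this ranking-and-cut idea, assuming a small toy graph, hand-picked trusted seeds, and an unweighted SybilRank-style propagation (all illustrative choices, not the evaluation setup of the cited work):

```python
# Sketch: rank nodes by the landing probability of short random walks
# started from trusted nodes, then cut at the bottom of the ranking.
import math
import networkx as nx

G = nx.karate_club_graph()          # stand-in social graph
trusted = {0, 33}                   # assumed trusted seed nodes

n = G.number_of_nodes()
trust = {v: (1.0 / len(trusted) if v in trusted else 0.0) for v in G}

# Short random walk: ~log2(n) power-iteration rounds of trust propagation.
for _ in range(math.ceil(math.log2(n))):
    new_trust = {v: 0.0 for v in G}
    for u in G:
        share = trust[u] / G.degree(u)
        for v in G.neighbors(u):
            new_trust[v] += share
    trust = new_trust

# Rank by degree-normalized trust; the "cut" flags the lowest-ranked nodes.
ranked = sorted(G, key=lambda v: trust[v] / G.degree(v), reverse=True)
print("most suspicious nodes:", ranked[-5:])
```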
19
Graph-based Detection
 Assumption: social infiltration on a large scale is
infeasible. Not always true!
(Figure from Boshmaf et al. 2011)
20
Graph-based Detection
21
Solution: Integro (Boshmaf et al. 2015)
 Find potential victims
 Machine learning method (random forests)
 Assign each node a probability of being a victim
 Create weighted graph & choose trusted nodes
 Decide edge weights based on their incident nodes’
victim probability
 The higher the probability, the lower the weight
 Community-based trusted node selection
 Rank nodes based on short random walks in the
weighted graph
22
Integro
23
Integro
24
Integro
25
Find Potential Victims
 Random Forest (RF) learning method
 Decision-tree-based learning
 Split the dataset into subsets and train a decision
tree on each subset
 Cross-validation method (see the sketch below)
 Chop the dataset into 10 equally sized sets
 Run the RF method on 9 sets
 Use the remaining one for testing
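A sketch of this victim-prediction step with 10-fold cross-validation, assuming synthetic feature vectors in place of the real Facebook/Tuenti profile features:

```python
# Sketch: random forest victim prediction with 10-fold cross-validation.
# Feature vectors and labels are synthetic stand-ins, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((2000, 5))                 # toy per-user feature vectors
y = rng.integers(0, 2, size=2000)         # 1 = victim (befriended a fake), 0 = not

clf = RandomForestClassifier(n_estimators=100, random_state=1)
# 10 folds: train on 9 sets, test on the held-out set, rotate.
scores = cross_val_score(clf, X, y, cv=10)
print("mean CV accuracy:", scores.mean())

# Fit on all data to get per-node victim probabilities, which the next
# step turns into edge weights.
victim_prob = clf.fit(X, y).predict_proba(X)[:, 1]
```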
26
Create Weighted Graph & Choose Trusted Nodes
 Assign edge weights based on victim probabilities
 Choose trusted nodes (see the sketch below)
 Detect communities with the Louvain method
 Randomly pick a small set of nodes from each
community
 Manually verify the selected nodes
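A sketch of the weighting and seed-selection steps, using random numbers as stand-in victim probabilities, a simplified edge-weight formula (not the paper's exact definition), and the Louvain implementation shipped with networkx:

```python
# Sketch: weight edges by victim probability, then pick trusted-seed candidates
# per Louvain community for manual verification.
import random
import networkx as nx
from networkx.algorithms.community import louvain_communities

random.seed(2)
G = nx.karate_club_graph()                      # stand-in social graph
victim_prob = {v: random.random() for v in G}   # placeholder victim probabilities

# Edges touching likely victims get low weight, so trust flows less readily
# across them (a simplified version of the idea on this slide).
for u, v in G.edges():
    G[u][v]["weight"] = 1.0 - max(victim_prob[u], victim_prob[v])

communities = louvain_communities(G, weight="weight", seed=2)
candidates = [random.sample(sorted(c), k=min(2, len(c))) for c in communities]
print("trusted-seed candidates per community:", candidates)
```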
27
Rank Nodes Based on Short Random Walks
 Trust propagation process
 Stop after log n rounds
 Rank nodes in descending order of degree-normalized
trust (see the sketch below)
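A sketch of the ranking step, reusing the same toy weighted graph and assuming two manually verified seeds; the normalization shown (trust divided by weighted degree) is the general idea, and the paper's exact formula may differ:

```python
# Sketch: propagate trust from trusted seeds over the weighted graph for
# ~log2(n) rounds, then rank nodes by weighted-degree-normalized trust.
import math
import random
import networkx as nx

random.seed(2)
G = nx.karate_club_graph()
victim_prob = {v: random.random() for v in G}
for u, v in G.edges():
    G[u][v]["weight"] = 1.0 - max(victim_prob[u], victim_prob[v])

trusted = {0, 33}                              # assumed manually verified seeds
trust = {v: (1.0 / len(trusted) if v in trusted else 0.0) for v in G}
wdeg = {v: sum(d["weight"] for _, _, d in G.edges(v, data=True)) for v in G}

for _ in range(math.ceil(math.log2(G.number_of_nodes()))):
    nxt = {v: 0.0 for v in G}
    for u in G:
        if wdeg[u] > 0:
            for _, v, d in G.edges(u, data=True):
                nxt[v] += trust[u] * d["weight"] / wdeg[u]
    trust = nxt

# Lower degree-normalized trust -> more likely to be a fake account.
ranking = sorted(G, key=lambda v: trust[v] / max(wdeg[v], 1e-12), reverse=True)
print("lowest-ranked (most suspicious) nodes:", ranking[-5:])
```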
28
Experiments
 Datasets
 Labeled feature vectors (for learning)
 8.8K public Facebook profiles (32% victims)
 60K full Tuenti profiles (50% victims)
 Graph samples (for detection)
 Snapshot of Tuenti’s daily active user graph on Feb. 6, 2014
29
Feature Vector
30
Experiment Results
 Precision (In Tuenti)
31
Experiment Results
 Scalability (in small-world graphs)
[Plot: RF and Ranking stages]
32
What else can be done?
 Stop fake accounts at the time they are created?
 Fake accounts send random friend requests at the
time they are created
 It is abnormal when the friends of a real person all
belong to different communities
 Methods other than random walks to cut the graph?
 The current random walk method is limited to
undirected graphs
33
Questions?
34
Thank you!
35