
OneLab - PlanetLab Europe
Presentation for Rescom 2007
Serge Fdida
Thomas Bourgeau
Laboratoire LIP6 – CNRS
Université Pierre et Marie Curie – Paris 6
http://www.lip6.fr/rp
The OneLab Project
www.one-lab.org
• A Europe-wide project
– European Commission funding under the FP6 funding programme
– A STREP (Specific Targeted Research Project) funded by the European Commission: IST-2006-034819
– Start: September 2006; duration: 2 years
– Funding: 1.9 M€; total budget: 2.86 M€
• Aims
– Extend, deepen, and federate PlanetLab
1
The OneLab Consortium
• Project leader
– Université Pierre et Marie Curie (France)
• Technical direction
– INRIA (France)
• Partners
– Universidad Carlos III de Madrid (Spain)
– Université Catholique de Louvain (Belgium)
– Università di Napoli (Italy)
– Università di Pisa (Italy)
– Alcatel Italia (Italy)
– Quantavis (Italy)
– Telekomunikacja Polska (Poland)
– France Telecom (France)
2
OneLab Goals
• OneLab is a concrete path towards Experimental Facilities
– Based on PlanetLab
• OneLab will help us better understand federation, which will be key to Experimental Facility success
• OneLab will also make considerable progress in:
• Extending
– Extend PlanetLab into new environments, beyond the traditional wired internet.
• Deepening
– Deepen PlanetLab’s monitoring capabilities.
• Federating
– Provide a European administration for PlanetLab nodes in Europe.
3
Outline
• PlanetLab
• OneLab
• PlanetLab Europe
• Practice
4
PlanetLab
• An open platform for:
– testing overlays,
– deploying experimental services,
– deploying commercial services,
– developing the next generation of internet technologies.
• A set of virtual machines
– distributed virtualization
– each of 350+ network services runs in its own slice
5
PlanetLab nodes
(Single PLC, located at Princeton)
• 784 machines spanning 382 sites and 35+ countries
• nodes within a LAN-hop of 2M+ users
• Administration at Princeton University
– Prof. Larry Peterson, six full-time systems administrators
6
Usage Stats
• Slices: 350 - 425
• AS peers: 6000
• Users: 1028
• Bytes-per-day: 2 - 4 TB
– Coral CDN represents about half of this
• IP-flows-per-day: 190M
• Unique IP-addrs-per-day: 1M
• Experiments on PlanetLab figure in many papers at major networking conferences
7
Slices
8
User Opt-in
[Diagram: a client behind a NAT connects to a server running on PlanetLab]
12
Per-Node View
[Diagram: Node Manager and Local Admin VM alongside VM1, VM2, …, VMn, all on a Virtual Machine Monitor (VMM)]
VMM: currently Linux with Vserver extensions; could eventually be Xen
13
Per-Node View
[Diagram: the same per-node view, with the VMM layered on the Linux kernel and hardware]
14
Per-Node Mechanisms
[Diagram: SliverMgr, Proper, PlanetFlow, SliceStat, pl_scs, and pl_mom run in the Node Manager and Owner VM alongside VM1, VM2, …, VMn]
Virtual Machine Monitor (VMM):
– Linux kernel (Fedora Core)
– + Vservers (namespace isolation)
– + Schedulers (performance isolation)
– + VNET (network virtualization)
15
PlanetLab
16
Architecture (1)
• Node Operating System
– isolate slices
– audit behavior
• PlanetLab Central (PLC)
– remotely manage nodes
– bootstrap service to instantiate and control slices
• Third-party Infrastructure Services
– monitor slice/node health
– discover available resources
– create and configure a slice
– resource allocation
18
18
Architecture (2)
[Diagram: service developers learn about nodes from, and request slices through, the Slice Authority, which issues new slice IDs and identifies slice users (to resolve abuse); the Management Authority pushes software updates to, and collects auditing data from, PlanetLab nodes at owner sites 1…N; users access the slices]
19
Architecture (3)
[Diagram: the Management Authority (MA) maintains a node database for node owners; the Slice Authority (SA) maintains a slice database for service developers; the Node Manager (NM) and VMM on each node instantiate the owner VM and slice VMs]
20
Requirements
1) Global platform that supports both short-term experiments and long-running services.
– services must be isolated from each other
• performance isolation
• name space isolation
– multiple services must run concurrently
Distributed Virtualization
– each service runs in its own slice: a set of VMs
21
Requirements
2) Must convince sites to host nodes running code written by unknown researchers.
– protect the Internet from PlanetLab
Chain of Responsibility
– explicit notion of responsibility
– trace network activity to responsible party
22
Requirements
3) Federation
– universal agreement on minimal core (narrow waist)
– allow independent pieces to evolve independently
– identify principals and trust relationships among them
23
Trust Relationships
[Diagram: N×N trust relationships among sites (Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, ETH, Harvard, HP Labs, Intel, NEC Labs, …) and slices (princeton_codeen, harvard_ice, hplabs_donutlab, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, …) are reduced by a trusted intermediary (PLC)]
24
Trust Relationships (cont)
[Diagram: trust relationships 1-4 among the Node Owner, PLC, and Service Developer (User)]
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure
25
Trust Relationships (cont)
[Diagram: trust relationships 1-6 among the Node Owner, Management Authority (MA), Slice Authority (SA), and Service Developer (User)]
1) PLC expresses trust in a user by issuing credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure
5) MA trusts SA to reliably map slices to users
6) SA trusts MA to provide working VMs
26
VMM
• Linux
– significant mind-share
• Vserver
– scales to hundreds of VMs per node (12MB each)
• Scheduling
– CPU
• fair share per slice (guarantees possible)
– link bandwidth
• fair share per slice
• average rate limit: 1.5Mbps (24-hour bucket size)
• peak rate limit: set by each site (100Mbps default)
– disk
• 5GB quota per slice (limit run-away log files)
– memory
• no limit
27
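The average rate limit above (1.5 Mbps, enforced over a 24-hour bucket) is a token-bucket scheme. A minimal sketch of the idea; the class and parameter names are mine, not PlanetLab's:

```python
class TokenBucket:
    """Illustrative token bucket: tokens accrue at `rate` bytes/sec,
    up to `capacity` bytes; a send succeeds only while tokens remain."""

    def __init__(self, rate_bps, horizon_secs):
        self.rate = rate_bps / 8                # bytes per second
        self.capacity = self.rate * horizon_secs
        self.tokens = self.capacity             # start with a full bucket
        self.last = 0.0

    def allow(self, now, nbytes):
        # refill in proportion to elapsed time, capped at the bucket size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# A slice may burst up to the site's peak rate, but averaged over
# 24 hours it cannot exceed 1.5 Mbps.
bucket = TokenBucket(rate_bps=1.5e6, horizon_secs=24 * 3600)
print(bucket.allow(0.0, 10_000_000))  # → True (a short burst fits)
```

The 24-hour horizon makes the bucket large (about 16 GB), so short bursts pass untouched while sustained senders are throttled.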
VMM (cont)
• VNET
– socket programs “just work”
• including raw sockets
– slices should be able to send only…
• well-formed IP packets
• to non-blacklisted hosts
– slices should be able to receive only…
• packets related to connections that they initiated (e.g., replies)
• packets destined for bound ports (e.g., server requests)
– essentially a switching firewall for sockets
• leverages Linux's built-in connection tracking modules
– also supports virtual devices
• standard PF_PACKET behavior
• used to connect to a “virtual ISP”
28
Long-Running Services
• Content Distribution
– CoDeeN: Princeton (serving > 1 TB of data per day)
– Coral CDN: NYU
– Cobweb: Cornell
• Internet Measurement
– ScriptRoute: Washington, Maryland
• Anomaly Detection & Fault Diagnosis
– PIER: Berkeley, Intel
– PlanetSeer: Princeton
• DHT
– Bamboo (OpenDHT): Berkeley, Intel
– Chord (DHash): MIT
29
Services (cont)
• Routing
– i3: Berkeley
– Virtual ISP: Princeton
• DNS
– CoDNS: Princeton
– CoDoNs: Cornell
• Storage & Large File Transfer
– LOCI: Tennessee
– CoBlitz: Princeton
– Shark: NYU
• Multicast
– End System Multicast: CMU
– Tmesh: Michigan
30
Node Manager
• SliverMgr
– creates VM and sets resource allocations
– interacts with…
• bootstrap slice creation service (pl_scs)
• third-party slice creation & brokerage services (using tickets)
• Proper: PRivileged OPERations
– grants unprivileged slices access to privileged info
– effectively “pokes holes” in the namespace isolation
– examples
• files: open, get/set flags
• directories: mount/unmount
• sockets: create/bind
• processes: fork/wait/kill
31
Auditing & Monitoring
• PlanetFlow
– logs every outbound IP flow on every node
• accesses ulogd via Proper
• retrieves packet headers, timestamps, context ids (batched)
– used to audit traffic
– aggregated and archived at PLC
• SliceStat
– has access to kernel-level / system-wide information
• accesses /proc via Proper
– used by global monitoring services
– used to performance-debug services
32
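The aggregation step above (PLC archiving PlanetFlow logs) boils down to summing outbound bytes per sending slice. A toy sketch; the record fields mirror what the slide lists (headers, timestamps, context ids), but the data values are illustrative:

```python
from collections import defaultdict

# Illustrative flow records of the kind PlanetFlow logs: the context id
# identifies the slice that generated the flow.
flows = [
    {"context_id": 101, "dst": "198.51.100.4", "bytes": 1500},
    {"context_id": 101, "dst": "198.51.100.9", "bytes": 600},
    {"context_id": 202, "dst": "203.0.113.2", "bytes": 900},
]

def bytes_per_slice(records):
    """Aggregate outbound bytes per context id, as PLC might when archiving."""
    totals = defaultdict(int)
    for r in records:
        totals[r["context_id"]] += r["bytes"]
    return dict(totals)

print(bytes_per_slice(flows))  # → {101: 2100, 202: 900}
```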
Infrastructure Services
• Brokerage Services
– Sirius: Georgia
– Bellagio: UCSD, Harvard, Intel
– Tycoon: HP
• Environment Services
– Stork: Arizona
– Application Manager: MIT
• Monitoring/Discovery Services
– CoMon: Princeton
– PsEPR: Intel
– SWORD: Berkeley
– IrisLog: Intel
33
Outline
• PlanetLab
• OneLab
• PlanetLab Europe
• Practice
34
OneLab Goals
• Extend
– Extend PlanetLab into new environments, beyond the traditional wired internet.
• Deepen
– Deepen PlanetLab’s monitoring capabilities.
• Federate
– Provide a European administration for PlanetLab nodes in Europe.
35
OneLab Workpackages
• WP0 Management (UPMC)
• WP1 Operations (UPMC, with INRIA, FT, ALA, TP)
• WP2 Integration (INRIA, with UPMC)
• WP3 Monitoring (IRC lead)
– WP3A Passive monitoring (IRC)
– WP3B Topology monitoring (UPMC)
• WP4 New Environments (FT lead)
– WP4A WiMAX component (UCL)
– WP4B UMTS component (UniNa, with ALA)
– WP4C Multihomed component (UC3M, with IRC)
– WP4D Wireless ad hoc component (FT, with TP)
– WP4E Emulation component (UniPi, with UPMC, INRIA)
• WP5 Validation (UPMC, with all partners)
• WP6 Dissemination (UniNa, with all partners save TP)
36
PlanetLab Today
- A set of end-hosts
- A limited view of the underlying network
- Built on the wired internet
37
OneLab Vision for PlanetLab
- Reveal the underlying network
- Extend into new wired and wireless environments
38
Goal: Extend
39
Why Extend PlanetLab?
• Problem: PlanetLab nodes are connected to the traditional wired internet.
– They are mostly connected to high-performance networks such as Abilene, DANTE, NRENs.
– These are not representative of the internet as a whole.
– PlanetLab does not provide access to emerging environments.
40
OneLab’s New Environments
• WiMAX (Université Catholique de Louvain)
– Install two nodes connected via a commercial WiMAX provider
– Nodes on trucks (constrained mobility)
• UMTS (Università di Napoli, Alcatel Italia)
– Nodes on a UMTS micro-cell run by Alcatel Italia
• Wireless ad hoc networks (France Telecom at Lannion)
– Nodes in a Wi-Fi mesh network (like ORBIT)
41
OneLab’s New Environments
• Emulated (Università di Pisa)
– For emerging wireless technologies
– Based on dummynet
• Multihomed (Universidad Carlos III de Madrid)
42
Progress on Extension
• Added wireless capabilities to the kernel
– Will enable nodes to attach via WiMAX, UMTS, and WiFi
• Implementing Shim6 multihoming
– Nodes connected via IPv6 will be able to choose their paths
• Incorporating Wi-Fi emulation into dummynet
– Will allow experimentation in scenarios where deployment is difficult (other wireless technologies to follow)
43
Goal: Deepen
Expose the underlying network
44
Why Deepen PlanetLab?
• Problem: PlanetLab provides limited facilities to make applications aware of the underlying network
– PlanetLab consists of end-hosts
– Routing between nodes is controlled by the internet (this will change with GENI)
– Applications must currently make their own measurements
45
OneLab Monitoring Components
• Passive monitoring (Intel Research Cambridge)
– Track packets at the routers
– Use CoMo boxes placed within DANTE
• Active monitoring (Université Pierre et Marie Curie)
– Provide a view of the route structure
– Increase the scalability of widely distributed traceroute (traceroute@home)
– Reduce traceroute deficiencies on load-balanced paths (Paris traceroute)
– BGP-guided probing
46
Progress on Deepening
• CoMo is now OneLab-aware and has better scripting
– CoMo allows one to write scripts that track one’s own packets as they pass measurement boxes within the network
• Deploying traceroute@home, a distributed topology-tracing system
– Made fundamental improvements to traceroute to correct errors introduced by network load balancing (new tool: Paris traceroute)
47
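The load-balancing problem that Paris traceroute addresses can be shown with a toy model: a per-flow load balancer hashes header fields to pick a path, so probes whose header fields vary (classic traceroute) may report hops from different paths, while probes with constant fields see one consistent path. All names and the hash here are illustrative, not the real algorithms:

```python
def hash_flow(flow_id):
    """Toy stand-in for the header-field hash a per-flow balancer uses."""
    return sum(ord(c) for c in flow_id)

def path_for_probe(flow_id, paths):
    """Toy per-flow load balancer: the flow id determines the path."""
    return paths[hash_flow(flow_id) % len(paths)]

paths = [["A", "B", "D"], ["A", "C", "D"]]

# classic traceroute: header fields vary per probe, so the second hop
# reported may alternate between the two real paths
classic = [path_for_probe(f"probe{i}", paths)[1] for i in range(4)]

# Paris traceroute: fields kept constant, so every probe follows one path
paris = [path_for_probe("fixed", paths)[1] for _ in range(4)]

print(classic, paris)
```

In the toy run, `classic` mixes hops from both paths (an artifact, since no single route contains that mix), while `paris` is consistent.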
Goal: Federate
Before: a homogeneous system
48
Goal: Federate
After: a heterogeneous set of systems
49
Goal: Federate
[Diagram: multiple federated PlanetLab systems]
50
Federation
51
Why Federate PlanetLab?
Problem: changes to PlanetLab can come only through the administration at Princeton.
PlanetLab in the US is necessarily focused on US funding agencies’ research priorities.
• What if we want to study a particular wireless technology, and this requires changes to the source code?
• What if we wish to change the cost structure for small and medium-size enterprises (currently $25,000/yr.)?
52
OneLab and Federation
• OneLab will create a PlanetLab Europe.
– It will federate with PlanetLab in the US, Japan, and elsewhere.
– Eventually federate with “Private PlanetLabs” as well
• The federated structure will allow:
– PlanetLab Europe to set policy in accordance with European research priorities,
– PlanetLab Europe to customize the platform, so long as a common interface is preserved.
53
How to federate?
• A Private PlanetLab might have a rare resource
– e.g., a node behind a wireless link
– What are the right incentives to…
• encourage the Private PlanetLab to share the resource
• discourage over-subscription by other users?
• A Private PlanetLab might wish to customize the source code
– What are the right abstractions (APIs) that will allow both…
• inter-operability
• flexibility?
54
Federation
[Diagram: PLC A (PlanetLab Central) with sites A1 and A2; PLC B (PlanetLab Europe) with sites B1 and B2]
• We have two PlanetLab Centrals, each with sites attached
• PLC A is responsible for the green sites and PLC B for the red sites
• Without federation, the sites of one PLC can only see sites and nodes that are connected to their own PLC
55
Federation
[Diagram: PLC A (PlanetLab Central) and PLC B (PlanetLab Europe) federated, with sites A1, A2, B1, B2]
• With federation, all sites will be able to create slices on sites connected to the federated PLC
• Now, if we also federate B with a third PLC C, B will be able to see the sites of A, B, and C, whereas C will only see the sites of B and C, and A only those of A and B
56
Deployment of the Federation
Before OneLab
[Diagram: all other sites, other European sites, the UPMC site, OneLab partner sites, and the INRIA site all connect to PlanetLab Central (www.planet-lab.org); a federation test runs at boot.one-lab.org]
• All sites are connected to the PlanetLab Central at Princeton
• There is no federation
57
Deployment of the Federation
Next steps
[Diagram: all other sites remain on PlanetLab Central (www.planet-lab.org); the UPMC, INRIA, OneLab partner, and other European sites move to PlanetLab Europe (www.planet-lab.eu)]
• Federate PlanetLab Europe with PlanetLab Central
• We have installed the future PlanetLab Europe
• Federate the Private OneLab with PlanetLab Europe
• Move our nodes to PlanetLab Europe
– We should have at least 20 nodes on PlanetLab Europe to be taken seriously
58
Deployment of the Federation
Final goal
[Diagram: European sites consolidated under PlanetLab Europe (www.planet-lab.eu), federated with PlanetLab Central (www.planet-lab.org)]
• Find a way to move the other European sites to PlanetLab Europe
– All newly created European sites are connected to PlanetLab Europe?
– Force all European sites to connect to PlanetLab Europe at some point?
59
The Path to Federation
• Joint Access
– Administrators in Paris log in to machines at Princeton
• Management Authority
– PlanetLab machines in Europe boot from Paris rather than from Princeton
• Slice Authority
– Paris can create slices across PlanetLab, becoming a second global slice authority alongside Princeton
• Peering
– With an agreed interface, the Paris and Princeton code bases can diverge
60
Some Federation Issues
• A Private PlanetLab might have a rare resource
– e.g., a node behind a wireless link
– What are the right incentives to…
• encourage the Private PlanetLab to share the
resource
• discourage over-subscription by other users?
• A Private PlanetLab might wish to customize the
source code
– What are the right abstractions (APIs) that will allow
both…
• inter-operability
• flexibility?
61
Progress on Federation
• Jointly developed PlanetLab v4 with Princeton
– Allows two or more PLCs (PlanetLab Centrals)
– Each PLC maintains its own node database, receives updates from other PLCs regarding their nodes
– Users can create a slice across the whole system by requesting it through their own PLC
• Running an embryonic PlanetLab Europe
– Peered with the Princeton PLC
62
OneLab Operation and research
• OneLab:
– Research project
– OneLab Private: private experimental PlanetLab for the project members
• PlanetLab Europe:
– European PLC administered by OneLab
– Embryonic production testbed for Europe
– Federated with PLCs in the USA and elsewhere
63
Outline
• PlanetLab
• OneLab
• PlanetLab Europe
• Practice
64
PlanetLab Europe
• PlanetLab Europe
– Run by UPMC
– https://www.planet-lab.eu
– [email protected]
– Federation with Princeton has been tested once and will be permanent soon
65
Objectives
• Set up a functional PlanetLab Central in Europe to manage European sites
• Create a Private PlanetLab testbed to deploy and test our new components
• Create a federation between PlanetLab Europe and PlanetLab Central at Princeton
66
Resource allocation and provisioning
• Problem
– Many PlanetLab nodes are down or congested
• Needed
– Incentives for infrastructure/resource contributions (provisioning)
• Question
– How to allocate resources in case of congestion?
67
Resource allocation and provisioning
• Existing solutions
– Simple rules like those existing on PlanetLab
– Complicated virtual markets
– Simple rules that relate contributions to allocations
• Panayotis Antoniadis (expert on game theory and incentive mechanisms)
• Joint work with the IST project GRID Econ
68
Component - server side
• Database schema
• Web UI
• API - xmlrpc
• Boot server
• Netflow
• Build, monitoring, support
69
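Since the server-side API is XML-RPC, a client can talk to it from any language with an XML-RPC library. A sketch in Python; the endpoint path and `GetNodes` follow the public PLCAPI convention, but the server URL, credentials, and slice names are placeholders, and the actual remote call is left commented out:

```python
import xmlrpc.client

# Placeholder credentials; the auth-struct shape follows PLCAPI convention.
auth = {
    "AuthMethod": "password",
    "Username": "user@example.org",   # hypothetical account
    "AuthString": "secret",           # hypothetical password
}

# Constructing the proxy does not contact the server.
plc = xmlrpc.client.ServerProxy("https://www.planet-lab.eu/PLCAPI/")

# Example call (commented out to avoid a live network dependency):
# list the hostnames of all nodes known to this PLC.
# nodes = plc.GetNodes(auth, {}, ["hostname"])
```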
Component - node side
• Boot CD
• Boot Manager
• PlanetLab kernel
• vnet
• vserver
• netflow
70
Entities in the DB
• Sites
• Persons
• Keys
• Nodes
• Slices
• Slice attributes
• Implementation details (addresses, PDU, …)
71
Joining PlanetLab Europe
72
PlanetLab Europe
Site creation
• How to join?
– Just connect to https://planetlab.eu or https://private.onelab.org and fill in the “site registration” form
75
PlanetLab Europe
Site creation
• Warning: some fields must be unique across all federated PLCs
– http://www.one-lab.org/wiki/view/OnelabCoordination/PrivateDeploymentHowto
– A node's hostname, a user's email, a site's login base, and a slice's name must all be unique!
76
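The uniqueness rule above amounts to checking a candidate value against every federated PLC before registering it. A sketch; the per-PLC sets stand in for database queries, and all hostnames are illustrative:

```python
def clashes(value, plcs):
    """Return the names of PLCs that already use `value`
    (a hostname, email, login base, or slice name)."""
    return [name for name, existing in plcs.items() if value in existing]

# Illustrative registries: real checks would query each PLC's database.
plcs = {
    "PlanetLab Central": {"planetlab1.cs.princeton.edu"},
    "PlanetLab Europe": {"planetlab1.lip6.fr"},
}

print(clashes("planetlab1.lip6.fr", plcs))     # → ['PlanetLab Europe']
print(clashes("planetlab1.example.org", plcs)) # → [] (safe to register)
```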
PlanetLab Europe
Node creation
77
Node Boot/Install
(Actors: Node / Boot Manager / PLC (MA) Boot Server)
1. Node boots from BootCD (Linux loaded)
2. Hardware initialized
3. Network config read from floppy
4. Node contacts PLC (MA)
5. PLC sends the Boot Manager
6. Node executes the Boot Manager
7. Node key read into memory from floppy
8. Boot Manager invokes the Boot API
9. PLC verifies the node key, sends the current node state
10. If state = “install”, run the installer
11. Boot Manager updates the node state via the Boot API
12. PLC verifies the node key, changes the state to “boot”
13. Chain-boot the node (no restart)
14. Node booted
78
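Steps 9-13 of the boot sequence hinge on the node state PLC hands back. A sketch of that dispatch; the state names "install" and "boot" come from the slides, while the function and action strings are illustrative:

```python
def boot_manager_step(node_state):
    """Act on the node state returned by PLC and return
    (action taken, next state PLC records)."""
    if node_state == "install":
        # run the installer; PLC then flips the state to "boot"
        return ("run installer", "boot")
    if node_state == "boot":
        # chain-boot into the installed system without a restart
        return ("chain-boot node", "boot")
    raise ValueError(f"unknown node state: {node_state}")

action, next_state = boot_manager_step("install")
print(action, next_state)  # → run installer boot
```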
Chain of Responsibility
Join Request: PI submits Consortium paperwork and requests to join
PI Activated: PLC verifies the PI, activates the account, enables the site (logged)
User Activated: users create accounts with keys; the PI activates the accounts (logged)
Slice Created: PI creates a slice and assigns users to it (logged)
Nodes Added to Slices: users add nodes to their slice (logged)
Slice Traffic Logged: experiments generate traffic (logged by PlanetFlow)
Traffic Logs Centrally Stored: PLC periodically pulls traffic logs from the nodes
Network Activity → Slice → Responsible Users & PI
79
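The chain of responsibility means any logged flow can be walked back from network activity to a slice, and from the slice to its users and site PI. A toy lookup; the slice name, addresses, and records are all illustrative stand-ins for PLC's database:

```python
# Illustrative PLC records: slice -> its users and the responsible PI.
slices = {
    "upmc_paris6": {"users": ["researcher@lip6.fr"], "pi": "pi@lip6.fr"},
}

def responsible_party(flow):
    """Trace a PlanetFlow record back to the slice's users and PI."""
    entry = slices[flow["slice"]]
    return {"users": entry["users"], "pi": entry["pi"]}

flow = {"slice": "upmc_paris6", "dst": "203.0.113.7", "bytes": 1200}
print(responsible_party(flow))
```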
MyPLC design
• Database server
– Primary information store
– Nodes, users, slices
• API server
– Database frontend
– Programmatic interface
• Web server
– API frontend
– Administration
• Boot server
– Software distribution
• Node
– PlanetLab kernel
– Node Manager
80
Outline
• PlanetLab
• OneLab
• PlanetLab Europe
• Practice
81
New User
• In order to log in to PlanetLab computers, you should first register on the PlanetLab Europe joining-users page (select your site, email, status)
– https://www.planet-lab.eu:443/db/persons/register.php
– The PI of your site will confirm your account.
• Create an SSH private/public key pair using the ssh-keygen program
– ssh-keygen -t rsa
– A private key named id_rsa and a public key named id_rsa.pub are generated by default in .ssh/ in your home directory.
• After registration you will get an email from your PI with:
– Your slice name: <site_name>_<username>
• Upload your public key to your user account:
– https://www.planet-lab.eu:443/db/persons/
• Now you can log in to your PlanetLab account to verify your details (you can view your account information by clicking the "View Account" link)
82
First steps with PlanetLab Europe
• Logging into PlanetLab machines
– ssh -l <slicename> <machinename>
– ssh <slicename>@<machinename>
• Executing a command
– ssh -l <slicename> <machinename> <command>
• Installing software on a remote machine using ssh
– To install a program like yum on a PlanetLab machine you have to be root.
– ssh -n -t -l <slice_name> <machine_name> "su -c 'command'"
83
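When a command must run on many nodes, it helps to build the ssh invocation programmatically. A sketch that assembles the exact form shown above; the slice and node names are hypothetical:

```python
import subprocess  # only needed if you actually execute the command

def slice_ssh_argv(slicename, machine, command=None):
    """Build the invocation from the slide:
    ssh -l <slicename> <machinename> [<command>]"""
    argv = ["ssh", "-l", slicename, machine]
    if command is not None:
        argv.append(command)
    return argv

# Hypothetical slice and node:
argv = slice_ssh_argv("upmc_test", "planetlab1.example.org", "uname -a")
print(argv)
# subprocess.run(argv) would run `uname -a` inside the slice on that node
```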
First steps with PlanetLab Europe
• The first time you connect to a remote machine using ssh, you will get a host-key warning.
– That’s because new host keys are added to ~/.ssh/known_hosts at the first connection
• If you have a host key which is out of date, you can delete the offending line of the ~/.ssh/known_hosts file which contains the stale key
• Other SSH resources
– http://openssh.com/ : official releases for numerous Linux-based platforms
– http://www.chiark.greenend.org.uk/sgtatham/putty/ : ssh client for Windows
84
Copying files to/from PlanetLab machines
• Using SCP to copy files from/to a PlanetLab machine
– scp [user@host1:]filename1 [user@host2:]filename2
– filename1 and filename2 can each be a file or a directory
• Using rsync to copy a directory tree to/from a PlanetLab machine
– rsync is a utility to synchronize two directories to the same content.
– Example: rsync copies the full content of the directory ~/New_trace into the remote directory ~/PL/New_trace on the remote machine planet1.cs.huji.ac.il
85
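The rsync example above can be assembled as a command line. A sketch; the flag choice (`-az`) and the slice name are mine, and note the missing trailing slash on the local directory, which makes rsync recreate `New_trace` itself under the remote `~/PL/`:

```python
def rsync_argv(local_dir, slicename, host, remote_dir):
    """Build an rsync push of a directory tree to a PlanetLab machine.
    Without a trailing slash on local_dir, rsync copies the directory
    itself into remote_dir (here: ~/PL/New_trace)."""
    return ["rsync", "-az", local_dir, f"{slicename}@{host}:{remote_dir}"]

argv = rsync_argv("~/New_trace", "upmc_test", "planet1.cs.huji.ac.il", "~/PL/")
print(" ".join(argv))
```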
Installing packages
• Since the PlanetLab installation is minimal, applications like man, make, gzip, tar, etc. are not installed
• Installing packages using yum
– curl http://boot.planet-lab.org/alpina/other-scripts/setup_yum.sh | bash
– yum -y install man gzip less gcc rpm-build
• Installing packages using apt-get
– You can install any rpms using apt-get (freshrpms.net)
86
Compiling your software on PlanetLab
• It is not recommended to compile your software on a PlanetLab machine, since the machines are already heavily loaded.
• A non-recommended (but possible) option: using make
• A better option: install a local PlanetLab node
– Install a Fedora Core 2 machine.
87
OneLab future(s)
[Diagram: from PlanetLab to OneLab (www.one-lab.org), with future directions including content, optical, sensors, emulation, wireless, federation, monitoring, virtualization, operation & management, IPR/legal/economics, archiving, benchmarking, and support for SAC projects and others]
See also:
- the COST ARCADIA activity
- the COST/NSF Workshop, April 19-20, Berlin
89
Evolution for PlanetLab
[Chart: maturity vs. time, from foundational research, through simulation and research prototypes, to small-scale testbeds, and finally a deployed Future Internet; the chasm between small-scale testbeds and deployment is a major barrier to realizing the Future Internet]
90
How to virtualize wireless?
• We understand how to virtualize an end-host
– Time-share a CPU
• We understand how to virtualize a wired link
– MPLS for, e.g., VPNs
– GENI will be doing this
• Less certain how to virtualize a wireless link
– There is interference
– One must make power and channel choices
– What if the node is poorly connected?
91
Secure, scalable measurements?
• Security
– Packet traces are highly sensitive
– How can an application track only its own packets?
– How can network providers set their own access policies for a common infrastructure?
• Scalability
– Can active measurements be mutualized among applications?
– How can information be shared among nodes to reduce measurement redundancy?
92