High Performance and Distributed NAS Server


Spinnaker Networks, Inc.
October 22-23, 2002
www.spinnakernet.com
301 Alpha Drive
Pittsburgh, PA 15238
(412) 968-SPIN
NFS Industry Conference
Storage Admin’s Problem
• "Everything you know is wrong"
  – … at least eventually
  – space requirements change
  – "class of service" changes
  – desired location changes
Solution
• System scaling
  – add resources easily
  – without client-visible changes
• Online reconfiguration
  – no file name or mount changes
  – no disruption to concurrent accesses
• System performance
Spinnaker Design
• Cluster servers for scaling
  – using IP (Gigabit Ethernet) for cluster links
  – separate physical from virtual resources
    – directory trees from their disk allocation
    – IP addresses from their network cards
  – we can add resources without changing the client's view of the system
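To make that separation concrete, here is a minimal sketch (invented names, not Spinnaker's API) of the two indirection tables it implies: virtual names stay fixed while their physical bindings change underneath.

```python
# Hypothetical sketch: virtual resources are stable names; the bindings
# to physical resources can change without the client's view changing.

class Cluster:
    def __init__(self):
        self.vfs_to_pool = {}   # directory tree (VFS) -> storage pool
        self.vip_to_nic = {}    # virtual IP address -> physical NIC

    def rebind_vfs(self, vfs, new_pool):
        # Data moves to a new pool; clients keep the same paths.
        self.vfs_to_pool[vfs] = new_pool

    def rebind_vip(self, vip, new_nic):
        # A virtual IP migrates to another card; clients keep the same address.
        self.vip_to_nic[vip] = new_nic

cluster = Cluster()
cluster.vfs_to_pool["/Users"] = "pool-raid5-a"
cluster.vip_to_nic["10.0.0.5"] = "server1/eth2"
cluster.rebind_vfs("/Users", "pool-raid1-b")      # no client-visible change
cluster.rebind_vip("10.0.0.5", "server2/eth0")    # no client-visible change
```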
Spinnaker Design
• Within each server: storage pools
  – aggregate all storage with a single service class
  – e.g. all RAID 1, all RAID 5, or extra-fast storage
  – think "virtual partition" or "logical volume"
Spinnaker Architecture
• Create virtual file systems (VFSes)
  – a VFS is a tree with a root dir and subdirs
  – many VFSes can share a storage pool
  – VFS allocation changes dynamically with usage
    – without administrative intervention
    – can manage limits via quotas
  – similar in concept to an AFS volume / DFS fileset
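A minimal sketch of the pool/VFS relationship just described, with invented names and a toy block counter rather than a real allocator: many VFSes draw from one pool, allocation follows usage, and quotas cap each VFS.

```python
# Hypothetical sketch: VFSes are not pre-partitioned; they grow into the
# shared pool as they are used, bounded only by quota and pool free space.

class StoragePool:
    def __init__(self, service_class, total_blocks):
        self.service_class = service_class   # e.g. "RAID 5"
        self.free = total_blocks

class VFS:
    def __init__(self, name, pool, quota):
        self.name, self.pool, self.quota, self.used = name, pool, quota, 0

    def allocate(self, blocks):
        # Allocation tracks usage automatically; no administrator action.
        if self.used + blocks > self.quota:
            raise IOError(f"{self.name}: over quota")
        if blocks > self.pool.free:
            raise IOError(f"{self.name}: pool out of space")
        self.used += blocks
        self.pool.free -= blocks

pool = StoragePool("RAID 5", total_blocks=1_000_000)
users = VFS("Users", pool, quota=600_000)
depts = VFS("Depts", pool, quota=600_000)   # quotas may oversubscribe the pool
users.allocate(120_000)
depts.allocate(80_000)
```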
Spinnaker Architecture
[Figure: two storage pools, each shared by several VFSes (e.g. Spin, Users, Depts, Eng, adam, ant, Bobs, Bach, net, disk), illustrating many VFSes drawing from one pool]
Spinnaker Architecture
• Create a global "export" name space
  – choose a root VFS
  – mount other VFSes, forming a tree
    – by creating mount-point files within VFSes
  – the export tree spans multiple servers in the cluster
    – VFSes can be located anywhere in the cluster
    – the export tree can be accessed from any server
    – different parts of the tree can have different classes of service
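A toy illustration of mount-point files, using invented names rather than the actual on-disk format: path resolution crosses into another VFS, possibly on another server, whenever it hits a mount point.

```python
# Hypothetical sketch: the export tree is stitched together from VFSes by
# mount-point files, so one name space can span VFSes on many servers.

class MountPoint:
    def __init__(self, target_vfs):
        self.target_vfs = target_vfs        # VFS mounted at this file

class VFSTree:
    def __init__(self, name, server):
        self.name, self.server = name, server
        self.entries = {}                   # name -> MountPoint (or plain entry)

def resolve(root, path):
    """Walk a path, crossing into another VFS at each mount-point file."""
    vfs = root
    for part in path.strip("/").split("/"):
        entry = vfs.entries.get(part)
        if isinstance(entry, MountPoint):
            vfs = entry.target_vfs          # may live on a different server
    return vfs

spin = VFSTree("Spin", server="server1")    # chosen root VFS
users = VFSTree("Users", server="server2")
spin.entries["Users"] = MountPoint(users)
assert resolve(spin, "/Users").server == "server2"
```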
Global Naming and VFSes
[Figure: the global name space, with a root VFS (Spin) whose Users and Depts subtrees mount VFSes (adam, ant, Bobs, Bach, Eng, net, disk) that live in storage pools on two different servers]
Clustered Operation
• Each client connects to any server
  – requests are "switched" over the cluster net
    – from the incoming server
    – to the server with the desired data
  – based on
    – the desired data
    – proximity to the data (for mirrored data)
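A sketch of that request switching, with an invented VLDB table and a toy proximity metric (the real SpinFS forwarding protocol is not shown):

```python
# Hypothetical sketch: the incoming server looks up the VFS in the VLDB
# (VFS location database) and either serves locally or forwards the
# operation over the cluster net to the server holding the data.

VLDB = {
    "Users": ["server2"],             # unreplicated VFS
    "Docs":  ["server1", "server3"],  # mirrored VFS: several locations
}

def choose_server(vfs, incoming, distance):
    """Pick the owning server; for mirrored data, prefer the closest copy."""
    return min(VLDB[vfs], key=lambda s: distance(incoming, s))

def handle_nfs_request(incoming_server, vfs, op, distance):
    target = choose_server(vfs, incoming_server, distance)
    if target == incoming_server:
        return f"serve {op} on {vfs} locally"
    return f"forward {op} on {vfs} over cluster net to {target}"

hops = lambda a, b: 0 if a == b else 1      # toy proximity metric
print(handle_nfs_request("server1", "Docs", "READ", hops))   # served locally
print(handle_nfs_request("server1", "Users", "READ", hops))  # forwarded
```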
Cluster Organization
Server/Network Implementation: Client Access
[Figure: clients reach each SpinServer over Gigabit Ethernet; a Network Process on each server handles TCP termination, VLDB lookup, and the NFS server over SpinFS; the SpinFS protocol runs between servers across a Gigabit Ethernet cluster switch; a Disk Process on each server handles caching and locking and reaches its disks over Fibre Channel]
Security
• At enterprise scale, security is critical
  – no implicit departmental "trust"
• Kerberos V5 support
  – for NFS clients
    – groups from NIS
  – for CIFS, using Active Directory
Virtual Servers
• A virtual server consists of
  – a global export name space (VFSes)
  – a set of IP addresses that can access it
• Benefits
  – an additional security firewall
    – a user guessing file IDs is limited to that VS
  – rebalance users among NICs
    – move virtual IP addresses around dynamically
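A small sketch of the virtual-server idea, with hypothetical names: pairing a name space with an address set is what limits a file-ID guesser to one virtual server.

```python
# Hypothetical sketch: a virtual server = export name space + the set of
# virtual IPs that may reach it.  A request that guesses a valid file ID in
# another virtual server is still rejected: it arrived on the wrong address.

class VirtualServer:
    def __init__(self, name, root_vfs, addresses):
        self.name = name
        self.root_vfs = root_vfs            # global export name space
        self.addresses = set(addresses)     # virtual IPs serving this VS

    def accepts(self, dest_ip):
        return dest_ip in self.addresses

eng = VirtualServer("eng", root_vfs="EngRoot", addresses={"10.0.1.5", "10.0.1.6"})
hr  = VirtualServer("hr",  root_vfs="HRRoot",  addresses={"10.0.2.5"})

assert eng.accepts("10.0.1.5")
assert not hr.accepts("10.0.1.5")   # HR data is unreachable via eng's addresses

# Rebalancing: a virtual IP can migrate to a less-loaded NIC at any time
# without changing which virtual server it belongs to.
```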
Performance – single stream
• 94 MB/sec read
  – single-stream read, 9K MTU
• 99 MB/sec write
  – single-stream write, 9K MTU
• All files much larger than cache
  – real I/O scheduling was occurring
Benefits
• Scale a single export tree to high capacity
  – both in terms of gigabytes
  – and ops/second
• Keep server utilization high
  – create VFSes wherever space exists
  – independent of where data is located in the name space
• Use an expensive class of storage
  – only when needed
  – anywhere in the global name space
Benefits
• Use third-party or SAN storage
  – Spinnaker sells storage
  – but will support LSI storage and others
• Kerberos and virtual servers
  – independent security mechanisms
  – cryptographic authentication and
  – IP address-based security as well
Near Term Roadmap
• Free data from its physical constraints
  – data can move anywhere desired within a cluster
• VFS move
  – move data between servers online
• VFS mirroring
  – mirror snapshots between servers
• High availability configuration
  – multiple heads supporting shared disks
VFS Movement
• VFSes move between servers
  – balance server cycles or disk-space usage
  – allows servers to be easily decommissioned
• Move performed online
  – NFS and CIFS lock/open state preserved
  – clients see no changes at all
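The slides do not spell out the move algorithm, so the following is only a plausible phased sketch with invented names: bulk-copy from a snapshot, apply the remaining delta, then briefly transfer open/lock state and update the location database so clients never notice.

```python
# Hypothetical sketch of an online VFS move (toy in-memory model, not
# Spinnaker's implementation).

class Server:
    def __init__(self, name):
        self.name = name
        self.vfs_data = {}       # vfs -> dict of file -> contents
        self.lock_state = {}     # vfs -> NFS/CIFS open and lock records

def move_vfs(vfs, src, dst, vldb):
    # Phase 1: bulk copy from a snapshot while the VFS stays fully writable.
    snapshot = dict(src.vfs_data[vfs])
    dst.vfs_data[vfs] = dict(snapshot)
    # Phase 2: copy the (small) delta written since the snapshot.
    delta = {f: c for f, c in src.vfs_data[vfs].items() if snapshot.get(f) != c}
    dst.vfs_data[vfs].update(delta)
    # Phase 3: brief quiesce; transfer open/lock state and flip the VLDB entry.
    dst.lock_state[vfs] = src.lock_state.pop(vfs, {})
    vldb[vfs] = dst.name                 # cluster now switches requests to dst
    del src.vfs_data[vfs]

vldb = {"Users": "server1"}
s1, s2 = Server("server1"), Server("server2")
s1.vfs_data["Users"] = {"/Users/adam/f": "v1"}
s1.lock_state["Users"] = {"/Users/adam/f": "locked by client A"}
move_vfs("Users", s1, s2, vldb)
assert vldb["Users"] == "server2" and "Users" in s2.lock_state
```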
VFS Move
VFS Mirror
• Multiple identical copies of a VFS
  – version-number based
    – provides efficient update after a mirror is broken
  – thousands of snapshots possible
  – similar to AFS replication or NetApp's SnapMirror
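A toy model of version-number-based mirroring (invented names): each entry records the version at which it last changed, so resynchronizing after a broken mirror ships only the newer entries.

```python
# Hypothetical sketch: ship only entries newer than the mirror's
# last-synced version, rather than re-copying the whole VFS.

class Mirror:
    def __init__(self):
        self.version = 0
        self.data = {}                       # file -> (contents, version)

def update_mirror(source, mirror):
    """Transfer only entries that changed after the mirror's version."""
    changed = {f: (c, v) for f, (c, v) in source.data.items()
               if v > mirror.version}
    mirror.data.update(changed)              # cheap even after a long break
    mirror.version = source.version

src, dst = Mirror(), Mirror()
src.data = {"a": ("x", 1), "b": ("y", 2)}
src.version = 2
update_mirror(src, dst)                      # full initial transfer
src.data["b"] = ("y2", 3); src.version = 3
update_mirror(src, dst)                      # ships only "b"
assert dst.data["b"] == ("y2", 3)
```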
Failover Pools
• Failover based upon storage pools
  – upon server failure, a peer takes over the pool
  – each pool can fail over to a different server
  – don't need 100% extra capacity for failover
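A sketch of per-pool failover with a hypothetical configuration table: because each pool names its own peer, one server's failure spreads its pools across several survivors instead of doubling the load on one standby.

```python
# Hypothetical sketch: each storage pool has its own designated failover
# peer, so no single server needs 100% spare capacity.

FAILOVER = {                # pool -> (primary server, failover peer)
    "P1": ("server1", "server2"),
    "P2": ("server1", "server3"),   # server1's pools split between two peers
    "P3": ("server2", "server3"),
    "P4": ("server3", "server1"),
}

def owners_after_failure(failed):
    """Compute which server owns each pool once `failed` goes down."""
    return {pool: (peer if primary == failed else primary)
            for pool, (primary, peer) in FAILOVER.items()}

# If server1 fails, P1 moves to server2 and P2 to server3; each survivor
# absorbs only part of the failed server's load.
assert owners_after_failure("server1") == {
    "P1": "server2", "P2": "server3", "P3": "server2", "P4": "server3",
}
```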
Failover Configuration
[Figure: four storage pools (P1–P4) spread across SpinServer 1, SpinServer 2, and SpinServer 3, with each pool able to fail over to a different peer]
Additional Benefits
• Higher system utilization
– by moving data to under-utilized servers
• Decommission old systems
– by moving storage and IP addresses away
– without impacting users
• Change storage classes dynamically
– move data to cheaper storage pools when possible
• Inexpensive redundant systems
– don’t need 100% spare capacity
Extended Roadmap
• Caching
  – helps in MAN / WAN environments
  – provides high read bandwidth to a single file
• Fibre Channel as an access protocol
  – simple, well-understood client protocol stack
• NFS V4
Summary
• Spinnaker’s view of NAS storage
– network of storage servers
– accessible from any point
– with data flowing throughout system
– with mirrors and caches as desired
– optimizing various changing constraints
– transparently to users
Thank You
Mike Kazar
CTO
Design Rationale
• Why integrate move with the server?
  – VFS move must move open/lock state
  – move must integrate with snapshots
  – the final transition requires careful locking at source and destination servers
Design Rationale
• Why not stripe VFSes across servers?
  – distributed locking is very complex
    – and very hard to make fast
  – enterprise loads have poor server locality
    – as opposed to supercomputer large-file patterns
  – failure isolation
    – limits the impact of serious crashes
    – partial restores are difficult on stripe loss
Design Rationale
• VFSes vs. many small partitions
  – can overbook disk utilization
  – if 5% of users need 2X storage within 24 hours
    – can double everyone's storage, or
    – can pool 100 users in one storage pool with 5% free
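The arithmetic behind that last bullet, worked with hypothetical numbers (100 users at 10 GB each):

```python
# Worked example of the overbooking argument: 100 users, 10 GB each,
# 5% of them doubling their usage within a day.

users, per_user_gb = 100, 10
growers = int(users * 0.05)                  # 5 users need 2x storage

# Option 1: dedicated partitions sized for the worst case (100% headroom).
dedicated_gb = users * per_user_gb * 2       # 2000 GB provisioned

# Option 2: one shared storage pool with just 5% free space.
pooled_gb = users * per_user_gb * 1.05       # 1050 GB provisioned
needed_gb = users * per_user_gb + growers * per_user_gb   # 1050 GB used

assert needed_gb <= pooled_gb                # the pool absorbs the growth
print(f"dedicated: {dedicated_gb} GB, pooled: {pooled_gb:.0f} GB")
```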