Introduction - davelyon.org


Creating a Campus NT Network using NT4 and OpenVMS
ES166
Presenters
David Lyon ([email protected])
Systems Analyst
Cal Poly Pomona University
Hari Singh ([email protected])
Information Technology Consultant
Cal Poly Pomona University
Introduction

This session presents a case study of the
implementation of a mixed Windows NT/OpenVMS
environment campus wide. We will discuss the details
related to the implementation including design,
testing, deployment and technical issues. We will also
discuss security, policy/procedures and plans for
future growth.
Rationale

The project grew out of the idea that campus users
and technicians would benefit from a unified campus
computing environment.
Linking the OpenVMS and NT environments was a real
possibility, with the benefit of a unified
environment and single sign-on for users.
Mail, database, interactive, Web and NT services
could all be reached from a single account, with the
details hidden from the user.
Motivators

Client license hassle with PWV5
Disparate network environments
Duplication of effort was prevalent
Threat of more islands forming
Central IT department becoming isolated from
departments (problem of trust)
Multiple authentication needed for VMS (mail, etc.)
and file sharing
Many spare DEC Alpha servers existed (64-bit VMS)
NT applications due to be deployed (Citrix,
PeopleSoft, etc.)
Status Quo

There were 20-30 DEC Alpha and VAX servers
scattered about campus running OpenVMS and
PATHWORKS.
Departments were implementing Windows NT and
had little management privilege on the central IT
provided DEC servers.
Each server was standalone and required separate
account management. The servers were getting little
to no use due to lack of collaboration and resources.
Servers became a nightmare to access due to license
problems (PATHWORKS client).
Proposed Environment

[Diagram: central IT department hosting the NT PDC, an NT BDC, the
central backup server and the VMS authentication cluster, connected by
switch to departmental VMS BDCs, a VMS utility server, and VMS
group-share servers.]
Initial Design and Technical
Approach

Design based on a single campus wide NT domain
PDC would reside on a server maintained by the
central IT department
Campus technicians would share in the administration
Environment would run over TCP/IP and span
subnets
Existing hardware and software would be used with
minimal expenditures needed
Initial Design and Technical
Approach

Large spare parts cache existed and servers could
still be upgraded (64 to 256MB)
Cooperation and collaboration needed for success
Planned downtime of servers, services and network
hardware was essential
A stable network infrastructure was key
Initial Design and Technical
Approach

Domain - a single domain, campus wide. Trusting
domains would eventually be phased out.
Naming - multiple WINS servers would be
configured to be replication partners.
PDC - the PDC would be a DEC Alpha box running
PATHWORKS Advanced Server. A BDC would be
deployed in the same subnet. It was decided later
that the PDC/BDC should be Windows NT based.
Initial Design and Technical
Approach

Alphas - DEC Alpha servers would be loaded with
OpenVMS 7.1/2 and Advanced Server and would act
as department file servers and a repository of
"group" personal shares and other data. Advanced
Server would enable them to be BDCs, manageable
using the NT GUI tools (Server Manager, etc.).
DFS - Microsoft Distributed File Service would be
installed on the PDC and main BDC and would be
used for mapping common and personal shares.
Initial Design and Technical
Approach

Backups - data that needed to be included in the
central IT backup rotation (off site, heavily monitored
and managed) would be stored on the DEC Alphas. A
backup system was already in place that used
DECnet to back up files across the network to a
central server with a high capacity tape drive. Central
IT staff would handle those backups.
Initial Design and Technical
Approach

Single Signon - Advanced Server would be installed
on the central DEC Cluster (central IT department).
The software would be tuned to have few users and
would primarily be used for single signon.
VMS logins would validate passwords against
Windows NT.
Authentication Flow
Initial Testing

The proposed implementation would be tested on a
subset of hardware to determine workability. The
testing would be done by the team that created the
proposal.
Initial Testing - Preparation

Early versions of PATHWORKS Advanced Server used
(Version 6.0)
Existing DEC 3000 servers upgraded to OpenVMS 7.1
(needed to run Advanced Server s/w)
An Intel box running NT4/SP3 was prepared to act as
PDC
DFS was installed on the PDC.
Initial Testing - Results

Advanced Server - functioned as desired with no
major issues.
Performance - some testing was done on the
DEC3000s but no performance issues were identified.
Single Signon - the feature worked, though issues
arose later that were ultimately resolved.
Dave Client – (from www.thursby.com) did work
with Advanced Server. Dave is an SMB file-sharing
client for the Macintosh.
Initial Testing - Results

DFS - DFS (Distributed File System) tested to
determine if it would interoperate with Advanced
Server. No issues.
PW Conversion Utility - this utility designed to
merge existing PWV5 shares/users into the domain,
functioned well.
PDC/BDC – communication (synchronization, etc.)
between the NT and VMS box worked fine (same
subnet). Remote VMS BDC also worked fine.
Initial Testing - Report

The results were reported to management. Testing
participants were impressed with the results and
were confident that this could go forward campus
wide.
System Configuration

System configuration (H/W and software) changed as
the project evolved but we report the current
baseline here.
System Configuration - DEC
Alpha 2000/3000

OpenVMS 7.2, TCP/IP 5.0-9, TNT (OpenVMS mgmt
station)
PATHWORKS Advanced Server 6.0B
DECnet over TCP/IP (for backups)
DFO (Dec File Optimizer 2.4)
Perl/UNIX utilities (easier management)
Backup utilities
System Configuration - Alpha
1000 Backup/Data Server

Same as DEC Alpha 3000
DEC Scheduler
Pathworks for OpenVMS (Macintosh)
High capacity tape, disk farm
System Configuration - Cluster

OpenVMS 7.1, TCP/IP 4.2
PATHWORKS Advanced Server 6.0B
Perl/UNIX utilities
OSU 3.0a Web Server (for Web management of
NT/Adv. Server)
SSLeay 0.8.1 for secure connections to Web
Note – we had to go with 7.1 and TCP/IP 4.2. This is a
mainframe system running many other applications.
System Configuration - NT DC

Windows NT4/SP6
Services for UNIX (SFU 1.0A)
Timbuktu Pro 32 (2.0)
Microsoft DFS (4.1)
WINS (Windows Internet Name Service)
Diskeeper 4
Redundancy

PDC has dual drives which will be mirrored
BDC can assume PDC role in 30 minutes
Hot spare maintained for DEC 3000s
Utility server (alpha 1000) under Compaq
maintenance
Data stores can be moved quickly to spare servers in
the event of serious hardware failure
Configuration Procedure - DEC
Alpha servers

Install OpenVMS, TCP/IP, DECnet over IP, Perl,
Advanced Server
Configure TCP/IP common services, including PWIP
drivers
Configure Advanced Server as follows...
  $ @sys$update:pwrk$config
      Basic Configuration
      Transport Configuration
      Main Configuration Menu
Configuration Procedure - DEC
Cluster

Install Advanced Server and configure as BDC
(previous slides)
Cluster already configured except for Advanced
Server
VMS accounts could be set to authenticate against
NT using the following command…
$ mcr authorize modify <user>/flags=extauth
Configuration Procedure PDC/BDC

Install Windows NT SP6, SFU, Timbuktu, DFS,
Diskeeper and WINS
Configure DFS
Configure Timbuktu
Configure WINS replication partners
Close security holes
Deployment

Implementation would be phased in. The first order
of business was to move the PDC from an outside
department to the central IT department where it
would reside in the main campus computer room
along with other central servers.
Implementation Phases

VAX6440 Conversion – transition the old VAX6440
running PATHWORKS version 5 to an Alpha 1000
running Advanced Server. The PATHWORKS upgrade
utility was used. Some serious planning was required
but the end result was quite successful.
Alpha 1000 PDC – promote the 1000 to PDC. There
was no WINS server (at the time) in the subnet and
we saw this as a significant problem. Intel PDC was
used instead. (See Technical Issues)
Implementation Phases

Upgrade IT central server to 7.1/Advanced server
(BDC)
Install Advanced Server on main cluster
Create mechanism to merge-in existing Cluster
accounts and enable single signon
Create tools that Help Desk can use to monitor
network and to change passwords
Convert VMS servers to run DECnet over IP
Implementation Phases

Transition PDC to Intel/NT in central IT computing room
Publish management policies
Install DFS (Distributed File Service) on central PDC
Enable management team to add NT accounts via Web
Transition to new Intel/NT PDC (first was a loaner)
Implementation Phases

Configure other DEC3000 servers as BDCs across
campus. The DEC Alphas are configured quickly and
in a uniform way via VMS command procedure.
Upgrade Advanced Server to 6.0B on all DEC servers
Create account audit tool
Set up group directory structure for personal and
common share areas
Install Services for UNIX, Timbuktu and Diskeeper on
PDC
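One phase above is an account audit tool. As a hedged sketch of the idea
(the actual tool ran on OpenVMS; the function name and sample accounts
below are invented for illustration), the core is a set comparison
between the NT domain's account list and the campus directory:

```python
def audit_accounts(nt_accounts, directory_accounts):
    """Return (NT accounts with no directory entry, directory users
    with no NT account), case-insensitively, sorted for the report."""
    nt = {a.lower() for a in nt_accounts}
    dirs = {a.lower() for a in directory_accounts}
    return sorted(nt - dirs), sorted(dirs - nt)

# Example: 'olduser' is an orphaned NT account, 'newhire' has no NT account
orphans, missing = audit_accounts(
    ["jsmith", "mlee", "olduser"],
    ["jsmith", "mlee", "newhire"],
)
```

The real audit ran quarterly against an automated report (see Security,
later in this session).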
Implementation Phases

Relocate off-site DEC3000 servers to main computer
room, utilizing Cisco VLAN
DEC3000s to remain dedicated data servers to their
departments but would be easily accessible
DEC3000 Alphas self-maintained due to variety of
spare parts
Configure an Intel/NT BDC in the same subnet as the
PDC (NT PDC hot spare)
Technical Issues

Adv Server vs. Intel as PDC - we felt it was wise to
ultimately use Intel/NT as the PDC. We had received
mixed recommendations. Advanced Server is listed as
an NT 3.51 server in Server Manager.
DCE/DFS traffic became an issue.
Single Signon Failure - required Advanced Server
6.0-ECO2. Also, in cluster where Adv. Server not
running on all nodes, need
“def/sys/exec pwrk$acme_server node1,node2,nodex”
in startup. Exclude nodes Adv. Server not running on.
Technical Issues

Administrator Notifications – dial-in WinNT user
complained of getting CPP Administrator broadcasts
while at home.
Advanced Server License - existing PAK was only
good for PATHWORKS Advanced Server 6, not the
Advanced Server that was bundled into OpenVMS
7.2. We opted to stay with version 6 which did not
include a registry or long file names.
Downgrading – it was difficult to downgrade from
Adv. Server 7.2 to 6.0B. We had to deassign logicals
and delete files so that the 6.0B install would work.
Technical Issues

Admin tools "set file" command - this tool
generally worked well for setting NT security but we
uncovered a bug with a simple workaround.
There is seemingly no way to remove a rights holder
(e.g. Everyone -> Change). You can change
Everyone to “NONE” but that prevents access.
Workaround is to make sure parent directory has
desired permissions before creating subdirectory.
Technical Issues

Admin Show File - there was an issue with the
security view listing inaccurate information. Compaq
suggested removing V5 security (pwrk$deleteace)
but this was of no help. It appears that later patches
to Advanced Server may have corrected this.
Advice was to trust security as viewed from Windows
NT Properties.
Technical Issues

Directory Caching Bug - a bug in Advanced Server
prevents large directories from being cached
accurately.
Workaround is to disable it at cost of performance.
This is done in lanman.ini.
VMS System Disk Sharing - VMS disks can all be
mapped (disk$). This needs to be disabled in
lanman.ini! The noautoshare keyword in
lanman.ini was a challenge to set due to the many
drives on the cluster.
Performance – DFO (defrag) on OpenVMS made a
noted improvement.
Technical Issues

PW6 Naming Problem – a name cached somewhere
in the local subnet prevented its use. Clearing caches
(WINS, etc) made no apparent difference. Name was
OK outside of subnet.
Start Pending – this state for the browser in
Advanced Server is normal if another machine is
browse master.
FTP/RSH Passwords – bug causes FTP/RSH to use
uppercase password. Workaround is to set NT
password to uppercase.
Technical Issues

WINS Corruption – browse problems appeared to
be caused by WINS. Remote users could not “find”
servers until WINS was rebuilt. Also, local browsing
was broken due to network mask problem. Cluster
members did not show up in neighborhood.
WIN95 Access – win95 clients needed to join
domain to get access to a member server.
Timbuktu Pro – client keystrokes are not sent to the
login screen properly, so users cannot log in.
Technical Issues

Renaming VMS Server – there were no serious
issues in renaming of a server. One does need to
clear all DECnet caches and node registrations
(@sys$manager:net$configure). The ncl tool
can be used to clear cache. This applies if using
DECnet over IP.
PWIP and DECnet – must have PWIP drivers
loaded (TCPIP$CONFIG) for DECnet over IP.
Technical Issues

Single Signon Failure – according to Compaq
technical support, if NT authentication fails, you must
set the VMS authorize flag (/flags=noextauth). There
is no other way to get OpenVMS logins working
without privileges.
Tech Support also said that a BDC must take over
domain if PDC fails in order for single signon to work
without hanging. Your NT network MUST be stable.
Technical Issues

Admin Password Changes – the issue came up
but there is apparently no way to limit Administrators
that can modify a password. Compaq suggested
using other groups for local administration. This
turned into a BIG issue.
Technical Issues

Conversion Utility – the PATHWORKS V5 to V6
conversion utility functioned well. Make sure you read
the log file and make necessary corrections. Don’t
forget to remove V5 security after you deem
conversion successful.
Extraneous Shares – you may wish to remove
extraneous shares (PWLIC, etc) after Advanced
Server installation.
Technical Issues

PDC Change – a major problem arose upon
changing the PDC from a remote server to a central
server. Major corruption occurred on NT and VMS
boxes. An Advanced server machine took over PDC
but the remaining servers in the domain were badly
confused (multiple PDCs). In the end,
reconfigurations were needed on almost all servers.
The same PDC promotion was done successfully
later but with ALL VMS BDCs shut down FIRST.
Technical Issues

VMS SAM File Corruption – the cause was uncertain.
A technician at Compaq indicated it was an NT
architecture issue possibly related to network
outages; our network had been going up and down,
which probably accounted for the corruption.
Compaq’s solution was to rebuild the SAM. Compaq
also recommended configuring Advanced Server with
a minimum of 10 clients.
Technical Issues

The corruption incidents have greatly diminished as
of late, possibly due to the network being more
stable. Note that a server restart may work in place
of the SAM rebuild; that should be tried first.
Connection to PDC Fails – Advanced Server
problem seeing PDC. Compaq recommends using
LMHOSTS instead of WINS. An ECO is in the works.
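For reference, an LMHOSTS entry that preloads the PDC's NetBIOS name and
associates it with the domain looks like the following (the address,
machine name, and domain name here are placeholders, not our values):

```
# LMHOSTS on each server needing the static PDC mapping
131.0.0.10    CAMPUSPDC    #PRE    #DOM:CAMPUSNT
```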
Political Issues

Bias against Compaq/OpenVMS
Password changes and single signon
Staffing limitations, lack of interest
Convincing management, although blessing was
finally received
Security

Issues discussed at monthly team meetings
Systems well patched and kept up to date
Central IT servers locked down so that only
central IT staff can manage them (except for the Intel
PDC/BDC)
Security

Administrators of other department servers will
employ reasonable security procedures
Central group shares created to enable group
collaboration while keeping individual user directories
private
Quarterly, scheduled audits are done on the NT user
space. The account administration staff will audit the
automated report results.
Security

Some remote administration of OpenVMS servers was
automated via DECnet over IP using a non privileged
account. This allows for rapid duplicate changes on
servers.
On OpenVMS, disable all disks that you don’t want to
be mapped (noautoshare keyword) in lanman.ini
Passwords auto-generated for new permanent
accounts, hard to guess
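As an illustration of the auto-generation idea (our generator ran on
OpenVMS; this Python version using the standard secrets module is purely
a sketch, not the production code):

```python
import secrets
import string

def generate_password(length=12):
    """Generate a hard-to-guess alphanumeric initial password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

initial_pw = generate_password()  # assigned to the new permanent account
```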
Policies and Procedures

NT administration team holds monthly meetings to
address issues
Upgrades to core NT servers may involve a special
team effort
Listproc list is used for communication
Naming convention document is used in naming of
resources
Policy document is maintained
Monitoring and Backups

Monitoring for the DEC Alphas is done via the backup
job. This utilizes DECnet over IP to keep track of
server status to a central indexed file.
A report job runs on the central server and lists
anomalies and failed backups. The operations and
systems staff are notified. A Web CGI tool exists to
allow the Help Desk to monitor servers via SNMP.
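The anomaly report's logic can be sketched as follows. The record format
and function names are invented for illustration; the real status data
lived in a VMS indexed file updated over DECnet and the job was not
written in Python.

```python
from datetime import datetime, timedelta

def find_anomalies(records, now, max_age_days=2):
    """records: iterable of 'server status YYYY-MM-DD' lines.
    Flags servers whose last backup failed or is stale."""
    anomalies = []
    for line in records:
        server, status, datestr = line.split()
        last = datetime.strptime(datestr, "%Y-%m-%d")
        if status != "OK":
            anomalies.append((server, "backup failed"))
        elif now - last > timedelta(days=max_age_days):
            anomalies.append((server, "backup stale"))
    return anomalies

report = find_anomalies(
    ["alpha1 OK 2000-05-01", "alpha2 FAIL 2000-05-01", "alpha3 OK 2000-04-20"],
    now=datetime(2000, 5, 2),
)
```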
Monitoring and Backups

Backups run at various times based on the central
configuration file (accessed via DECnet over IP and
proxy). Backup completion times are kept in the
configuration file.
Listing files are kept on the remote servers
Central NT servers are not used for data stores and
are backed up quarterly via Ghost by Symantec
(www.symantec.com)
Other servers use the “backup” share on the Alpha
1000
Management and Tools

The documented management and general security
policies serve as the guideline
The central account administrator will be responsible
for changing passwords and can also create accounts
The account administrator is the only one who is
authorized to synchronize the NT and VMS account
passwords
Management and Tools

A Web CGI was written to allow either a user or an
administrator to create an NT account
When the user initiates it, the CGI collects the user’s
credentials (must provide to access CGI) and
department and creates the corresponding NT user
and puts the user in the appropriate group.
Personal share space will be automatically created
Management and Tools

The CGI automation minimizes account management effort
There would be a gradual phase-in of accounts with
this method
Although still being discussed, it was recently decided
that all NT accounts should be added at one time.
Directory Services

Web CGI account creation is reliant on LDAP/X500
directory services
Existing directory is running Compaq X500 and
Infobroker LDAP
The CGI looks in LDAP for the requester’s
department. It then maps that to a configuration file
to determine the associated NT group name
(department names are not to our standard).
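The mapping step above can be sketched like this. The department and
group names are made up for illustration; the real CGI was not Python
and read its table from a configuration file rather than an inline dict.

```python
# Configuration table mapping directory department strings (which do not
# follow our naming standard) to standardized NT group names.
DEPT_TO_GROUP = {
    "College of Engineering": "ENG",
    "University Library": "LIB",
}

def nt_group_for(department):
    """Return the NT group for a directory department, or None if unmapped."""
    return DEPT_TO_GROUP.get(department)
```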
Directory Services

The backup job on each DEC Alpha server will create
the group share/personal share based on a request
file that is placed by the Web CGI (or other).
If that group has yet to be defined or created, an
Administrator is notified via Email to edit the file so
that the mapping exists.
The job will retry the next day and will create the
group directory and personal space.
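The share-creation pass described above can be sketched as follows
(paths, names, and the request format are illustrative; the real job ran
as a VMS command procedure, not Python):

```python
import os

def process_requests(requests, known_groups, root):
    """requests: (user, group) pairs dropped by the Web CGI.
    Creates the personal directory under the group share when the group
    is known; otherwise queues the request for admin follow-up so the
    job can retry the next day."""
    created, unmapped = [], []
    for user, group in requests:
        if group in known_groups:
            path = os.path.join(root, group, user)
            os.makedirs(path, exist_ok=True)  # group dir created as needed
            created.append(path)
        else:
            unmapped.append((user, group))    # admin notified via email
    return created, unmapped
```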
Macintosh Users

Appleshare services are still maintained. We are in
the early stages of looking into the “Dave” client for
Mac users. This would enable those users to connect
to NT services, eliminating the need to provide
Appleshare services.
Dave does work with Advanced Server
Services Presently Utilized

Department data stores
Software upgrades
Document sharing
Remote printing
Exchange client
Peoplesoft testing
Future Enhancements

Exchange calendaring
Group discussion lists
Dave Client – de-install PW for Macintosh Services
Automated license tracking
Upgrade licenses and convert to Advanced Server
PMDF mail client
Citrix Metaframe
Peoplesoft deployment
COM (Component Object Model) OpenVMS
Outcome and Summary

With patches and avoidance of known “gotchas”, the
environment has proven to be stable
Single signon works well
No major issues since baseline configuration
We are seeing campus wide benefits to this
environment and are able to recommend this type of
environment to others as a cost effective solution to
distributed networking needs.