XP 102 – MetaFrame XP in the Wild: Notes from the Field


Citrix MetaFrame XP for Windows
Agenda
Intro to MetaFrame XP

What is MetaFrame XP?

What’s New in MetaFrame XP?

How is MetaFrame XP Packaged?
New Terms and Architectural Concepts
MetaFrame XP Features (technically speaking)
MetaFrame XP Advanced Management
Time permitting:

Migrating to MetaFrame XP

Useful Command Line Utilities
Intro to MetaFrame XP
What is MetaFrame XP?

The next generation of Citrix's MetaFrame application serving software.

The product of a ground-up reassessment by our engineers, coupled with your input on what enterprise-class server-based computing should be.

Built to eliminate current and future obstacles to speed, performance, and control while maintaining backward compatibility for ease of migration.

Everything you have seen in MetaFrame 1.8/FR1 and MORE.
What’s New in MetaFrame XP?

Citrix Management Console

Application-based Load Management

System Monitoring & Analysis

Application Packaging & Delivery

Network Management

Printer management

Shadowing enhancements

Client time zone support

Active Directory support

NFuse ready

Enhanced scalability

Up to 1,000 servers in server farm!
What’s New in MetaFrame XP?

Reduced IT administration

Reduced network traffic

Centralized license management
• Enterprise-wide license pooling
• Enhanced license availability

Citrix administrator accounts
• Read/Write & Read Only access

ICA client extensibility

MetaFrame and WinFrame interoperability
• 'Mixed' or 'Interoperability' mode
How is it Packaged?

MetaFrame XPe: the enterprise application serving infrastructure for Net-based Windows 2000 environments requiring extensive scalability, rapid application delivery, and robust management—enabling unparalleled command and control.

MetaFrame XPa: the advanced application serving platform for Windows 2000 servers and beyond, designed for growing organizations that need to maximize application availability and manageability across the Net—all from a single point.

MetaFrame XPs: the rapid application serving system designed to extend the reach of Windows 2000 Server to any device, any departmental workgroup connection—wired, wireless, Web.
MetaFrame XP Family Comparison (by functionality and size/scope of installation)

Enterprise Application Serving Infrastructure
• Corporate-wide deployment
• 20-1000+ servers in a farm

Advanced Application Serving Platform
• Multiple departments and applications
• 2-100 servers in a farm

Base Application Serving System
• Workgroup or specific application
• Individual and non-load balanced servers
How is it Packaged?

MetaFrame XPs
• Base MetaFrame XP functionality

MetaFrame XPa
• Base MetaFrame XP functionality
• Load Management

MetaFrame XPe
• Base MetaFrame XP functionality
• Load Management
• System Monitoring & Analysis
• Application Packaging & Delivery
• Network Management

Licensed per Connection!
Deploy as many servers as you need…
New Terms and Architectural Concepts
New Terms

IMA: Independent Management Architecture

Data Store: Central configuration database

LHC: Local Host Cache (persistent data cache that exists on each server)

Data Collector: Manages dynamic data and client Enumeration/Resolution (replaces the ICA Master Browser)

Zone: Deliberate grouping of MetaFrame XP servers, each with its own Data Collector

CMC: Citrix Management Console (replaces the MetaFrame 1.8 administration tools)
What is IMA? Why is it important?
IMA…

Is a TCP-based, event-driven messaging bus used by MetaFrame XP servers.

Is a modular and easily extensible subsystem capable of supporting current and future MetaFrame products and tools.

Overcomes the scalability constraints of the MetaFrame 1.8 platform, allowing MetaFrame XP to scale environments to new levels.

Provides the capability to administer any farm from a central tool (the CMC) that doesn't have to run on a MetaFrame server.
Independent Management Architecture

[Diagram: the Citrix Management Console manages MetaFrame XP servers (NT 4.0 TSE and Windows 2000) across the IMA bus, backed by a central Data Store (SQL, Oracle, or Access) and subsystems for Load Management, Application Packaging & Delivery, and System Monitoring & Analysis]
MetaFrame Server Farms
MetaFrame 1.8:

Server Farms in MetaFrame 1.8 are a collection of servers on a given broadcast segment that are managed as a single unit.

Server Farms in MetaFrame 1.8 may also be defined by sharing a common 'Application Set'.

MetaFrame XP:

The Server Farm in MetaFrame XP defines the scope of management as well as the 'Application Set'.

Server Farms in MetaFrame XP are designed to operate across segments and are managed through the Citrix Management Console.
MetaFrame 1.8/ICA Browser
MetaFrame 1.8/ICA Browser Attributes

Server Farms cannot span segments.

Each segment has ONE ICA Master Browser.

The ICA Master Browser stores dynamic data for the segment and handles Enumeration/Resolution for ICA clients.

Persistent data is stored in the registry (farm membership, licenses, published applications, etc.).

[Diagram: two segments (10.1.1.x hosting Farms 1-3, 10.1.2.x hosting Farms 4-6), each with its own ICA Master Browser and administration tools (MFAdmin, PAM, etc.)]
MetaFrame 1.8/ICA Browser
MetaFrame 1.8/ICA Browser Attributes

Persistent data is read by the ICA Browser/PN Service at startup.

Cross-server configuration tools read/write to the registry on all servers.

Servers communicate via UDP broadcasts, remote registry calls, RPCs, etc.
MetaFrame XP/IMA
MetaFrame XP/IMA Attributes

Server farms can span segments and can contain multiple zones.

Each zone has ONE Data Collector.

Persistent farm data is stored in a shared, persistent Data Store.

Data Collectors store dynamic data and handle Enumeration/Resolution for ICA clients.

[Diagram: a server farm with two zones, each zone with its own Data Collector (DC); member servers each hold a Local Host Cache (LHC); the CMC and the central Data Store (DS) serve the whole farm]
MetaFrame XP/IMA
MetaFrame XP/IMA Attributes

Servers communicate via IMA (TCP).

The management tool communicates via IMA to the Data Store and member servers.

Persistent data is read from the DS at startup and cached in the Local Host Cache.
Data Store
Attributes of the MetaFrame XP Data Store (DS)

The DS is a repository (database) which contains persistent, farm-wide data, such as member servers, licenses in the farm, zone configs, printers/drivers, published apps, load evaluators, trust relationships, etc.

Each MetaFrame XP farm shares one Data Store.

All information in the DS is stored in an encrypted binary format (except indexes).

A farm can operate for 48 hours if the DS is unavailable; after that, licenses time out and no new users can connect.

A DS can be an Access, MS SQL, or Oracle database.

A DS can be configured for either 'Direct' or 'Indirect' access.
Data Store in 'Direct' Mode
Attributes of Direct Mode

Uses a Microsoft SQL 7/2000 or Oracle 7.3.4/8.0.6/8.1.6 database.

Servers initialize directly from the DS via ODBC.

Servers maintain an open connection to the database for consistency checks.

[Diagram: member servers, each with an LHC, connecting directly to the DS]
Data Store in 'Indirect' Mode
Attributes of Indirect Mode

Uses a JET 4.x, Microsoft SQL 7/2000 or Oracle 7.3.4/8.0.6/8.1.6 database.

If using the JET database, MF20.MDB lives on the 'IMA host' server.

Member servers communicate through the 'IMA host' server to read/write to the data store.

[Diagram: member servers, each with an LHC, reading and writing the DS through the 'IMA host' server (indirect mode)]
Local Host Cache (LHC)
Attributes of the Local Host Cache

A subset of the Data Store, stored on each individual server (IMALHC.MDB).

Contains basic info about servers in the farm, published apps and properties, trust relationships, and server-specific configs (product code, SNMP settings, load evaluators, etc.).

Used for initialization if the DS is down.

Used for ICA client application Enumeration.
Data Collectors
Attributes of Data Collectors

There is a DC for each Zone.

A DC stores dynamic information about a farm, such as servers up/down, logons/logoffs, disconnects/reconnects, licenses in use/released, server/application load, etc.
Data Collectors
Attributes of Data Collectors

DCs distribute most persistent data changes to member servers for LHC update.

DCs handle all ICA client Resolution activity and should handle all Enumeration activity. ANY DC can Resolve ANY app for ANY client (DCs are peers in a multi-zone implementation).
Zones
Attributes of Zones

Logical, centrally configurable grouping of MetaFrame XP servers.

Each Zone has one Data Collector (DC).

Can span IP networks (LAN, WAN).

Aren't necessarily tied to an IP segment (only by default).
Zones
Attributes of Zones

Are useful for partitioning/controlling persistent data update traffic and for distributing ICA client Enumeration/Resolution traffic.

A Zone can contain up to 256 hosts without a registry modification.

In most cases, fewer zones are better!
Citrix Management Console (CMC)
Attributes of the CMC

The central management tool where 98% of farm configuration/maintenance occurs.

An extensible framework that allows different tools to 'snap in'.

Doesn't need to run on a MetaFrame server.
Citrix Management Console (CMC)
Attributes of the CMC

Works through the IMA service (destination port 2513) to access the DS, DC, and member servers.

Should be run through a DC that has local access to the DS.

Is the most read/write-intensive use of the DS.
Demonstration: CMC in Action
MetaFrame XP’s Communication
Communication 'Layers' (5 of them)

IMA (server-to-server) Communication
• Persistent Data Events (1)
• Dynamic Data Events (2)
• Printer Management Events/Processes (3)

ICA Session (client-to-server) Communication
• Client Enumeration/Resolution (4)
• ICA Client to Server (5)
IMA Persistent Data Events (1)
Communication Events

IMA Service Initialization
• Read-heavy initialization/validation of the LHC

Periodic Consistency Check (configurable timing)
• Key: HKLM\Software\Citrix\IMA\DCNChangePollingInterval
• Default value: 600000 milliseconds REG_DWORD: 0x927C0

[Diagram: member servers, each with an LHC, validating against the DS (directly, or through the IMA host in indirect mode)]
IMA Persistent Data Events (1)
Communication Events

Farm modification through the CMC

Modifications happen through a 2-phase process:

1: The CMC commits the change to the DS.

2: The CMC/IMA packages and distributes changes <10K to the DCs, which then distribute them to member servers. If the change is >10K, it distributes a change notification and the servers perform a consistency check on the LHC.

If member servers are unavailable, they receive the change during the periodic LHC consistency check.
IMA Dynamic Data Events (2)
Communication Events

Any state change on a server (logon/logoff, disconnect/reconnect, load change) triggers a dynamic data update.

The member server notifies its DC of the change.

The member server's DC notifies ALL other DCs of the change.

Note: DCs have a peer-to-peer relationship. Every DC knows what every other DC knows.
IMA Dynamic Data Events (2)
Communication Events

Member server to zone DC heartbeat check.
• Key: HKLM\Software\Citrix\IMA\Runtime\KeepAliveInterval
• Default value: 60000 milliseconds REG_DWORD: 0xEA60

DC to DC consistency check.
• Key: HKLM\Software\Citrix\IMA\Runtime\Gateway\ValidationInterval
• Default value: 300000 milliseconds REG_DWORD: 0x493E0
IMA Printer Management Events (3)
Communication Events

Printer Management has a relatively substantial impact on IMA traffic.
ICA Session Communication (4)
Client Enumeration/Resolution

Client to MetaFrame XP: the client asking, and the server answering, 'what apps can I run?' and 'where do I go for this app?'

Enumeration (what apps…): client to MetaFrame XP server, TCP80 (default with TCP+HTTP server location) or UDP1604 (with TCP/IP server location); enumerated from the LHC on the MetaFrame server.

Resolution (where do I go…): client to MetaFrame XP DC, TCP80 (default with TCP+HTTP server location) or UDP1604 (with TCP/IP server location).
ICA Session Communication (4)
Client Enumeration/Resolution

NFuse to MetaFrame XP: NFuse asking (on behalf of the client) and MetaFrame XP answering the above questions.

Enumeration (what apps…): NFuse to MetaFrame XP server, TCP80 (default) or SSL; enumerated from the LHC on the MetaFrame server and presented to the web browser as hyperlinks.

Resolution (where do I go…): NFuse to MetaFrame XP DC, TCP80 (default) or SSL; builds an ICA file with the resulting info.

The client web browser talks HTTPS to the NFuse server for both processes.
ICA Session Communication (5)
ICA Client to Server

The actual ICA session stream from the 'Connector' (ICA client) to a MetaFrame XP server, destination port TCP1494 (default), launched from either UI.
IMA In Depth
Hardware and Software Configuration

Load up on processors and memory.

Have home directories on a separate server.

Use roaming profiles in multi-server environments.
• Q161334 - Guide to Windows NT 4.0 Profiles and Policies

NTFS partitions only (at least 4096-byte clusters).

Install only required network components and protocols.

Change drive letters at installation time only.
Hardware and Software Configuration

For 4- and 8-processor systems, use one controller for the OS and one for applications and temporary files.

Dedicate a drive to the page file for best performance.

Increase the Maximum Registry Size to 100 MB.

See MF Install and Tuning Tips for more info.
Selecting a Data Store

Direct Mode: IMA directly querying the database
• Microsoft SQL Server 7 or 2000
• Oracle 7.3.4, 8.0.6, or 8.1.6

Indirect Mode: IMA requesting another server to query the database on its behalf
• Gathering its DS information indirectly from another server that is accessing the DS directly
Data Store Info
Indirect Mode

Select 'Use a local database as the data store' on the first server installed to enable Indirect mode to Access (Direct Mode is not available for Access). All subsequent servers joining the farm must be installed with the 'Connect to a data store set up locally on another server' option.

The first server will be the Zone DC by default.

The server hosting the Access DS will be the only server to write to the Access database.

The server hosting the DS in Access acts as a proxy for all other servers.

This overcomes the file locking and corruption problems common with Access.
Data Store Info
Using Access

Approximately 20 MB of disk space should be available for every 100 servers in the farm.

32 MB of additional RAM is recommended if the MetaFrame XP server will also host connections.

MDAC 2.5 SP1 must be installed on TSE. Stop the TS Licensing Service before installing MDAC, then reboot.

%ProgramFiles%\Citrix\Independent Management Architecture\MF20.MDB (System must have read/write access).

The default user name/password is citrix/citrix. To change the password on the database, use the dsmaint config /pwd:newpassword command with the IMA service running.
Data Store Info
Using Access

Each time the IMA service is stopped gracefully, the existing mf20.mdb file is backed up, compacted, and copied as mf20.unk. Each time the IMA service starts successfully, it deletes any existing instance of mf20.bak and then renames the mf20.unk file to mf20.bak. This file is used when the dsmaint recover command is executed.

If the server runs out of disk space on the drive where the mf20.mdb file is stored, the automatic backup stops functioning. Always ensure there is enough disk space to hold 3 times the size of mf20.mdb.

Perform backups of the DS with DSMAINT BACKUP.
Data Store Info
Using SQL

Approximately 20 MB of disk space for every 100 servers in the farm. The disk space used may increase if there are a large number of published applications in the farm.

The temp database should be set to Auto Grow on a partition with at least 1 GB of free space (4 GB is recommended if it is a large farm with multiple print drivers).

Verify that enough disk space exists on the server to support growth of both the temp database and the farm database.

Use MDAC 2.5 SP1 on TSE. Do not use MDAC 2.6 with SQL 2000; there is a known bug.
Data Store Info
Using SQL

When using Microsoft SQL Server in a replicated environment, be sure to use the same user account on each Microsoft SQL Server for the DS.

Each MetaFrame XP farm requires a dedicated database. However, multiple databases may be running on a single Microsoft SQL Server.

The MetaFrame XP farm should not be installed in a database that is shared with any other client-server applications.

Databases should have the 'Truncate log on Checkpoint' option set to keep log space controlled.

Ensure the DS is backed up whenever a change is made via the CMC.
Data Store Info
Using SQL

For high-security environments, Citrix recommends using NT Authentication only.

The account used for the DS connection should have db_owner (DBO) rights on the database that is being used for the DS.

If tighter security is required, after the initial installation of the database as DBO, the user permissions may be modified to be read/write only.

If installing more than 256 servers in a farm, increase the number of worker threads available to the database.
Data Store Info
Using Oracle

Approximately 20 MB of disk space for every 100 servers in the farm. The space used may increase if there are a large number of published applications in the farm.

The Oracle Client (version 8.1.55 or 8.1.6) must be installed on the terminal server prior to the installation of MetaFrame XP. The 8.1.5 and 8.1.7 clients are not supported with MetaFrame XP.

The server should be rebooted after installation of the Oracle Client, or the MetaFrame XP installation fails to connect to the DS.
Data Store Info
Using Oracle

Oracle8i version 8.1.6 or later is recommended. However, Oracle7 (7.3.4) and Oracle8 (8.0.6) are supported for the MetaFrame XP platform.

Creating a separate tablespace for the DS simplifies backup and restoration operations.

A small amount of data is written to the system tablespace. If experiencing installation problems, verify that the system tablespace is not full.

Using Shared/MTS (Multi-Threaded Server) mode may reduce the number of processes in farms over 200 servers. Consult the Oracle documentation on configuring the database to run in MTS mode.
Data Store Info
Using Oracle

Oracle for Solaris supports Oracle authentication only.

The Oracle user account must be the same for every server in the farm because all servers share a common schema.

This account needs the following permissions: Connect, Resource.
Dedicating a server for Indirect Mode

May be necessary when the following occurs:
• Delays in using the CMC
• Increased IMA service start times due to high CPU utilization on the server hosting the DS

Cut maximum users to one half to two thirds of full load to improve performance.
Bandwidth Requirements

In a single-server configuration, a single server reads approximately 275 KB of data from the DS. The amount of data read is a function of the number of published applications in the farm, the number of servers in the farm, and the number of printers in the farm. The number of kilobytes read from the DS during startup can be approximated by the following formula:

KB Read = 275 + 5*Srvs + 0.5*Apps + 92*PrintD

Where:
Srvs = Number of servers in the farm
Apps = Number of published applications in the farm
PrintD = Number of print drivers installed on the member server
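The startup-read formula above translates directly into a small helper. This is an illustration of the document's formula only; the function name is ours:

```python
def ds_startup_read_kb(servers: int, apps: int, print_drivers: int) -> float:
    """Approximate KB read from the Data Store by one member server at
    IMA service startup, per the formula above."""
    return 275 + 5 * servers + 0.5 * apps + 92 * print_drivers

# Example: a 50-server farm with 100 published apps and 10 print drivers
# reads roughly 275 + 250 + 50 + 920 = 1495 KB per server at startup.
print(ds_startup_read_kb(50, 100, 10))
```

Note how the print-driver term (92 KB each) dominates quickly, which is why the deck later steers printer-heavy farms toward SQL or Oracle.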
Data Store Info

High Latency WAN Concerns
• Without the use of replicated databases, maintenance may create situations where the DS is locked for extensive periods of time.
• In a high latency situation, reads should not adversely affect any local connections, but the remote site may experience slow performance.

Replicated Databases
• Speed up performance if there are enough MetaFrame servers to justify the cost.
• Database replication will consume bandwidth, but this is controlled through the database chosen, not MetaFrame.
Data Store Info

Access is best used for centralized farms.

Access supports only indirect mode for other servers, and as such will have slower performance than a direct mode DS on large farms.

Database replication is not supported with Access.

Databases supporting replication should be used when deploying large farms across a WAN.

Server farms with over 100 servers should use SQL or Oracle to remain at acceptable performance levels.
Data Store Info

Farms using excessive printer drivers and scheduled replication should use SQL or Oracle.

Farms that cycle boot large groups of servers simultaneously should use SQL or Oracle in direct mode to minimize the IMA service start time.

Microsoft SQL Server and Oracle are very similar in performance. In the Citrix Test eLabs, both database servers performed similarly with large farms. The choice between the two should be based on the feature sets of the databases, in-house expertise, management tools, and licensing costs rather than performance numbers.

Use Microsoft Clustering Services with SQL Server, or Oracle Parallel Server with Oracle, for fault tolerance.
Data Store Info

DS Query Interval
• Key: HKLM\Software\Citrix\IMA\DCNChangePollingInterval
• Default value: 600000 milliseconds REG_DWORD: 0x927C0

If a member server is unable to contact the data store for 48 hours, licensing will stop functioning on the member server.

The CMC always connects directly to the DC.

For changes > 10K in size, all member servers in the farm will be sent a change notification and will query the DS for the change.
Data Distribution with Data Collectors

1. Server 1 writes information to the DS.
2. Server 1 sends a change notification to its zone DC.
3. The zone DC distributes the change notification to all member servers in its zone.
4. Other zone DCs receive the notification and distribute it to all member servers within their respective zones.
5. All member servers receive the notification and update their LHC as requested.
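The five distribution steps above can be sketched as a message count. This is our own illustration of the fan-out, not a Citrix tool; it assumes the writing server sits in the first zone and that each zone DC notifies every other member of its zone:

```python
def change_notification_messages(zone_sizes: list[int]) -> int:
    """Count IMA messages generated by one persistent-data change,
    per the steps above: one DS write, one notification to the
    writer's zone DC, one DC-to-DC notification per other zone,
    and one notification from each zone DC to the other member
    servers in its zone."""
    zones = len(zone_sizes)
    messages = 1                      # step 1: server writes to the DS
    messages += 1                     # step 2: server notifies its zone DC
    messages += zones - 1             # step 4: that DC notifies all other DCs
    for size in zone_sizes:
        messages += size - 1          # steps 3/4: each DC fans out in-zone
    return messages

# Two zones of 10 servers each: 1 + 1 + 1 + 9 + 9 = 21 messages.
print(change_notification_messages([10, 10]))
```

The in-zone fan-out term grows with farm size, which is the deck's rationale for keeping the number of zones small.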
Data Distribution with Data Collectors

Inter-zone connection formula
• N * (N-1)/2, where N is the number of zones in the farm

IMA ping configuration parameter
• Key: HKLM\Software\Citrix\IMA\Runtime\KeepAliveInterval
• Default value: 60000 milliseconds REG_DWORD: 0xEA60

Zone DC synchronization parameter
• Key: HKLM\Software\Citrix\IMA\Runtime\Gateway\ValidationInterval
• Default value: 300000 milliseconds REG_DWORD: 0x493E0

Maximum hosts per zone parameter
• Key: HKLM\Software\Citrix\IMA\Runtime\MaxHostAddressCacheEntries
• Default value: 256 entries REG_DWORD: 0x100
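The inter-zone connection formula, and the decimal/hex pairs quoted for the registry defaults, are easy to sanity-check in a few lines (a sketch of our own; the function name is an assumption):

```python
def inter_zone_connections(zones: int) -> int:
    """Number of DC-to-DC connections in a farm: every pair of zone
    Data Collectors keeps a connection, so N * (N - 1) / 2."""
    return zones * (zones - 1) // 2

# 4 zones -> 6 DC-to-DC connections.
print(inter_zone_connections(4))

# The registry defaults quoted above are the same value in decimal and hex.
assert 0xEA60 == 60000      # KeepAliveInterval
assert 0x493E0 == 300000    # ValidationInterval
assert 0x100 == 256         # MaxHostAddressCacheEntries
```

The quadratic growth of DC-to-DC connections is another reason the deck recommends fewer zones.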
Data Distribution with Data Collectors

Bandwidth requirements between zones (approximate, per event):
• Connect: ~3 Kb
• Disconnect: ~2.25 Kb
• Reconnect: ~2.91 Kb
• Logoff: ~1.50 Kb
• CMC: ~2.23 Kb
• Application Publishing: ~9.07 Kb
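The per-event figures above can be combined into a rough inter-zone traffic estimate. This is our own illustration using the deck's numbers; the function and dictionary names are assumptions:

```python
# Approximate per-event inter-zone traffic, in Kb, from the table above.
EVENT_KB = {
    "connect": 3.0,
    "disconnect": 2.25,
    "reconnect": 2.91,
    "logoff": 1.50,
    "cmc": 2.23,
    "app_publish": 9.07,
}

def inter_zone_traffic_kb(event_counts: dict) -> float:
    """Rough inter-zone traffic estimate: sum of per-event costs."""
    return sum(EVENT_KB[name] * count for name, count in event_counts.items())

# A busy hour with 200 connects and 200 logoffs: 200*3 + 200*1.5 = 900 Kb.
print(inter_zone_traffic_kb({"connect": 200, "logoff": 200}))
```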
Data Collector Elections

Each zone is responsible for electing its own data collector (DC). By default, the first server in the farm becomes the DC and is set to Most Preferred. If the setting is changed from Most Preferred, another election will take place. DC elections are won based on the following criteria:

1. Highest Master Version Number (1 for all MetaFrame XP 1.0 servers)
2. Lowest Master Ranking (1 = Most Preferred – 4 = Not Preferred)
3. Highest Host ID (0-65536, randomly assigned at installation)
Data Collector Elections

To view a server's ranking, use Queryhr (copy it from support\debug\i386 on the CD).

DC elections are triggered in the following situations:
• A member server loses contact with the DC.
• The DC goes offline.
• A farm server is brought online.
• The querydc -e command is executed to force an election.
• Zone configurations are changed (i.e. zone name, election preference, adding or removing servers).
Data Collector Elections

When a new DC is elected, all servers in the zone send a complete update to the new DC. The following formula can be used to approximate the amount of data in bytes sent by all servers in the zone to the new zone DC:

Bytes = (11000 + (1000 * Con) + (600 * Discon) + (350 * Apps)) * (Srvs - 1)

Where:
Con = Average number of connected sessions per server
Discon = Average number of disconnected sessions per server
Apps = Number of published applications in the farm
Srvs = Number of servers in the zone
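The election-update formula above is straightforward to turn into a helper (an illustration of the document's formula; the function name is ours):

```python
def election_update_bytes(con: float, discon: float, apps: int, srvs: int) -> float:
    """Approximate bytes sent to a newly elected zone DC by the other
    servers in the zone, per the formula above."""
    return (11000 + 1000 * con + 600 * discon + 350 * apps) * (srvs - 1)

# A 20-server zone with 15 connected and 2 disconnected sessions per
# server and 50 published apps:
# (11000 + 15000 + 1200 + 17500) * 19 = 849,300 bytes.
print(election_update_bytes(15, 2, 50, 20))
```

The (Srvs - 1) multiplier means a DC election in a large zone generates a noticeable burst of IMA traffic, which is worth anticipating when zones span a WAN.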
Local Host Cache
Attributes of the Local Host Cache

A subset of the Data Store, stored on each individual server (IMALHC.MDB).

Contains basic info about servers in the farm, published apps and properties, trust relationships, and server-specific configs (product code, SNMP settings, load evaluators, etc.).

Used for initialization if the DS is down.

Used for ICA client application Enumeration.
Local Host Cache

On the first startup of the member server, the LHC is populated with a subset of information from the DS. From then on, the IMA service is responsible for keeping the LHC synchronized with the DS. The IMA service performs this task through change notifications and periodic polling of the DS.

In the event the DS is unreachable, the LHC contains enough information about the farm to allow normal operations for up to 48 hours.

During this "grace" period, the server continues to service requests while the IMA service attempts to connect to the DS periodically (based on the DS query interval as described in the Data Store Activity section of the IMA Architecture chapter of this document). If the DS is unreachable for 48 hours, the licensing subsystem fails to verify licensing and the server stops taking incoming connections.
Local Host Cache

Because the LHC holds a copy of the published applications and NT trust relationships, ICA Client application enumeration requests can be resolved locally by the LHC. This provides a faster response to the ICA Client for application enumerations because the local server does not have to contact other member servers or the zone DC. The member server must still contact the zone DC for LM resolutions.

If the IMA service is currently running but information in the CMC appears to be incorrect, a refresh of the LHC can be manually forced by executing dsmaint refreshlhc from the command prompt of the affected server. This action forces the LHC to read all changes immediately from the DS.
Local Host Cache

If the IMA service does not start, it may be caused by a corrupt LHC. To recreate the LHC:

1. Verify the DS is available before continuing, because this procedure causes the LHC to be reloaded directly from the DS.
2. Stop the IMA service on the MetaFrame server.
3. Launch the ODBC Data Source Administrator. On Windows 2000, choose Control Panel | Administrative Tools | Data Sources (ODBC). On TSE, choose Control Panel | ODBC Data Sources.
4. Select the File DSN tab.
Local Host Cache

5. Open the imalhc.dsn file, located in %ProgramFiles%\Citrix\Independent Management Architecture by default.
6. Once that file is selected, click Create on the ODBC Setup screen.
7. Enter any name besides imalhc for the new LHC database. Optionally, rename the old imalhc and reuse the name.
8. Exit the ODBC Data Source Administrator.
Local Host Cache

9. Modify the following registry value:
Key: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\RUNTIME
Value: PSRequired REG_DWORD: 0x1
10. Restart the IMA service.

Note: The DS server must be available for this procedure to work. If the DS is not available, the IMA service fails to start until the DS is available.
Security Considerations
IMA Security

Always install on NTFS partitions.

Make sure the LHC is secure:
• %SystemDrive%\Program Files\Citrix\Independent Management Architecture
• Give access to the "System" and "Administrators" groups with Full Control only

Run the CMC from the console only.

Run the CMC as a published application if you want to run it on a separate machine.

Make sure encryption is used for traffic between the DS and MetaFrame servers.

Run the MFCfg.exe utility and remove the "Everyone" group from each of the listeners.
IMA Security

When using the "Local Database," the MS Access username/password is citrix/citrix. This should be changed using DSMAINT.

The user account used to access the SQL Server database must have the "public" and "db_owner" roles on the database that houses the DS. Do not grant other user accounts access to this database.

SA accounts are not needed for DS access with SQL Server. If using Oracle, do not use the SYSTEM or SYS account.
IMA Security

A domain user group should be used to administer MetaFrame servers:
• In the accounts domain, create a global group called "MFAdmins".
• Add domain users who will have administrative privileges to the MFAdmins global group.
• Add the MFAdmins global group to each MetaFrame server's local administrators group.
• Whenever a new user account needs to be configured for MetaFrame admin privileges, add the new account to the MFAdmins global group.

Active Directory
• For a single AD domain, use a Domain Local Group.
• For farms that span a forest, use a Universal Group.
Optimizations
Disk Subsystem

Disk Caching

Lazy writes occur when data is cached instead of immediately written to disk. If data is being sent across the network or the server has a caching controller card, disabling lazy writes improves performance. Network and local lazy writes can be disabled by modifying the following registry settings:

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value: IRPStackSize REG_DWORD: 0x6

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
Value: UtilizeNtCaching REG_DWORD: 0x0
Disk Subsystem

I/O Locks

The registry setting IoPageLockLimit specifies the limit on the number of bytes that can be locked for I/O operations. Since RAM is being sacrificed for increased disk performance, the optimal setting for this value should be determined through pilot tests. Changing this setting from the default can speed up file system activity. Use the table below as a guide for changing the registry setting.

Server RAM (MB)   IoPageLockLimit (Decimal)   IoPageLockLimit (Hex)
64-128            4096                        1000
256               8192                        2000
512               16384                       4000
1024+             65536                       10000
Disk Subsystem

The registry setting can be modified as follows:
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
Value: IoPageLockLimit REG_DWORD
Default: 0 (512 KB is used)

For additional information on the IoPageLockLimit registry setting, refer to Microsoft Knowledge Base articles Q121965 and Q102985.
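The IoPageLockLimit guideline table lends itself to a small lookup, and also lets us confirm the decimal and hex columns agree. This is our own sketch; the function name and the fallback for sub-64 MB servers are assumptions:

```python
# IoPageLockLimit guideline table from above: RAM tier (MB) -> bytes.
IO_PAGE_LOCK_LIMIT = {128: 4096, 256: 8192, 512: 16384, 1024: 65536}

def suggested_io_page_lock_limit(ram_mb: int) -> int:
    """Return the suggested IoPageLockLimit (decimal) for a server,
    picking the largest table tier not exceeding the installed RAM."""
    eligible = [tier for tier in IO_PAGE_LOCK_LIMIT if tier <= ram_mb]
    return IO_PAGE_LOCK_LIMIT[max(eligible)] if eligible else 4096

# The decimal and hex columns of the table describe the same values.
assert suggested_io_page_lock_limit(512) == 0x4000    # 16384
assert suggested_io_page_lock_limit(2048) == 0x10000  # 65536
```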
Disk Subsystem

Last Access Update
 The NTFS file system stores the last time a file is
accessed, whether it is viewed in a directory listing,
searched, or opened. In a multi-user environment, this
updating can cause a small performance decrease.
Modifying the following registry setting disables this
feature:
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Control\FileSystem
Value: NtfsDisableLastAccessUpdate REG_DWORD: 1
Memory Subsystem

The paging file should be placed on its own disk controller or on a partition that is separate from the OS, application, and user data files. If the paging file must share a partition or disk, place it on the partition or disk with the least amount of activity.

Always set the paging file initial size to be the same as the maximum size to prevent disk fragmentation of the paging file. The optimal size of a paging file is best determined by monitoring the server under a peak load. Set the paging file to be 3-5 times the physical RAM, and then stress the server while observing the size of the paging file. To conserve resources, the paging file should then be set to a value slightly larger than the maximum utilized while under load.
Memory Subsystem

Single-server scalability may be improved by
manually adjusting the page table entries (PTE) in
the registry. The NT kernel uses PTE values to
allocate physical RAM between two pools of
memory. By manually setting the maximum space
allocated to the System PTE, the remaining space
may be used to increase the number of users
supported on the server. Determining the optimal
configuration for PTE values is a complex task. For
detailed information see the Microsoft Knowledge
Base article Q247904. A Kernel Tuning Assistant for
Windows 2000 servers is also available from
Microsoft.
Network Subsystem

Most 10/100-based network cards auto-sense the network
speed by default. Manually setting these cards prevents the
auto-sensing process from interfering with communication
and forces the desired speed.
 If working in a mixed Windows 2000 and TSE environment,
additional performance can be gained by modifying the
network request buffer size on the TSE servers. Increasing
this value to 65536 bytes, from the default of 4356 bytes,
significantly improves LAN Manager file writes. For more
information, see Microsoft Knowledge Base article Q279282.
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\LanmanServer\Parameters
Value: SizReqBuf REG_DWORD: 65536
Range: 512 bytes to 65536 bytes
Network Subsystem

The server may refuse connections due to self-imposed limits
specified by the MaxMpxCt and MaxWorkItems registry
values. Users may see the following errors:
“System could not log you on because domain <domainname>
is not available”
“You do not have access to logon to this session”
Before changing these values, read the Microsoft Knowledge
Base article Q232476. When modifying these registry
settings, be sure that the MaxWorkItems value is always 4
times the MaxMpxCt value. Suggested new values for
MaxMpxCt and MaxWorkItems are 1024 and 4096
respectively.
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\LanmanServer\Parameters
Network Subsystem
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\LanmanServer\Parameters
Value: MaxWorkItems REG_DWORD: 4096
Value: MaxMpxCt REG_DWORD: 1024
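The one invariant to keep straight is the 4x relationship. A trivial helper (ours, not a Citrix tool) makes it hard to get wrong:

```python
def work_items_for(max_mpx_ct):
    """MaxWorkItems should always be 4 times MaxMpxCt (per Q232476)."""
    return 4 * max_mpx_ct

# The suggested 1024/4096 pair follows the rule:
assert work_items_for(1024) == 4096
```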
 To ensure that a host server is quickly aware of dropped
sessions, the two TCP registry settings listed below can
be modified with the following moderately aggressive
values:
Key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\Tcpip\Parameters
Value: KeepAliveTime REG_DWORD: 0000ea60
Value: KeepAliveInterval REG_DWORD: 000003e8
Network Subsystem
Caution: Aggressive parameters may cause TCP/IP-based
communications to time out prematurely. These parameters
should be adjusted as necessary to prevent this behavior.
For more information, see CTX708444: How to Set TCP
Keep Alives so TCP/IP Users Go To Disconnected State
in the Citrix Knowledge Base.
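For reference, the two hex values above are plain millisecond counts. A quick conversion shows what they mean on the wire:

```python
# KeepAliveTime and KeepAliveInterval are REG_DWORD millisecond counts.
keep_alive_time_ms = 0x0000EA60      # probe an idle connection after 60 s
keep_alive_interval_ms = 0x000003E8  # retry unanswered probes every 1 s

assert keep_alive_time_ms == 60_000
assert keep_alive_interval_ms == 1_000
```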
Server Configuration
 In some instances, modifying the NT application
performance setting may provide an additional
performance boost. Set the Application performance
slider to None.
 When opening remote procedure call (RPC) aware
applications such as Windows Explorer and Control
Panel, delays of several minutes may be the result of
incorrect service startup settings. Verify that the RPC
service Startup type is set to Automatic and the RPC
Locator service Startup type is set to Manual.
 Set Server Service to Maximize throughput for Network
Applications.
 Disable AutoGrammar for Microsoft Word.
MetaFrame XP
Features Revealed
MetaFrame XP Management
Centralized Administration
Single Point Command and Control

All administration, configuration, monitoring and control of the
Citrix Server Farm is managed centrally.
Independent Management Architecture

IMA-compliant servers and management products share a common
and extensible management infrastructure.
Unified Management Console

The Citrix Management Console communicates across a single
Management Scope of the server farm using the IMA protocol.
Central Data Store

Configuration information for the Server Farm is stored centrally in
the Citrix Data Store.
Citrix Management Console
Centralized License Management
Description: Licenses are installed into the Central Data Store and
managed centrally from the Citrix Management Console.
Benefit: Administrators can easily manage all of the licenses for
the Citrix Server Farm from a single point.
License Pooling Across Subnets
Description: ICA Connection licenses for client connections
can be pooled across the entire server farm regardless of
whether the server farm crosses network subnets.
Benefit: Enables pooling of ICA connection licenses across
the extended enterprise network within the MetaFrame Server
Farm.
(Diagram: Data Centers 1, 2, and 3 on subnets 10.1.x.x, 10.2.x.x,
and 10.3.x.x share the farm's license pool; only 1 ICA license is
utilized for the connection.)
License Fail Over and Redundancy
Description: MetaFrame XP allows licenses to be available
for fail over and redundancy. In the case of a server failure
the server and connection license are still available.
Benefit: Administrators have the ability to bring up “warm”
backup servers in the event of a server failure without the
need to re-install and re-activate their server licenses.
MetaFrame XP License Management
Centralized License Management
Flexible licensing for emerging business models

Increased flexibility to support Citrix Licensing Programs (Shrink
Wrap, CLP, ELP, and iLicense).
Single point of license installation and activation

License installation and activation can be done centrally via the
Citrix Management Console.
Support for multiple server/product platforms

The new licensing system supports all MetaFrame XP server and
connection licenses.
Active Directory Support
Application Publishing

Enables application publishing to users and groups in Active
Directory.
Account Authority Access

Utilizes native Active Directory Interfaces to access the Active
Directory.
User Principal Names

Allows users to logon to the MetaFrame XP server using User
Principal Names: i.e. [email protected].
NFuse and Program Neighborhood

Enables users to utilize their Active Directory accounts to access
MetaFrame XP applications via NFuse and Program Neighborhood.
Active Directory Support
Description: Applications can be published on MetaFrame
XP servers and assigned to users from Active Directory.
Benefit: MetaFrame XP integrates and fully supports
Application Publishing in a native Active Directory
environment.
(Diagram: from the Citrix Management Console, an application is
published for ADS users; user and group accounts are enumerated
directly from Active Directory.)
Printer Management
Printer Management
 Printer Driver Replication
 Printer Mapping
 Network Printer Auto-Creation
 Printer Compatibility
 Printer Bandwidth Control
 Terminal Printer Auto-Creation
 Client Printer Creation Logging
Printer Management
Printer Mapping

Ability to create mappings for Windows 9X client printers on the
MetaFrame XP server and automatically distribute to the server
farm.
Printer Bandwidth Control
 Allows the administrator to specify the amount of bandwidth that
can be used by printing over the client connection.
 Setting is used for all users over all connections for a given
MetaFrame XP server.
Terminal Printer Auto-Creation

Ability for the administrator to setup auto-creation of printers for
ICA DOS and WinCE Terminal Devices.
Client Printer Creation Logging

Logs all information about auto-creation of client printers, allowing
the administrator to proactively detect printer issues and resolve
them with the required information.
Printer Driver Replication
Description: Through the Citrix Management Console, printer
drivers can be managed across the entire server farm.
Benefit: Gives administrators the power and control to manage and
distribute printer drivers to all of the MetaFrame servers in the server
farm, providing a consistent printing environment for all users.
(Diagram: install a new printer driver on one server, then use the
CMC to manage and distribute the new driver to the entire server
farm.)
Printer Driver Mapping
Description: Printer drivers on different platforms often have
differing names which can interfere with client printer creation.
Printer driver mapping enables administrators to control
differing printer drivers.
Benefit: Allows the administrator to specify mappings of
printer driver names from one platform to another (e.g.,
Windows 95 to Windows 2000).
Printer Compatibility
Description: Printer compatibility allows the administrator to specify
client printers that can be used in the MetaFrame environment or
specify printers that can never be used.
Benefit: Gives administrators the power to determine and control
the types of client printers that can be utilized on the MetaFrame XP
servers enabling them to ensure a consistent and stable computing
environment.
(Diagram: an ICA Client attempts a connection with an "ABC
Printer" that appears on the restricted list ("XYZ Printer", "ABC
Printer"), so printer creation is disabled for that session.)
Printer Bandwidth Control
Description: Bandwidth limits can be specified for printing
from an ICA Client.
Benefit: Allows the administrator to control and specify the
amount of bandwidth that can be used for printing in the
MetaFrame XP server farm.
Terminal Printer Auto-Creation
Description: Printers connected to ICA DOS and WinCE
terminal devices can be pre-defined for auto-creation from the
CMC.
Benefit: When users login to MetaFrame from the Terminal
devices, the pre-defined printer will be auto-created without
any user interaction.
Client Printer Creation Logging
Description: All information related to client printer creation
is logged in the system event log.
Benefit: Gives administrators the power and information to
proactively detect and resolve client printer issues.
Printer Management Recommendations
Recommendations
 Printer drivers can only be replicated to servers running the same
OS as the source server.
 Install drivers on the source server and select any available port on
the server.
 If installing for the sole purpose of replication, there is no need to
share the printers or set them as default.
 Replication can be very CPU intensive on the source server, so avoid
replicating drivers while the source server is under heavy load.
Printer Queue Management
 #QueueEntries = [#Drivers] * [#Servers]
 Every driver/server combination creates a queue item in the
printer replication queue.
 The queue should not exceed 1500 entries in length.
 E.g., 30 drivers replicated to 50 servers creates 1500 entries.
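The queue-sizing rule is easy to check before kicking off a replication job. A sketch (the helpers are ours; the 1500-entry ceiling is from the guidance above):

```python
def replication_queue_entries(num_drivers, num_servers):
    """Every driver/server combination creates one entry in the
    printer replication queue."""
    return num_drivers * num_servers

def queue_within_limit(num_drivers, num_servers, limit=1500):
    """True if a planned replication stays within the suggested ceiling."""
    return replication_queue_entries(num_drivers, num_servers) <= limit

# 30 drivers replicated to 50 servers sits exactly at the ceiling:
assert replication_queue_entries(30, 50) == 1500
assert queue_within_limit(30, 50)
assert not queue_within_limit(40, 50)
```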
QPRINTER Utility
 Not installed by default.
 Located in \support\debug\i386.
 QPRINTER /REPLICA
Expected Performance
 Handled by the IMA Service at very low priority.
 Depends on network traffic and server load.
Shadowing Enhancements
Shadowing Installation Option
 Ability to select whether shadowing is available.
 Lock down the shadowing configuration to avoid changes.
 Allows administrators flexibility with privacy and security issues
involving shadowing.
Shadow Indicator
 Notifies users that shadowing is in progress.
 Provides users with a “cancel” button to end the shadow.
Shadow Activity Logging
 Logs all session and user information during a shadow.
 Enables the creation of a shadow “audit log”.
ICA Client Enhancements
Published Application Parameter Support

Enables the MetaFrame server to accept published application
parameters provided by a client, and the client to pass published
application parameters to the server.
ICA Client Object Interface

A framework that exposes the functionality of the Citrix ICA Win32
Client to other objects or applications. Allows any application that
supports embedding of objects, to interface with and pass
instructions to the ICA Client.
Per Session Time Zone Support

Ability to run applications on the MetaFrame server in the context
of the users local time zone. The MetaFrame Server can support
different users running applications at different time zones on the
same server.
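A quick sketch of what "running in the user's local time zone" means in practice: the server clock is one instant, but each session renders it at the client's own offset (the offsets here are illustrative):

```python
from datetime import datetime, timedelta, timezone

# One instant on the server clock...
server_utc = datetime(2001, 6, 1, 12, 0, tzinfo=timezone.utc)

# ...rendered per session at each client's own offset.
redmond = server_utc.astimezone(timezone(timedelta(hours=-8)))
london = server_utc.astimezone(timezone(timedelta(hours=0)))

assert redmond.hour == 4    # 12:00 GMT is 04:00 at GMT-8
assert london.hour == 12
```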
ICA Client Object
Provides a programmable interface for integration of ICA Clients into:
 Portals
 Dashboards
 Vertical market applications, etc.
Supports major web browsers:
 Internet Explorer 4.0 and greater
 Netscape 4 and greater
Supports ActiveX “containers”:
 MS Office, MS Visual Studio tools, Borland Delphi, etc.
Per Session Time Zone Support
(Diagram: ICA Clients in Redmond (GMT-8), Salt Lake City (GMT-7),
and London (GMT) connect to a MetaFrame XP server in Ft.
Lauderdale (GMT-5); published applications run in the context of
each user's local time zone.)
NFuse Ready
NFuse Ready
 NFuse now integrated into MetaFrame XP.
 NFuse install option if IIS detected.
 Sets up default web and startup page.
 In short: you can now ACCIDENTALLY deploy NFuse!
MetaFrame XP
Advanced
Management
Load
Management
Load Management (MetaFrame XPa and XPe)
Load Management
 Configuration of application load balancing.
 Monitoring of application and server load.
 Dynamic adjustment of load balancing criteria.
 Citrix Load Management replaces Load Balancing Services in
MetaFrame 1.8.
 Load Management utilizes IMA for communication.
 Provides the ability to create criteria for servers and applications.
 Load Management code built into MetaFrame XPa/XPe (no separate
CD-ROM).
Major Components
Major Components of Load Management
 Rules
 Measure statistics for high or low loads on servers.
 Lower a rule to reach a threshold more quickly.
 Elevate a rule to make a threshold harder to reach.
 Load Evaluators
 Used to configure server load measurements.
 Use Default for Citrix-provided load evaluators.
 Use Advanced to create your own.
 Can vary on each server and/or application.
 Can use any combination of rules and load evaluators per server
across the farm.
Load Management Criteria
Load Management Criteria
CPU Utilization
Memory Usage
Page Swap
Page Fault
Server User Load
IP Range (new)
Scheduling (new)
Context Switches (new)
Disk Data I/O (new)
License Threshold (new)
Application User Load (new)
Disk Operations (new)
Load Management Criteria
IP Range

Using the IP Range rule, an administrator can specify a distinct
address or set of addresses that can access the published
application.
Scheduling

Using the Scheduling rule, administrators can create a Load
Evaluator that allows access to a specific application or server only
during specified days and times.
Load Monitoring
Load Monitoring

Load management provides monitoring capabilities that allow
extended analysis of how load evaluation criteria are affected in the
enterprise. With monitoring capabilities and trend graphs,
evaluation criteria can be monitored and adjusted over time.
Pre-Configured Load Evaluators
Default
 Contains one rule, Server User Load, which represents the number
of users logged onto a MetaFrame XP server.
 Reports a full load when 100 users log on to the attached server.
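As a toy illustration only, not Citrix's actual load calculation, a Server User Load rule can be pictured as mapping current logons onto a percentage of full load:

```python
def server_user_load(current_users, full_load_users=100):
    """Toy sketch of a Server User Load rule: report load as a
    percentage, saturating at full load (100 users by default)."""
    return min(100.0, 100.0 * current_users / full_load_users)

assert server_user_load(50) == 50.0    # half full
assert server_user_load(100) == 100.0  # full load at 100 users
assert server_user_load(150) == 100.0  # never reports beyond full
```

Lowering the threshold (the `full_load_users` parameter here) makes full load easier to reach, which is the "lower a rule" behavior described above.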
Advanced

The rules in this load evaluator represent server performance
using:
 Disk I/O
 CPU Utilization
 Disk Operation
 Memory Usage
System
Monitoring and
Analysis
System Monitoring & Analysis (MetaFrame XPe)
System Monitoring & Analysis
 Uses a utility called “Resource Manager.”
 Ground-up re-write!
 Integrates with the Citrix Management Console:
 Adds tabs to the CMC.
 Can control summary data in the CMC.
 Configure Alert recipients in the CMC.
 Adds counters to each server for monitoring; can manage several
servers in the CMC.
 Monitors application usage by published applications.
 Watcher Window requires the CMC to monitor servers.
 Located on “System Monitoring & Analysis” CD-ROM.
Feature Categories
Real-time server monitoring

Receive real-time notification of server problems such as memory
shortage, hard disk space or CPU utilization.
Real-time application monitoring

See at a glance exactly how many application licenses are being
used farm wide.
Alerting

Receive notification via the CMC, SMS message to mobile phones,
SNMP or e-mail.
Commonly Used Terms
 Local database: A database created on every MetaFrame XPe
server for storing the real-time information.
 Farm metric server: Interprets farm-wide data and deals with alerts.
 Metric: A trackable item that Resource Manager measures for
servers or applications (e.g., memory available bytes).
 Resource Manager Application: An application which may or may
not have been published by MetaFrame, but which you have set up
to be monitored by MetaFrame.
CMC Integration
Description. System Monitoring and Analysis is integrated
into the Citrix Management Console.
Benefit. Provides a single point of control for monitoring and
managing your application serving environment.
Application Server Farm Monitoring
Description. Track applications to determine when new
licenses are needed. Configure options for servers—either
individually or farm-wide—to trigger alarms when various
thresholds are reached.
Benefit. Monitors applications and server farms as the
enterprise grows.
Real-time Graphing and Alerting
Description. Monitor the health and performance of
application servers in real time while simultaneously receiving
a wide selection of alerts, including on-screen notifications,
email messages sent to mobile phones and SNMP traps.
Benefit. Detects and resolves potential performance
bottlenecks before they become system failures.
Watcher Window
Description. Monitors parameters through a small watcher
window in the corner of the screen. If an alert is raised,
simply double click on the alert icon to access all relevant
information using the CMC interface.
Benefit. Allows for constant monitoring of a server farm while
working in other applications.
System Scalability
Description. System Monitoring and Analysis is designed to
scale with your MetaFrame XPe environment.
Benefit. Expanded management as servers and server farms
grow.
Intuitive User Interface
Description. Click on an application in the CMC to bring up a
menu of functions, including snooze, sleep, real-time graph,
properties and alarm options. Add and remove alarms with a
few clicks of the mouse.
Benefit. Allows easy and quick set up of alarms and monitors
performance by application.
Simplified Setup
Description. No database setup required—works out of the
box with MSDE.
Benefit. Simplifies installation and eliminates errors with
creating a database. Allows instant access to all features
across the environment.
Server and Application Groups
Description. Create server or application groups consisting
of specific users by department or location.
Benefit. Ease viewing and management of many servers.
Server Reboot Support
Feature Description. All managed servers can be scheduled
to reboot at specific times.
Benefit. Eliminate the tedious, time-consuming task of
manual reboots.
Application
Packaging and
Delivery
Application Packaging & Delivery
(MetaFrame XPe)
Application Packaging & Delivery
 Uses a utility called “Installation Manager.”
 Nearly a ground-up re-write!
 Integrates with the CMC.
 Configure the network account to be used by the installer service to
install packages.
 Can select to reboot servers post-installation.
 Define how often to expire and remove “jobs”.
 Define server groups and application packages.
 Status can be checked in Job properties.
 Located on “Application Packaging & Delivery” CD-ROM.
How Does It Work?
Package, Deliver, and Publish
 Using the Packager, software replication packages are
automatically created and prepared for distribution.
 Packages are then scheduled for delivery to the targeted servers
via the Citrix Management Console.
 Once delivered, applications can be published to provide instant
access through Citrix Program Neighborhood and NFuse.
 Applications can also be automatically uninstalled with a few clicks
of the mouse.
Installation Management
Key Elements
 Packaging. Configure once, automatically deliver to all, quickly and
accurately.
 Delivery. Choose the where, when and how of delivery and
installation with complete confidence. And verify the results, too.
Application Packaging
Description. Include changes to applications in an
installation script that can be automatically replicated on
multiple application servers. This includes unattended
installations where there is no need for application recording.
Benefit. Improves productivity by reducing the time and
effort to manage change in the application server
environment.
Service Pack Packaging
Description. Facilitates maintenance of application serving
environments by enabling the packaging, delivery and
installation of service packs and patches.
Benefit. Maintain applications and distribution of service
packs and patches quickly and easily with central
management, reducing need for additional IT staff.
File Packaging
Description. Packages individual files or groups of files for
distribution.
Benefit. Enables administrators to distribute company
templates and documents associated with a particular
application. They can also use this feature to distribute a
system profile to be used by everyone accessing the
application.
Improved User Interface
Description. Provides a greater degree of control over the
installation package, plus more feedback about the status of
the application being delivered.
Benefit. Enhances the user experience with greater control and
additional feedback about the status of the application being delivered.
Project Details
Description. View all project settings, including file details,
registry changes, and target directories within the project.
Benefit. Customize and plan during the creation of a
package.
Rollback
Description. Quickly and easily “wipe the slate clean” on the
packaging server following package creation to prepare for
new installations.
Benefit. Spend less time restoring the packaging
environment and more time deploying applications and
supporting users.
CMC Integration
Description. Access Installation Manager from the Citrix
Management Console.
Benefit. Enjoy a single point of control for managing the
entire server farm.
Package Delivery
Description. Deliver a package of applications, files and/or
service packs to multiple servers from a central point in
minutes instead of days or weeks.
Benefit. Save time and improve productivity by ensuring
rapid time-to-value for new or updated applications.
Scheduling
Description. Set up installations to occur automatically
during off-peak hours or on weekends.
Benefit. Conserve bandwidth and minimize user disruption.
Server Groups
Description. Create server groups based on different
categories, such as operating system, geographic location,
department or other user-defined criteria.
Benefit. Precisely target application delivery to the desired
servers.
Server Reboot Support
Description. Support applications that require server
rebooting upon completion of installation. Users connected to
the application server will automatically be notified prior to
server rebooting.
Benefit. Eliminate the tedious, time-consuming task of
manual reboots.
MSI Support
Description. Deploy any application that provides a
Microsoft Installer Package (MSI) without the need for
repackaging.
Benefit. Enjoy all the benefits of this common industry
standard, such as self-healing, install on demand and DLL
resolution.
Delivery Verification
Description. Status of application delivery to target servers
can be easily verified through Installation Manager.
Benefit. Enjoy added confidence when centrally delivering
applications.
Inventory
Description. Allows administrators to easily inventory all
applications delivered to a server using Installation Manager.
Benefit. Simplify the process of tracking software deployed
in large-scale, multi-application environments.
Network
Management
Network Management (MetaFrame XPe)
Network Management
 Network Management is an SNMP agent that runs on your
MetaFrame XP servers.
 It can be managed with any SNMP management service or utility.
 The SNMP agent is automatically installed with MetaFrame XPe.

Console plug-ins are available for:
 Tivoli NetView (v5.1.2 and above)
 HP OpenView (v6.0 only)
Plug-ins are located on the “Network Management” CD-ROM.
How Does It Work?
Simple Network Management Protocol (SNMP)
(Diagram: SNMP allows network devices to be monitored and
managed from a central location. SNMP Managers are applications
that collect SNMP data and receive SNMP events (traps).)
Standard SNMP Support
Description. Citrix now supports the most widely used
network management protocol, SNMP.
Benefit. Now conveniently integrates with a huge body of
existing software and hardware tools based on SNMP.
(Diagram: MetaFrame servers report via SNMP to an SNMP
Management Console.)
Integration with Market Leaders
Tivoli NetView with the MetaFrame XPe Plug-in
Monitor & Control MetaFrame Servers
Description. Discovers, monitors and controls MetaFrame
XPe servers in single or multiple farms.
Benefit. Conveniently monitor and control common
MetaFrame session and user status information across
multiple farms from a single console.
Monitor & Control MetaFrame Servers
Disconnect session, send message, logoff user,
and reboot server
Migrating to
MetaFrame XP
Why Migrate to MetaFrame XP?
 Increased farm scalability and stability
 Easier to manage with CMC
 Integrated advanced management capabilities
 Simplified license management and activation
 Printer management
 Enhanced NFuse integration
 Active Directory User Principal Name support
 Client time zone support
 Less server-to-server network traffic
 MetaFrame 1.8 and Feature Release 1 enhancements integrated and
available to more clients
Mixed Mode Is…
Mixed Mode is designed to facilitate migration to
MetaFrame XP with little or no end user
disruption
Provides support for:
 Published application migration
 Application load balancing
 Subnet license pooling
 Existing NFuse, Program Neighborhood, and Custom ICA
connections
Mixed Mode Is Not…
Mixed Mode is NOT designed to be a permanent
solution
Interoperability is achieved by emulating the
services and communication mechanisms used
by MetaFrame 1.8
Mixed Mode – Architecture Comparison
MetaFrame 1.8
(Diagram: NFuse reaches the XML Service over HTTP and XML;
ICA Clients use the Program Neighborhood named pipe and virtual
channel to the PN Service; QServer and the ICA Browser Service
communicate over UDP 1604 with other PN servers; AppCfg and
MFAdmin use the WinStation API (RPC) and remote registry against
Termsrv and the local NT registry.)
Mixed Mode – Architecture Comparison
MetaFrame XP
(Native Mode)
(Diagram: NFuse reaches the XML Service over HTTP and XML;
ICA Clients use the PN virtual channel; the CMC and other IMA
servers communicate with the IMA Service over TCP.)
Mixed Mode – Architecture Comparison
MetaFrame XP
(Interoperability Mode)
(Diagram: combines both architectures. The IMA Service runs over
TCP alongside the legacy MetaFrame 1.8 components: the XML
Service, the PN named pipe and virtual channel, QServer and the
ICA Browser Service on UDP 1604, and AppCfg/MFAdmin via the
WinStation API (RPC) and remote registry.)
Mixed Mode
Until you get to Native Mode, you can’t take full
advantage of:
 Increased farm scalability and stability
 Advanced printer management
 Active Directory UPN support
 Simplified license management and activation
 MetaFrame 1.8 license gateways are not supported.
 MetaFrame connection licenses are equally distributed among
subnets.
 CMC/Farm/Properties/Interoperability can change licenses
assigned to each subnet.
Mixed Mode
Running in Mixed Mode
 On first MetaFrame XP install, if MetaFrame 1.8 is detected on the
segment, it will offer to run in Mixed Mode.
 If yes, legacy tools are automatically installed.
 Administrators must use two sets of tools to manage a mixed farm.
 The appcfg shipped with MetaFrame XP is the same as MetaFrame
1.8 SP2. Older versions may not be able to manage applications
published with newer versions.
 Applications may be published on MetaFrame 1.8, then MetaFrame
XP, not the reverse.
Migration
Strategies
Migration Strategies – Flash Upgrade
All servers are upgraded to MetaFrame XP during
scheduled network maintenance window
 Consider this for highly centralized and/or cloned server
environments.
 Citrix now supports both unattended and cloned installs for all but
the first server in a MetaFrame XP farm. See specific
documentation in the Admin Guide.
 Note: Duplicate licenses will give an error upon migration to the
IMA Data Store.
Migration Strategies – Parallel
MetaFrame XP servers built in native mode.
MetaFrame 1.8 and XP servers do not
communicate with each other.
Consider this for fast growing installs, new
Windows 2000 rollouts, or multi-site scenarios:
 Requires additional hardware and licenses.
 Alternately, users may be manually migrated in proportion to
servers.
 MetaFrame XP apps are published manually rather than migrated.
 Publish MetaFrame XP and 1.8 apps to distinct user groups to
prevent redundant icons.
Migration Strategies – Mixed Mode
Rolling upgrade of existing MetaFrame servers
 Set during install of first server in the farm.
 MetaFrame XP and 1.8 farm names must match.
 MetaFrame XP server will win the ICA browser election
(except the MetaFrame 1.8 SP1 MB hardcode).
 Mixed Mode applies to all MetaFrame XP servers in the farm.
 Starts PN and ICA browser services on MetaFrame XP servers.
 Existing apps are migrated to the IMA data store (one time).
 Any appcfg changes made to MetaFrame 1.8 apps after
migration are not updated to the data store.
NFuse as a Bridging Technology
NFuse allows administrators to hide complexity
from the end user. For migration, it can be used
to present applications from an arbitrary number
of farms.
 MetaFrame XP for Windows
 MetaFrame 1.8 for Windows
 MetaFrame 1.1 for UNIX
 Multiple sites
Migration
Scenarios
Scenario 1: Single Site, Single Farm
Migration
High level steps (‘rolling’ upgrade):
1. Create IMA Data Store if necessary.
2. Upgrade a MetaFrame 1.8 server other than the ICA Master
Browser.
 Install in Interoperability Mode when prompted.
 Apply upgrade licenses to the MetaFrame XP farm.
3. Upgrade remaining servers.
4. Switch to MetaFrame XP Native Mode.
5. Enable/disable UDP browsing as needed.
Scenario 1: Single Site, Single Farm
Migration
Single farm migration tips:
 Avoid publishing new apps or changing app configuration while in
MetaFrame XP Interoperability Mode. If necessary, create/modify
apps in MetaFrame 1.8 first, then MetaFrame XP.
 Use NFuse and/or auto-client update to distribute new ICA clients.
Scenario 2: Multi-Farm Consolidation
High level steps:
 Ensure IMA server-to-server communication (default TCP 2512).
 Upgrade first farm (including switch to Native Mode) or build new
Enterprise MetaFrame XP farm (in Native Mode).
 Perform upgrades of other MetaFrame 1.8 servers (one farm at a
time), joining them to the ‘Master’ MetaFrame XP farm.
 Some manual cleanup of duplicate app names may be necessary.
Scenario 2: Multi-Farm Consolidation
Multi-farm consolidation tips:
 Key: managing user connectivity.
 If possible, use an NFuse portal pointing to multiple farms.
NFuse can play a HUGE role here!
 If using PN, add/change Application Set objects and server
location/browser type.
 If using a single published app, may need to modify server
location/browser type.
 If using ICA file(s), may need to modify server location/browser
type.
 Use NFuse and/or auto-client update to distribute new clients.
Useful Command
Line Utilities
Useful Command Line Utilities
 QUERY FARM (QFARM, replaces QSERVER)
 /APP Display app names and server load.
 /DISC Display disconnected session data.
 /LOAD Display server load.
 /PROCESS Display active processes.
 /ADDR Display address data on selected server.
 /TCP, /IPX, /NETBIOS Display protocol data.
 CLICENSE.EXE: Built in, useful for querying licensing information
on the farm.
 Add_and_activate
 Enumerate
 In_use
 Servers_using
Useful Command Line Utilities
 QUERYHR.EXE: From the Support directory on the MetaFrame XP CD,
useful for querying zone/DC info on the farm.
 -z Show all the available zones
 -h <zone name> Show all the hosts in a specified zone
 -l Show the Local Host Record
 -n <host name> Show the specified Host Record given a host name
 -I <Host ID> Show the specified Host Record given a host ID
 -N Show the farm name
 -d <Host ID> Delete an IMA Host Entry
Useful Command Line Utilities
 QUERYDS.EXE: From the Support directory on the MetaFrame XP CD,
useful in determining what servers are currently alive in a server farm.
 Usage: queryds /table:<tablename> [/query:<querystring>]
 Table names:
 SubscriptionTable
 ServiceTable
 PN_Table
 Conn_Sessions
 Disc_Sessions
Useful Command Line Utilities
 QUERYDC.EXE: From the Support directory on the MetaFrame XP CD,
useful for querying DC info and forcing ‘elections’.
 -z <zone name> Show Data Collector name
 -e Force Election
 -a Show data collectors for all zones
 QPRINTER.EXE: From the Support directory on the MetaFrame XP CD,
useful for viewing the printer replication queue and importing mapping
files into the DS.
 /REPLICA Display info about printer replication queue
 /IMAPRMAPPING <file name> Import mapping file into DS