In a Banking Data Center: From the Past to the Future
Bernd Bohne
Sparda-Datenverarbeitung e.G.
SAG UG NA - Conference Philadelphia 10/2006
sponsored by
Sparda-Datenverarbeitung eG
Oriented to the German market, SDV offers its services to:
28 bank companies with 7,900 employees
Total balance sheet of €66.2 bn
23,600,000 accounts
5,000,000 customers
ADABAS was introduced at SDV in 1987:
A new core-banking application was introduced
The batch timeframe was too small
VSAM files had to be closed while online was up (a CICS requirement)
Batch didn't fit in ...
Weekends were very short in those days ...
At that time there was only a single bank corporation in the 'data center'
Early shortcomings were ironed out and incremental progress was made to harden the application
The next years were very busy ...
The nagging question remained: how to deal with big bank companies?
Some of the tension disappeared by implementing ADABAS.
But wasn't the core-banking application VSAM-based?
SAG offered a migration tool called the ADABAS VSAM Bridge:
via SVC screening, requests were converted to ADABAS calls
ADABAS VSAM Bridge (AVB):
SVC 19 ('open') and SVC 20 ('close') are intercepted at CICS startup (PLT) and CICS shutdown
[Diagram: batch and CICS requests pass through SVC screening and the AVB transparency table, which routes each request to either the VSAM file or the ADABAS file]
Results of this move in general?
Batch could run in parallel to CICS
The data model could remain unchanged
Restart information could be used ('ET data')
What did this mean for SDV?
Time to fix the 'real' problems in batch processing
A new option became available: SQL ('native SQL')
New candidates were in the queue to join SDV:
4 Bavarian banks joined the data center
This was not just hosting; it was a migration of the whole banking application at those bank companies
German reunification happened in those days:
one bank, covering the whole East German market for this banking group, joined SDV
Several batch programs had to be optimized; using SQL was crucial
To put it in a nutshell:
There was no time to take care of other things: performance was again a big issue!
The really important problems had to be solved, and some of them looked similar to those before Adabas was implemented.
Get, Put, Update and Delete weren't the real McCoy, but that's all VSAM offers, and so does the AVB (converting the VSAM commands to L3, N1, A1 and E1).
It was time to start digging deeper: Adabas native SQL seemed to be the key to more performance!
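The AVB's conversion can be pictured as a simple lookup from VSAM request to ADABAS direct-call command code. A minimal Python sketch of the idea, with hypothetical names (the real bridge intercepts requests via SVC screening, not a dictionary):

```python
# Hypothetical sketch of the AVB idea: each intercepted VSAM request
# is mapped to the corresponding ADABAS direct-call command code.
VSAM_TO_ADABAS = {
    "GET":    "L3",  # read records in logical (key) sequence
    "PUT":    "N1",  # add a new record
    "UPDATE": "A1",  # update an existing record
    "DELETE": "E1",  # delete a record
}

def translate(vsam_request: str) -> str:
    """Return the ADABAS command code for an intercepted VSAM request."""
    try:
        return VSAM_TO_ADABAS[vsam_request.upper()]
    except KeyError:
        raise ValueError(f"no ADABAS equivalent for VSAM request {vsam_request!r}")

print(translate("get"))  # -> L3
```

The limitation the slide points out falls straight out of the table: only four basic operations are covered, which is all VSAM offers, so the more powerful ADABAS commands stay out of reach.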
A new era: Adabas native SQL
L9 ('histogram'), S1 ('find') etc. offered new, more efficient options for processing huge files
On the other hand, there had always been a 'data access layer' in the application programs, but with Adabas SQL the statements were implemented in the normal program code
The coexistence of the Adabas VSAM Bridge and SQL in one program (here: one CICS transaction) could lead to side effects
... but anyway, business was pressing, and besides, there were new bank companies in the queue to join SDV
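The efficiency gain of 'find' and 'histogram' can be shown in miniature: both resolve against the inverted list of a descriptor instead of scanning every data record. A toy Python illustration with in-memory stand-ins (hypothetical structures, not the Adabas API):

```python
# Hypothetical in-memory stand-ins for an Adabas file and one descriptor.
records = [{"isn": i, "branch": i % 5} for i in range(1, 101)]

# Inverted list: descriptor value -> list of ISNs (what S1 'find' consults).
inverted = {}
for r in records:
    inverted.setdefault(r["branch"], []).append(r["isn"])

def find_s1(branch):
    """S1-style find: resolve the search via the inverted list only."""
    return inverted.get(branch, [])

def histogram_l9():
    """L9-style histogram: descriptor values and their occurrence counts,
    read from the inverted list without touching any data record."""
    return {value: len(isns) for value, isns in sorted(inverted.items())}

print(len(find_s1(3)))    # 20 matching ISNs, no full scan
print(histogram_l9()[0])  # 20 records carry branch value 0
```

For huge files the difference is decisive: the data records are never read for either operation, only the much smaller index structures.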
1:30
What about Adabas transactions in CICS?
Syncpoint processing was realized via AVB code!
[Diagram: EXEC CICS SYNCPOINT drives (CICS) DFHSPP, which calls the AVB 'SPP' exit, which issues an Adabas ET ending transaction 1; EXEC CICS SYNCPOINT ROLLBACK likewise drives DFHSPP and the AVB 'SPP' exit, which issues an Adabas BT backing out transaction 2]
There was a very close link between CICS and Adabas using the Adabas VSAM Bridge!
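The coupling amounts to a tiny dispatcher: a commit syncpoint becomes an Adabas ET, a rollback becomes a BT. A hedged Python sketch of that mapping (hypothetical names; in reality CICS drives this via DFHSPP and the AVB 'SPP' exit):

```python
# Minimal sketch of the syncpoint coupling described above (hypothetical
# function names; not CICS or AVB code).
issued = []  # record of Adabas commands "sent" to the database

def adabas_call(command):
    issued.append(command)

def syncpoint(rollback=False):
    """Map a CICS syncpoint to the matching Adabas transaction command."""
    if rollback:
        adabas_call("BT")  # backout transaction: undo uncommitted updates
    else:
        adabas_call("ET")  # end transaction: commit and release record holds

syncpoint()               # EXEC CICS SYNCPOINT
syncpoint(rollback=True)  # EXEC CICS SYNCPOINT ROLLBACK
print(issued)  # -> ['ET', 'BT']
```

This one-to-one mapping is also why removing the AVB later raised the question on a following slide: without it, who issues the ET/BT at syncpoint time?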
Dawn of realization: there are limits in VSE
More bank companies intended to join SDV
Logically, VSE and VM seemed to be no longer the right platform
A migration project to MVS was started
There was no time to fix additional things: the migration was again an exhausting task!
The year of consolidation, 1996 ...
After such a long period of time, a year of consolidation was announced ...!
It turned out to be one of the busiest years in the history of the company:
2 bank companies asked to join SDV; who wants to say no!
But anyway, there was still light on the horizon: let's do it next year!
A completely new banking group decided in favour of SDV ...
Automatic teller machine technology required consolidating files:
[Diagram: Bank A with database A, Bank B with database B, Bank C with database C as isolated silos]
From the infrastructure point of view, each bank company sat in its own isolated environment!
Advantage and disadvantage: no data was shared!
ATM technology required consolidating files ...
[Diagram: Bank A/DB A, Bank B/DB B and Bank C/DB C each copying daily into a shared ATM file]
Does copying specific files every day really make sense?
ATM technology required consolidating files ...
[Diagram: Bank A/DB A, Bank B/DB B and Bank C/DB C alongside a new shared database DB X holding the ATM copy file]
By the way: ATM means 24 by 7. Project 'Sysplex 2000':
A 2nd data center was implemented
DASD mirroring ('PPRC') was established
Network components like IP were separated into 2 'Net LPARs', and two production LPARs were established
The whole system architecture was re-engineered
Only redundancy could meet the much higher requirements of our customers
Also in those days: the first gateways to the Internet were established.
The 'big picture':
[Diagram: Internet -> WebSphere (blades/Linux) -> CICS Transaction Gateway -> routing CICS -> CICS Bank A, CICS Bank B, CICS Bank C -> DB A, DB B, DB C, plus the shared DB X]
A weak point: DB X remains a 'single point of failure'
A new ATM application was implemented:
The ATM front end (CICS-/DB2-based) was implemented
DB2: isolated in a 'small' LPAR after changing to WLC software pricing (saved approx. €10k per month)
This step was also necessary to bring DB2 with data sharing into an affordable cost range
Front end and core banking application were linked via DPL
And the application? The 'MBS-new' project started!
The core banking application ('modular banking system') was redesigned
A new data access layer was established (incl. native SQL)
Data access was established per file in 'server modules'
An XML interface was established
CICS shared getmain storage was removed (CICSplex ...!!!)
The AVB was removed (how do you synchronize?)
New requirement: why is it not possible to update more than one database?
('Modification requests' were rejected by a user exit at the link interface.)
Synchronization and two-phase commit:
The synchronization of the AVB was replaced by the ADABAS Resource Manager Interface (RMI) via a Task Related User Exit (TRUE).
Allowing updates on multiple ADABAS databases means you need two-phase commit, and that means the Adabas Transaction Manager (ATM).
Version 1 had typical first-generation shortcomings; nowadays Version 7.5 (at SDV currently 7.5.1) runs stable and with very good performance.
But, step by step, the picture became more sophisticated ...
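Two-phase commit itself is easy to sketch: a coordinator first asks every participant database to prepare, and only commits globally if all vote yes; one no vote forces a global backout. A minimal Python illustration (hypothetical classes, not the ATM product):

```python
# Hypothetical two-phase-commit coordinator sketch (not the real ATM).
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"

    def prepare(self):
        """Phase 1: vote yes only if the local work can be made durable."""
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"    # in Adabas terms: ET

    def rollback(self):
        self.state = "rolled-back"  # in Adabas terms: BT

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):  # phase 1: collect votes
        for p in participants:
            p.commit()                          # phase 2: global commit
        return "committed"
    for p in participants:
        p.rollback()                            # phase 2: global backout
    return "rolled-back"

dbs = [Participant("DB-A"), Participant("DB-B")]
print(two_phase_commit(dbs))  # -> committed
```

The coordinator is exactly the piece the slides keep returning to: whoever holds this role (here the ATM) becomes critical infrastructure, which is why its own clustering is discussed below.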
1:45
Overview of ADABAS, AFP/COR and ATM:
[Diagram: Data Center 1 hosts application LPAR P82 with CICS 82, ADA 82, AFP82/COR (daemon mode) and ATM P582ATM; Data Center 2 hosts application LPAR P96 with CICS 96, ADA 96, AFP96/COR (daemon mode) and ATM P596ATM; the LPARs are reached via networks 82 and 96 and linked by XCF, with DASD mirrored via PPRC]
Centralized databases:
[Diagram: application LPARs P82 (Data Center 1) and P96 (Data Center 2), each with its own CICS and ADA, both also sharing one central ADA]
Currently, unfortunately, a 'single point of failure'
Clustered databases in general (next steps):
[Diagram: application LPARs P82 and P96, each with CICS, accessing a cluster of ADA nuclei spread across both data centers and coupled via XCF]
The future option to reduce risk by eliminating the 'single point of failure'.
Clustered databases in general and ATM (ATM751):
[Diagram: application LPARs P82 and P96, each with CICS, ATM 82 and ATM 96, and clustered ADA nuclei coupled via XCF]
The ATM 751 is clearly a new 'single point of failure': there are two linked ATMs, but each one is unique because it is 'LPAR-bound'.
Clustered databases in general and ATM (V8/CE/future?):
[Diagram: application LPARs P82 and P96, each with CICS and ATM, forming a clustered ATM across both data centers, with clustered ADA nuclei coupled via XCF]
The next step in the evolution should be to create an 'ATM cluster' in full data sharing mode with automatic failover in case of an outage of one of the ATM tasks.
Current Infrastructure (Mainframe)
2x z9 (IBM 2094-705) - each 64 GB
2x z890 (IBM 2086 / 2 Engines CF / each 16 GB)
Coupling Links (XCF): ISC-D (Card) and ISC-3 (Port)
2x HDS USP (USP600 & USP100, each partitioned)
2x HDS 9970 („MF only“)
DASD: direct-attached FICON
OS: z/OS 1.8 and z/VM 5.2
CF: CC14
Tape: STK SL8500/9940B and STK 9310 ('Powderhorn')/9840, both at each data center; Titanium 10,000 and FSC CentricStor
Current infrastructure, XCF/coupling links:
[Diagram: Data Center 1 and Data Center 2, 1.6 km apart, each housing an IBM z890 and an IBM z9; the machines are connected by ISC-D / ISC-3 fiber links (8x, 6x, 6x, 6x, 6x)]
Level and configuration of ADABAS and add-ons:
ADABAS: ADA743 + LX08 (ATM)
ADABAS Fastpath: AFP742 (global buffer at both data centers)
ADABAS Transaction Manager: ATM751 (incl. CICS RMI)
ADABAS System Coordinator: COR742 (daemon mode)
XCF request time (hardware):
Started with 'shared' CPUs: 300 - 400 microseconds*
'Dedicated' CPUs: 20 - 80 microseconds*
Influence of ISC-D and ISC-3 (interface/port)
*) Overall average, not ADABAS data-sharing-specific; environment: GRS Star, CICS TS 2.3, z/OS 1.6
Our current status ('ADABAS data sharing'):
Since the end of 06/2006 we have had all ADABAS environments in the test sysplex in a data sharing configuration
We started with 'less important' environments in production in 07/2006 with ALS
If no further obstacles appear, we will finish the production rollout in 2006, at the latest by the end of Q2 2007
Currently we see no influence of ALS on AFP/COR or ATM (apart from necessary configuration changes)
Stress test of the core banking application with data sharing (Nov. 18th, 2005):
Response times of file requests were behind expectations; reason: shared CPUs (see 'XCF request time')
Target CICS transaction rate: 4,000 - 5,000 txns per minute; the current maximum is up to 3,000 txns per minute per CICS, as the 'workload driver' could not generate the needed peak
No outages or malfunctions were detected in the SAG components!
2:00
Experiences with ADABAS Cluster Services ('framework'):
How to control two or more DBMS for one database ('system automation')
You need more DASD space for the DBMS and PLOG (2PC: Work part 4)
A minimum of 1 XCF is recommended (but: 2 XCFs were cheaper in our case)
A technological backlevel of the XCF is highly likely (... or you modernize the 'application' mainframes and the XCF at the same time; costs/budget!?)
Experiences with ADABAS Cluster Services (ADABAS):
Dedicated XCF CPUs are crucial for data sharing in general
The storage really needed in the XCF is currently not clear
Usage of the fastest available coupling links is recommended
Changed PLOG processing (PLOG merge, PLOG merge data set)
ALS-specific changes (maintenance) were detected several times (currently several ADABAS load modules in the LX08 load library)
Currently there is no common view for Review, AOS, ASF and CLOG (in the case of CLOG: a merge is necessary)
Modernization of the application:
[Diagram: the core banking application runs in CICS and is reached via an XML interface and web services, sitting on top of a data access layer (SQL)]
Modernization of the application: establish a CICSplex
[Diagram: Net LPARs P94 and P96 carry IP traffic into TOR/CICS regions; CPSM routes the work into application LPARs P82 (Data Center 1) and P96 (Data Center 2), each with AOR/CICS, ATM 82 resp. ATM 96, and clustered ADA nuclei coupled via XCF]
Infrastructure: fully clustered environment
[Diagram: the same topology, now with the ATMs clustered as well: Net LPARs P94/P96 with IP, TOR/CICS and CPSM; application LPARs P82/P96 with ATM, AOR/CICS and clustered ADA nuclei coupled via XCF]
CICSplex:
Remove application system affinities
Share common data outside CICS: CICS file-owning region or 'shared database'
Establish dynamic routing: CPSM (CICSplex System Manager), including workload balancing
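Dynamic routing boils down to picking a healthy, lightly loaded AOR for each incoming transaction once affinities are gone. A toy Python sketch of that selection (hypothetical region names and a far simpler algorithm than CPSM actually uses):

```python
# Toy sketch of dynamic transaction routing (hypothetical; CPSM's real
# algorithm also weighs region health history, affinities and abend rates).
regions = {"AOR82": {"active": 12, "healthy": True},
           "AOR96": {"active": 7,  "healthy": True}}

def route(regions):
    """Pick the healthy AOR with the fewest active tasks."""
    candidates = {n: s for n, s in regions.items() if s["healthy"]}
    if not candidates:
        raise RuntimeError("no healthy AOR available")
    name = min(candidates, key=lambda n: candidates[n]["active"])
    candidates[name]["active"] += 1  # account for the routed transaction
    return name

print(route(regions))  # -> AOR96 (lower task count)
```

The prerequisite in the slide is the interesting part: routing any transaction to any AOR only works once shared getmain storage and other affinities have been removed from the application.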
Business continuity: Freeze and HyperSwap (GDPS*):
*) Geographically Dispersed Parallel Sysplex
[Diagram: the clustered environment (Net LPARs P94/P96, application LPARs P82/P96 with ATM, TOR/AOR CICS regions and clustered ADA over XCF) survives a DASD failure in Data Center 2: 1. Freeze (suspend I/Os), 2. HyperSwap (UCB switch)]
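The recovery sequence is a small state machine: first suspend all I/O so no write can land on a half-mirrored volume, then switch the unit control blocks to the secondary copies and resume. A hedged Python sketch of that two-step idea (hypothetical device names, not GDPS code):

```python
# Hypothetical sketch of the freeze-then-swap sequence described above.
def freeze_and_hyperswap(volumes):
    """volumes: list of dicts with 'primary' and 'secondary' device names."""
    log = []
    for v in volumes:                 # 1. Freeze: suspend I/O on every pair,
        v["io_suspended"] = True      #    guaranteeing a consistent mirror
    log.append("freeze: all I/O suspended")
    for v in volumes:                 # 2. HyperSwap: the UCB now points at
        v["primary"], v["secondary"] = v["secondary"], v["primary"]
        v["io_suspended"] = False     #    the mirror; resume I/O there
    log.append("hyperswap: UCBs switched, I/O resumed")
    return log

vols = [{"primary": "D1.0820", "secondary": "D2.0820"}]
freeze_and_hyperswap(vols)
print(vols[0]["primary"])  # -> D2.0820
```

The ordering is the whole point: freezing everything first is what keeps the secondary copies consistent across all volumes before any of them takes over.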
WebSphere / CTG / CICS:
[Diagram: the WebSphere Portal (Linux on blades) connects to the CTG (CICS Transaction Gateway; Linux on System z) via HiperSockets, so sniffing is impossible; the CTG feeds a routing CICS, which drives the application CICS (both z/OS on System z)]
2:15
Linux in a high availability environment:
Started in 2004 to consolidate Linux servers on the mainframe
OS: z/VM (currently z/VM 5.2)
Hosting: Tivoli Storage Manager (TSM), database servers (DB2 UDB and Oracle), CICS Transaction Gateway
Currently 2 x 2 IFLs (Integrated Facility for Linux)
System Automation for Multiplatforms (SAfMP) contains an agent to plug z/VM, including Linux, into GDPS (Freeze/HyperSwap)
Overview: Linux on z
Production: 13 servers
Integration/Q&A: 9 servers
Test and education: 5 servers
Advantages:
• Consolidation in general (administration)
• Higher usage of installed resources
• PPRC protection (business continuity)
• DR (CBU Capacity BackUp and GDPS)
• Significant savings in license fees/S&S
Our goal: as close as possible to 100 % availability*
Application: CICSplex dynamic routing/processing (DTR)
Data: data sharing; disaster recovery via Freeze & HyperSwap (GDPS)
*) To deal with planned/unplanned outages and flexible workload balancing
Questions?