Primary Storage Manager


Simply Powerful – Informix 12.10.xC1 thru xC4
Scott Pickett
WW Informix Technical Sales
For questions about this presentation, email to: [email protected]
© 2014 IBM Corporation
Foreword
 The multitenancy portion of this presentation is thru 12.10.xC5; it did not make
sense to split multitenancy into two different presentations.
 The Time Series and JSON stories are quite large for 12.10 and as
such are split off separately in their own presentations.
 Pricing and Packaging will be found in its own presentation on the
CTP web site.
 As of June 23, 2016, the current release is 12.10.xC7
– Next release is 12.10.xC8 and is due November 2016.
– There are sub-releases of 12.10.xC7 due in September 2016.
• These will have Encryption at Rest as a main feature
Agenda
 Autonomics & Server Admin. Improvements
 Primary Storage Manager
 Multitenancy
 REST
 Table Compression features
 Index Compression features
 New SQL
 Informix Warehouse Accelerator
 HA, ER, SDS, Flexible Grid, Connection Manager
 Spatial Enhancements
Informix Dynamic Server - Autonomic Tuning &
Server Admin Improvements
Overview
 Problem Statement:
– We want the Informix database server to be as invisible as possible.
– After the initial install of IDS and uptime, we need the server to be able to adapt
to any workload that comes its way.
– Prior to 12.10.xC2, several key server resources were not dynamic
• This could and did lead to artificial performance ceilings, and an increase in
downtime to fix, or degradation in performance until the issue was detected by a
human.
 Not everyone knows how to detect these issues, and even if they do, getting the
downtime from management is always an issue.
 The improvements are in several different categories…..
Requirement Categories
 Physical Log
 Logical Log
 Bufferpool
 CPUs
 These will follow similar rules guiding automatic tuning:
– How to detect when insufficient resources for each are causing a performance
impact
– How to increase the resource with minimal impact to the application
– How to control the increase of the resource so it adapts in a timely manner…
not too much, not too little, just right
Physical Log Expansion
 Purpose:
– To help avoid the performance loss associated with an undersized physical log,
which causes frequent server checkpoints that can block user transactions, and
which often sits in an already constrained storage space shared with other
objects, such as database and server objects.
 Expansion
– We support the changing of the physical log location already.
– We want the physical log to be the sole occupant of the space; nothing else
hindering its successful expansion and requested size:
• A new single Physical Log Storage Space (PLOGspace) per instance,
• Has a single physical chunk containing only the server physical log with the physical
page size of the host O/S.
 Physical log expansion is triggered by any of the following
– A checkpoint blocked user transactions because the physical log is too small.
– 5 checkpoints triggered by physical log activity:
• Since the last time the physical log was extended.
• And within the last 20 checkpoints.
Physical Log Space Creation – onspaces
 General Syntax:
– onspaces -c –P <name> -p <path> -o <offset> -s <size> [-m <mpath>
<offset>]
 Examples:
– onspaces -c –P plogdbs -p /dev/chk1 -o 0 -s 40000
– onspaces -c –P pdbs1 -p /dev/pchk1 -o 500 -s 60000 -m /dev/mchk1 0
 Note
– If a PLOGspace already exists, creating a new one will:
• Create the new PLOGspace,
• Move the physical log to that space,
• Drop the old PLOGspace,
• All of the above done automatically.
Physical Log Space Creation – SQL Admin API
 Creating a PLOGspace with SQL Admin API
 General Syntax:
– create plogspace, <name>, <path>, <size>, <offset>, <mir path>, <mir
offset>
 Examples:
– execute function task("create plogspace","plogdbs","/dev/chk1",30000,0);
– execute function task("create plogspace","plogdbs","/dev/chk1",30000,0,"/dev/mchk1",0);
Physical Log Improvements - onspaces
 Dropping a PLOGspace with onspaces
– The same space-dropping command (onspaces –d) may be used to drop this
new type of space,
– Only if the space is empty (53 pages, oncheck –pe to verify)
 General Syntax:
– onspaces -d <PLOGspace name> [-y]
 Example:
– onspaces -d plogdbs -y
Physical Log Improvements – SQL Admin API
 Dropping a PLOGspace with SQL Admin
– One must use a new “drop plogspace” command with SQL Admin API.
• “drop dbspace” will return an error.
– “drop plogspace” takes no arguments, since there can be at most one
PLOGspace in the instance.
• The PLOGspace must be empty to be dropped.
 General Syntax:
– drop plogspace
 Example:
– execute function task("drop plogspace");
Warnings on Insufficient PLOGspace
 In the message log file:
– Thru the onstat –m command line utility:
• You will see a message saying “Physical Log too small ……”
 Thru onstat –g ckp utility:
– Look for a warning at the top about the physical log being too small or a
“Trigger” of “Plog” with “Block Time” > 0.0
– The presence of long transaction rollbacks: a “Trigger” of “LongTX” with “Block
Time” > 0.0 can indicate a too-small physical log or too little logical log space.
Logical Log Improvements
 Purpose
– To improve performance, having sufficient logical log space helps prevent
frequent checkpoints from blocking transactions and also from having long or
too frequent checkpoints.
– Knowing an appropriate amount of logical log space for a dynamic workload is
difficult to supply in advance without first knowing and estimating the
transaction load.
– Configuring a server with too much logical log space in advance can limit the
out-of-box experience by requiring substantial disk space.
– We will therefore allocate logical log space dynamically as needed, making it
dynamically tunable.
 Detection
– The best way to detect if the logical log is a bottleneck is to look at the recent
checkpoint activity:
• The server keeps track of the last 20 checkpoints.
• If a substantial portion of the recent checkpoints were triggered because of the logical
log, then we should increase the logical log space by adding another logical log.
• If a long transaction or checkpoint blocking occurs because of the logical log, we
should also add another logical log to the log space.
Logical Log Improvements
 Expansion
– We already have the adding logical logs dynamically feature:
• DYNAMIC_LOGS 2 in the configuration file defined by $ONCONFIG environment
variable.
 Typically executes only when the server runs out of logical log space.
• The server already contains the logic to extend a dbspace while adding a logical log.
– There is a new, additional Logical Log expansion parameter in the configuration
file called AUTO_LLOG
• Enables the automatic addition of logical logs when the database server detects that
adding logical logs improves performance.
• If enabled, log expansion occurs:
 When 5 of the last 20 checkpoints were caused by logical logs filling up
 When a logical log causes a blocking checkpoint
 When a logical log causes a long checkpoint
• AUTO_LLOG & DYNAMIC_LOGS do not interact
Logical Log Improvements
 Configuration
– The new AUTO_LLOG configuration parameter specifies
• Dbspace for new logical log files
• Maximum size of all logical log files.
 Estimation of maximum logical log space requirements:
– Depending on the number of concurrent users accessing your database
server…….
• 1 - 100:          200 MB
• 101 - 500:        500 MB
• 501 - 1000:       1 GB
• More than 1000:   2 GB
AUTO_LLOG – Syntax
 onconfig.std value 0
 By default this is 0, disabled.
 AUTO_LLOG 1, dbspace_name, max_size
– 1 - Add Logical logs when needed to improve performance.
– dbspace_name - dbspace name in which to add logical log files.
• This dbspace must have the default page size for the operating system.
– max_size = Optional. Default is 2048000 KB (2 GB).
• The maximum size in KB of all logical log files, including any logical log files that are
not stored in the dbspace dbspace_name.
• If max_size is not specified, the AUTO_TUNE_SERVER_SIZE configuration
parameter setting affects the maximum size.
– Takes effect:
• After you edit the onconfig file and restart the database server.
• Reset the value dynamically in your onconfig file by running the onmode -wf
command
• Reset the value in memory only by running the onmode -wm command.
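 For example, a hedged onconfig sketch (the dbspace name llogdbs is an
assumption, not from this material):
AUTO_LLOG 1,llogdbs,2048000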
AUTO_LLOG – Syntax Notes
 When the maximum size of the logical log files is reached, logical log
files are no longer added to improve performance. However, if the
DYNAMIC_LOGS configuration parameter is enabled, logical logs are
added to prevent transaction blocking.
 The settings of the DYNAMIC_LOGS and the AUTO_LLOG
configuration parameters do not interact.
 If the value of the max_size field is larger than the size of the
specified dbspace, enable automatic expansion of storage spaces.
Warnings on Insufficient Logical Logs
 In the message log file:
– Thru the onstat –m command line utility:
• You may see unusual numbers of logical logs have been added.
 Thru onstat –g ckp utility:
– Look for a “Trigger” of “Llog” with “Block Time” > 0.0
Bufferpool Added Capabilities
 Dynamic Buffer pools have been requested for a while by our users.
 We will therefore add the ability to dynamically extend a buffer pool
and do so automatically, which will allow us to start with relatively
small pools and increase the size of the pools only as needed.
 This will improve performance in many cases by getting the database server the
memory resources it needs, if available, much sooner, in near real-time, rather
than after the fact, which has been the norm.
BUFFERPOOL Configuration Parameter Expansion
 Three goals:
– Use a new memory format method of allocating pool memory:
• Specifying buffer pool size in bytes (Kb, Mb, Gb) units.
– Marking the pool as extendable.
– Maintain the legacy format method of BUFFERPOOL for current user
compatibility:
• Specifying buffer pool size and its limits in pagesize units.
 Some BUFFERPOOL arguments are now incompatible with others.
 Must follow rules to properly set this parameter:
– Order of precedence for all BUFFERPOOL definitions:
• All definitions shall have the same format (all legacy or all memory) or the server will
not boot.
• If an argument’s value is not defined, we use the value from the default
BUFFERPOOL setting.
• If the default BUFFERPOOL setting is not defined, use the internal definition for the
format being used.
 If unable to determine the format (ex. BUFFERPOOL size=2k), use the internal
format based on any other BUFFERPOOL definitions present; if none, use the
internal legacy format.
Bufferpool Added Capabilities
 Detection
– We want to add more memory/buffers to a buffer pool only when the working
set doesn’t fit.
– To determine when to extend the buffer pool, we keep track of the cache hit
ratio of each buffer pool by sampling the cache hit ratio once a minute for 5
minutes.
– If the average cache hit ratio is less than a configured threshold, we extend the
buffer pool.
 Extension
– In order to support buffer pool extension, we need to move buffer pools into
their own shared memory segments.
 Shared Memory in Informix can now have 5 possible segments
– Resident
– Virtual
– Message
– Bufferpool *** NEW ***
– Extended
Bufferpools
 Bufferpool Segments
– Buffer pools will no longer be allocated within the resident segment.
– We will add a new IDS memory segment class that will contain only buffer pool
memory structures and adhere to the RESIDENT flag.
– The buffer pool class segment will have virtually no overhead (bitmap or TTree)
for maintaining the segment. This will allow for maximum memory usage.
 Bufferpool Extending for Legacy Format
– The BUFFERPOOL configuration parameter’s legacy “buffers” format uses
these attributes:
• buffers – starting number of buffers
• next_buffers – is the number of buffers to add when the buffer pool is extended and
will be doubled every 4th extension *.
• max_extends – maximum number of extensions the buffer pool is allowed
* (See the onstat –g buf slide example further below.)
Bufferpools (cont’d)
 Bufferpool Extending for Memory Format
– The BUFFERPOOL configuration parameter’s new “Memory” format uses
these attributes:
• memory – target amount of memory that a buffer pool can grow to. Values will be
rounded up to 32mb alignments.
• start_memory – starting amount of memory
– Both memory and start_memory values can be expressed in a variety of
units.
– For example:
• BUFFERPOOL size=4k,start_memory=32mb,memory=54gb
• BUFFERPOOL size=8k,start_memory=48000,memory=204mb
BUFFERPOOL Configuration Parameter Expansion
 A BUFFERPOOL definition can NOT include aspects of both formats;
either it’s expressed in legacy format or memory format.
 A BUFFERPOOL definition with mixed formats is rejected and the
database server will not start.
 The charts on the next several slides describe each argument to the
BUFFERPOOL parameter:
– both = works with legacy & memory format.
– legacy & memory are mutually exclusive.
BUFFERPOOL Argument Compatibility Chart
Argument         Format  Dynamic    Values
size             both    no         Required. Default, 2k, 4k, ...
lrus             both    no         Optional. Proposed change of internal default
                                    from numcpuvps to 4 * physical processors on
                                    the box.
lru_min_dirty    both    yes, auto  Optional. Internal default = 50
lru_max_dirty    both    yes, auto  Optional. Internal default = 60
extendable       both    no         Optional. 0=FALSE, 1=TRUE. Determines if the
                                    bufferpool is extendable or not. Internal
                                    default: legacy = 0, memory = 1
cache_hit_ratio  both    yes        Optional. Min=0, Max=100, default = 90. Only
                                    applies if the bufferpool is extendable
BUFFERPOOL Argument Compatibility Chart (cont'd)
Argument      Format  Dynamic                     Values
buffers       legacy  No                          # of buffers. Initial # of buffers in
                                                  the 1st segment. Min=MIN_BUFFERS(1000),
                                                  Max=MAX_BUFFERS (based on virtual
                                                  memory). Internal default=1000
next_buffers  legacy  Yes; won't impact existing  Requires extendable=1, otherwise
                      segments or growth          ignored. Min=MIN_BUFFERS(1000),
                      projections, only the       Max=MAX_BUFFERS (based on virtual
                      next segment                memory). Internal default=1000.
                                                  Specifies amount of growth of the next
                                                  bufferpool segment.
max_extends   legacy  No                          Optional. Requires extendable=1,
                                                  otherwise ignored. Maximum # of
                                                  segments the bufferpool can expand to.
                                                  Min=0, Max=MAX_BUFFSEGS (platform
                                                  specific: 64bit=24, 64bit NT=8,
                                                  32bit=16). Internal default=8
BUFFERPOOL Argument Compatibility Chart (cont'd)
Argument      Format  Dynamic  Values
start_memory  memory  No       Optional for new format. Determines size of the
                               initial bufferpool segment. Min=32Mb, Max=amount of
                               free memory, auto=IDS determines starting memory.
                               Internal default=auto
memory        memory  Yes      Required for new format. Determines max size the
                               bufferpool can grow to. Min=32Mb, Max=?, auto=IDS
                               determines maximum amount of memory to use.
                               Internal default=auto
BUFFERPOOL examples
 BUFFERPOOL
size=2k,extendable=1,buffers=1000,next_buffers=2000,max_extends=8
– Create a 2k pagesize buffer pool.
– The buffer pool is extendable up to 8 times.
• The 1st 3 extensions will be of 2000 buffers,
• Next 4 will be of 4000 buffers and
• The 8th extension will be of 8000 buffers.
 BUFFERPOOL size=10k,start_memory=auto,memory=4gb
– Create a 10k pagesize buffer pool.
– The buffer pool will be extendable and have a max size of 4gb.
– IDS will determine the initial # of buffers and the extension sizes.
– Note: when using legacy format, it is possible to ask for N buffers and get slightly
more or fewer depending on memory alignment. We allocate extra memory to
allow for alignment and then consume all of the memory allocated.
onstat –p – Monitor your cache hit ratio
 If the %cached for reads is less than 95%, it is usually a good bet that you are
doing too much disk I/O and not enough reading from memory, which affects
performance; this is usually because there are not enough memory buffers to cache
all of the data.
onstat –g seg – Memory Segments in Use
 As before, the ‘class’ column indicates the type of memory segment,
the ‘B’ within the column indicating the new ‘Buffer pool’ segment, and
as before: ‘R’ indicating resident and ‘V’ indicating virtual segments.
onstat –g buf – Monitor Buffer Pools – Memory Format
 Produces output per bufferpool in use.
 Will report either format in use.
onstat –g buf – Legacy Format
 Really poor read performance here (89.69%): ovbuff and Fg Writes have the same
number (24630), indicating that the server was out of buffers in the bufferpool,
in this case 24,630 times. Solution: add buffers to the bufferpool.
 In addition to the Read Cache triggering additional BUFFERS, Fg_Writes > 0 will
also automatically trigger additional BUFFERS.
VPCLASS
 This configuration parameter is now expanded to use with cpu class
to make the cpu class extendable.
 VPCLASS class,aff=range,noage,num=N,autotune,max=N
– autotune is optional.
• When set, IDS will determine when to add another VP of class.
– max is the maximum number of CPU VPs we can extend to.
 When using autotune, affinity will not be used for any cpu vps added
as part of automatic tuning.
 This is again for performance reasons: adding an additional cpu vp dynamically
thru autotune makes more resources available, so transactions complete quicker
because there is less likelihood of having too little cpu to complete a given
operation quickly.
– Manual intervention is likely to be not as quick in many such cases.
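 For example, a hedged onconfig sketch that starts with 4 CPU VPs and lets the
server grow to at most 8 (values are illustrative):
VPCLASS cpu,num=4,autotune,max=8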
CPU VP’s
 Detection
– CPU VPs are added whenever the average ready Q count across all the CPU
VPs for the last minute exceeds 4.
– If the average depth of the ready Q is >= 5, we add another CPU VP up to a
maximum of the number of physical processors on the box.
– Monitoring the ready queue (onstat –g rea, large number of threads sitting
waiting), and the global usage of the CPU virtual processors (onstat –g glo,
the eff column always at 100).
 Configuration
– A new argument to the VPCLASS configuration parameter format has been
added: autotune.
– This argument will be processed only for the CPU and AIO vp classes. For
more info see VPCLASS below.
 Expansion
– Internal procedure to add a cpu vp.
– See Appendix A for Event Alarm relevant to this.
onstat –g rea – Threads Ready to Run
onstat –g rea
IBM Informix Dynamic Server Version 12.10.FC3 -- On-Line -- Up 2 days 23:53:12 -- 240192 Kbytes
Ready threads:
tid    tcb      rstcb    prty   status   vp-class   name
6      536a38   406464   4      ready    3cpu       main_loop()
28     60cfe8   40a124   4      ready    1cpu       onmode_mon
33     672a20   409dc4   2      ready    3cpu       sqlexec
onstat –g cpu
onstat –g glo – Monitor your CPU usage
AIO VP with AUTOTUNE
 Use a VPCLASS configuration parameter entry for the AIO virtual
processor class to specify an exact number of AIO virtual processors
or to enable the database server to add AIO virtual processors as
needed.
 Again, this is done for performance; it is sometimes hard for a DBA to know when
an AIO VP might be needed and where to look to get this information.
 On operating systems that do not support KAIO, the database server uses the AIO
class of virtual processors to perform database I/O that is not related to
physical or logical logging.
– The database server uses the CPU class to perform KAIO for database I/O
when KAIO is available on a platform.
– If the database server implements KAIO, a KAIO thread performs all I/O to raw
disk space, including I/O to the physical and logical logs.
AUTO_TUNE_SERVER_SIZE
 Configuration parameter used to size the buffer pool when automatic tuning is
not possible.
 When BUFFERPOOL:memory is set to auto, we set memory using
AUTO_TUNE_SERVER_SIZE.
– Sets internal structures to assume that’s how much memory will get used.
– Therefore, it is possible to exceed these amounts.
 AUTO_TUNE_SERVER_SIZE [OFF | SMALL | MEDIUM | LARGE |
XLARGE]
– OFF      use 10% of available memory
– SMALL    use 10% of available memory
– MEDIUM   use 20% of available memory
– LARGE    use 33% of available memory
– XLARGE   use 50% of available memory
AUTO_TUNE_SERVER_SIZE
 This is set only when you install the product and request, as part of
the installation process, that a server instance be created:
– It sets the sizes of memory and storage spaces to allocate based on the
number of expected concurrent users provided by the user during the install.
• SMALL = 1 - 100 users
• MEDIUM = 101 - 500 users
• LARGE = 501 - 1000 users
• XLARGE = more than 1000 users
 The setting affects the following:
– The size of the buffer pool.
– The maximum size of logical log files (from 200 MB up to 2 GB) before the
server stops automatically adding logical logs to improve performance
– The initial size (from 50 MB to 500MB) of the following created storage spaces,
which are created automatically during installation:
• An extendable plogspace for the physical log
• A dbspace for the logical log
• Dbspaces for databases and tables
• A temporary dbspace
• An sbspace
• A temporary sbspace
AUTO_TUNE_SERVER_SIZE
 This parameter can be used even if you did not create a server during
installation, or if you change its value after you initialize the server
for the first time.
 In this case, the new value affects the size of only the following
properties:
– The size of the buffer pool, if the BUFFERPOOL configuration parameter
setting includes the memory='auto' option.
– The maximum size of all logical log files before the server stops automatically
adding logical logs to improve performance.
 Auto-instance-creation with the physical log, logical log and smart
large object spaces, and 2 tempspaces all in their own separate
dbspaces, along with the separately created rootdbs, will lead to less
down time, fewer configuration steps, and increased ‘out-of-the-box’
experiences for all users, from the very first initial install.
 At this point, all they have to do is create databases. But where?
AUTOLOCATE Configuration Parameter
 This controls the dbspace location of user defined databases, tables,
and indexes and the automatic fragmentation of tables.
 If 0 (default), this is disabled:
– New databases are still created in the rootdbs by default
– New tables and indexes are still created in the same dbspace as the database
 Set from 1 to 32, enables automatic location and fragmentation:
– Indicates how many round-robin fragments to initially allocate to a table.
 Stores new databases, tables, and indexes in server defined optimal
dbspaces:
– By default, all dbspaces are available.
– Users can control the list of available dbspaces.
– Fragments new tables by round-robin, where the number of fragments is equal to
the value of the AUTOLOCATE configuration parameter.
– Adds more table fragments as the table grows.
– An “in dbspace_name” clause overrides automatic placement
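 For example, a hedged onconfig sketch enabling the feature with four initial
round-robin fragments per new table:
AUTOLOCATE 4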
AUTOLOCATE Configuration Parameter
 If enabled, you can use the autolocate database arguments with the
admin() or task() function to
– Manage the list of dbspaces for automatic location and fragmentation:
• The list of available dbspaces is in the sysautolocate system catalog table.
– Disable automatic location and fragmentation for the specified database.
 You can use the AUTOLOCATE environment option of the SET
ENVIRONMENT statement of SQL to enable or disable the value of
the AUTOLOCATE configuration parameter for a session.
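– For example, a hedged sketch (values are illustrative; following the
configuration parameter, 1 to 32 enables and 0 disables):
SET ENVIRONMENT AUTOLOCATE '4';
SET ENVIRONMENT AUTOLOCATE '0';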
 Users can still override automatic server-based allocation of location by using
the “in dbspace” clause of the create database, create table, and create index
statements
AUTOLOCATE
 It is anticipated that this will lead to fewer outages caused by user-defined
databases, tables, and indexes being stored in the rootdbs and causing that space
to fill up completely (and stop the instance) or extend/expand unnecessarily
beyond its original size.
– This has been a long standing user issue and seen in many places.
 This is a good feature.
 Stands alone as a feature. In combination with the storage pool, if configured
and administered properly initially, it will be very difficult to run out of
space.
 Best practice: use it with the storage pool feature implemented.
SQL Admin API / onmode –wf/wm
 Turn off autolocation for the stores database:
– execute function task("autolocate database off", "stores");
 Turn on autolocation for the stores database:
– execute function task("autolocate database add", "stores", "datadbs1");
– execute function task("autolocate database add", "stores",
"datadbs1,datadbs2,datadbs3,datadbs4");
– execute function task("autolocate database anywhere", "stores");
 Remove a dbspace (here datadbs1) from the list of available dbspaces for the
stores database:
– execute function task("autolocate database remove", "stores", "datadbs1");
 onmode –wf/wm
Other Server Improvements
 Improve performance with private memory caches
 Default mode for VP_MEMORY_CACHE_KB is STATIC
– To change it to DYNAMIC make changes to ONCONFIG
VP_MEMORY_CACHE_KB 800, DYNAMIC
 Best Practice Tip:
– If using DYNAMIC, memory caches grow and use more memory
– Consider setting SHMTOTAL to a specific value instead of 0
 Complex Query processing improvements
– Uses view folding mechanisms
– Avoids materialized views or large temporary tables
– More efficient query processing through 3rd-party BI tools
INFORMIXCONTIME & INFORMIXCONRETRY
 INFORMIXCONTIME & INFORMIXCONRETRY
will now support remote servers
– INFORMIXCONTIME
• Indicates the number of seconds before reporting a
connection failure
– INFORMIXCONRETRY
• Indicates the number of connection attempts to
retry before reporting a connection failure
 Precedence
– Session level through set environment
– Client level through clients user environment
– Server level through configuration parameters
 Can also be set with onmode –wf/wm
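 For example, a hedged sketch using onmode –wf (values are illustrative):
onmode -wf INFORMIXCONTIME=60
onmode -wf INFORMIXCONRETRY=3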
GLS Improvements
 With GLS 6.00xC4, specify the display widths that DB-Access and
other character-based Informix applications use for characters in the
Unicode Private Use Area (PUA) ranges:
– First, set the new IFX_PUA_DISPLAY_MAPPING environment variable
– Second, create a mapping file:
• $INFORMIXDIR/gls/etc/pua.map.
• List each character in the file followed by the character representation display width.
 Valid display widths in the file are:
– 1 (halfwidth character representation, default) or
– 2 (fullwidth character representation).
 If there is not a display width for a character in the file, the default is
halfwidth.
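 A hypothetical pua.map entry marking one PUA character as fullwidth (check the
GLS documentation for the exact on-disk file format):
E000 2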
Automatic removal of in-place alters
 Removing outstanding in-place alter operations improves
performance
 Remove outstanding in-place alter operations for tables or fragments
with:
– IBM OpenAdmin Tool (OAT) for Informix
– The table update_ipa or fragment update_ipa argument of the admin() or task()
SQL administration command.
– Previously, a dummy UPDATE was done on tables to remove outstanding in-place
alter operations.
 Speed up operations by including the parallel option with the table update_ipa
or fragment update_ipa argument of the admin() or task() SQL administration
command.
 A prerequisite for reverting to an earlier version of Informix.
 An example showing removal of pending in-place alter operations on
a table that is named auto in parallel:
EXECUTE FUNCTION task("table update_ipa parallel","auto");
Storage Optimization Improvements
 Parallelized Storage Optimization
– Compress, Uncompress, Repack of table and index data in parallel
– Parallel operations reduce maintenance windows
– Examples:
EXECUTE FUNCTION task("table compress rows parallel","dental");
EXECUTE FUNCTION task("fragment uncompress rows
parallel","14680071");
EXECUTE FUNCTION task("table update_ipa parallel","auto");
EXECUTE FUNCTION task("index compress repack shrink parallel","ind5",
"customer", "jayson");
Limit size of extendable storage spaces
 Use SQL Administration API functions to set the limits
 Arguments are space name / max size / new create size / new extend size
 Useful for multi-tenant Informix instances
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace3", "60000",
"10000", "100000");
dbspace3 is modified with a new chunk size of 60000 KB when expanding the storage
pool entry for the storage space, a new extend size of 10000 KB, and a maximum
total size of 100000 KB.
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace8", "20", "1.5");
dbspace8 is modified with a new chunk size of 20% of the storage space, an extend
size of 1.5% of the storage space, and an undefined maximum size.
Questions
Primary Storage Manager
Primary Storage Manager (PSM)
Faster and more functional than ISM
                                     ISM     PSM
Number of Program Files              46      2
Size of programs shipped with IDS    25 MB   2 MB
Number of daemon programs            9       0
Startup Steps                        40      2
Maximum Number of parallel streams   4       Unlimited
Maximum transfer buffer size         64K     2GB
Integrated with Database Scheduler   No      Yes
 Easy way to configure OnBar to take backups and restores
 Easy way for embedded solutions to take parallel backups
using OnBar
 Easy way for Ontape customers to migrate to OnBar
 Replaces ISM
Primary Storage Manager
 A brief history of Informix backup utilities
– ontape
• For years, the only backup/restore (B/R) utility available
• Operated serially
• Supported a single device (tape or disk) for backing up the instance and logical logs
• In Informix 11, intelligent file naming for instance backups to disk to prevent
overwriting a previous backup
 Fixed with the introduction of backups to directories and the
IFX_ONTAPE_FILE_PREFIX parameter
 ontape worked well when database sizes were small but as they
grew, Informix implemented a two-tier solution that provided more
options
– More sophisticated backup devices
• Disk, tape, jukeboxes
– The ability to backup multiple dbspaces in parallel
Primary Storage Manager
 The first tier – ON-Bar
– Informix’s implementation of the XBSA API defined by the X/Open
organization
• An open interface between database servers and tape management systems to
support B/R operations
 Contains a number of services and components to support B/R tasks including
communication, read/write tasks, metadata creation/searching, etc.
– ON-Bar is the pipe that moves data to / from the instance and the backup
software
• Is used to connect to backup solutions like Tivoli Storage Manager
 The second tier – the Informix Storage Manager (ISM)
– A limited use and functionality backup solution included with the engine
– A licensed port of a (now very old) B/R product
– For customers who needed something between ontape and a full-fledged
tape management system
Primary Storage Manager
 The ISM had a number of limits and disadvantages
– Only 4 “simple” backup devices (tape or disk) directly connected to the
physical server
– Difficult to configure and administer
– Code line was difficult to maintain since it was third-party and very, very old
– IBM had to pay a license fee to use this ancient, difficult to maintain code
 With the rapid decrease in the cost of disk-based storage, most small-to-mid
sized customers are now backing up to disk
– No longer need ISM with its problems
– Need a faster, parallelized solution to get instance data to disk
 Solution – The Primary Storage Manager (PSM)!
Primary Storage Manager
 The PSM
– Replaces the ISM
– Has better integration with the ON-Bar API
– Is simple to configure
• Only 1 parameter at its most basic . . . . . . . . . . . . . well maybe 2
– Is simple to administer using the onpsm utility
– Provides a parallelizable backup/recovery stream to a disk device
– Provides many of the features of a more full-featured tape management
system
• Ability to define storage pools, including devices, sizes and priorities
• Expire and prune backups
 Using the onsmsync utility
• Some export capabilities to perform a remote restore
• Object and meta data verification, locking and unlocking
• Supports optional backup compression and/or encryption
 Through the BACKUP_FILTER parameter
Primary Storage Manager
 Components of the PSM
– The XBSA library
• Single shared library, called dynamically by ON-Bar operations
• Default location
 Linux/Unix/Mac: $INFORMIXDIR/lib/libbsapsm.<sharedlib suffix>
 Windows: %INFORMIXDIR%\bin\libbsapsm.dll
– The onpsm utility
• CLI-based front end tool to administer objects in the storage manager
– The PSM catalogs
• C-ISAM tables stored in $INFORMIXDIR/etc/psm
• Repository of information for all backups, devices, pools, etc.
Primary Storage Manager
 Components of the PSM (cont)
– The ixbar.<servernum> file (aka ON-Bar emergency boot file)
• Contains information to execute a cold restore
 What was backed up and when
– The onsmsync utility
• Used to synch information between the sysutils database, the ixbar file and the
PSM catalog repository
Primary Storage Manager
 There is very little to configure for the PSM to work
– Required
• Create one (or more) directories where instance and logical backups will be created
 Can have different locations for logical logs and instance backups
 Fixed sizes can be assigned, as shown later, to prevent overflowing the file system
• Configure the PSM_DEVICE parameter with one fully pathed directory
 Default location for instance and logical backups
 Can be over-ridden with additional configuration
– Optional, though highly recommended
• Configure the BAR_BSALIB_PATH parameter to point to the XBSA library
Execute backups!!!
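– A minimal onconfig sketch of the two settings above (the directory path is an
assumption):
PSM_DEVICE /opt/IBM/informix/backups
BAR_BSALIB_PATH $INFORMIXDIR/lib/libbsapsm.so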
Primary Storage Manager
 There is very little to configure for the PSM to work (cont.)
– It is possible to execute a backup without any configuration at all
– Backups would be created in $INFORMIXDIR/backups
• Probably not the best location
• Would want to specify one or more locations via onpsm operations
 Full syntax discussed later
– From a minimalist perspective, there is one optional, though highly
recommended initial configuration setting
• Configure the BAR_BSALIB_PATH parameter to point to the XBSA library
Primary Storage Manager
 Additional configuration parameters to consider
– PSM_ACT_LOG – log for PSM activities
• Not much is recorded here
– PSM_DEBUG_LOG – debug file location
– PSM_DEBUG – used to set the level of debug messages
• 0 (off) to 9 (high)
– PSM_CATALOG_PATH – location of the PSM catalog repository if not in the
default location of $INFORMIXDIR/etc/psm
– PSM_DBS_POOL – the name of the storage pool for instance backups
• Default name: DBSPOOL
• Unique names are strongly encouraged if multiple instances are active on a server
– PSM_LOG_POOL – the name of the storage pool for logical log backups
• Default name: LOGPOOL
• Unique names are strongly encouraged if multiple instances are active on a server
Primary Storage Manager
 Additional configuration parameters to consider (cont.)
– BAR_MAX_BACKUP – Controls the number of parallel data streams from the
instance to the PSM
• Regardless of this value, only one output file for each space is created
 Illustrated later
– BAR_XFER_BUF_SIZE – Can be used to set a custom data transfer buffer
size
– LTAPEDEV – can not be set to /dev/null if backing up with ON-Bar
• Some restore operations, including specific moments in time, may require logical log
restore operations
Primary Storage Manager
 Three interesting quirks to the PSM
– First, critical instance files are backed up as part of an instance backup
operation
• The ixbar.n, $ONCONFIG, sqlhosts, oncfg files
• Stored in a separate directory structure, shown later
– Second, the PSM organizes data differently
• As administrators, we are used to dealing with backup “levels”
 0, 1, and 2
• While the PSM supports these levels, the PSM organizes the backup output and
meta data by “generation” and “descendants”
 Some administrative operations affect entire generations
> Such as export or expire
– Third, the PSM uses a single name for all output objects, consisting of:
• Instance name
• Descendant version
• Part number
Example: inst_2.2.1
Primary Storage Manager
 In a newly created and populated instance, suppose these actions
occur:
– A L0 backup is created
• The PSM considers this the beginning of a new generation
– A L2 backup is created
• The PSM considers this a “first descendant”
 Is stored in a different location than the previous L0 backup
– A L1 backup is created
• The PSM considers this a “first descendant” as well
 Is stored in the same location as the previous L2 backup
• The L2 descendant is NOT deleted
 In order to support point-in-time restore
– A L2 backup is created
• The PSM considers this a “second descendant”, linked to the previous L1
 Is stored in a different location than the L1 and earlier L2 backup
> It is a different descendant class than the earlier L2
Primary Storage Manager
 Let’s see this in action with the inst_2 instance
– The DBSPOOL is configured to /opt/IBM/informix/backups
– First backup (L0) occurs at 5:13 pm
– Screenshot: a directory is automatically created for the inst_2 instance
objects, and directories are created for the instance spaces. The “2” directory
contains logical log backups; it is labeled “2” because that is the DBSERVERNUM
of the instance.
Primary Storage Manager
 Looking at the directory for the rootdbs
– A single directory for new backup generations is created
– Populated with the L0 backup of the space
• Screenshot callouts: “For new generations”; “Backup of the dbspace. Notice the
file name.”
Primary Storage Manager
 Look in more detail at what happens in the data_space_1 directory
Screenshot: output created by the L0 backup in the new generation subdirectory
for this space. Notice the output file name is identical to that in the rootdbs
directory structure.
Primary Storage Manager
 Look in more detail at what happens in the data_space_1 directory
(cont.)
– Data changes occur in the dbspace
– L2 backup executed at 5:22 pm
Screenshot: output is created in the FIRST descendant directory. Notice the file
name. Nothing in the SECOND descendant directory.
Primary Storage Manager
 Look in more detail at what happens in the data_space_1 directory
(cont.)
– More data changes occur in the dbspace
– L1 backup executed at 5:26 pm
Screenshot: additional output is created in the FIRST descendant directory.
Notice the file name; it is the second version of a first descendant. Still
nothing in the SECOND descendant directory.
Primary Storage Manager
 Look in more detail at what happens in the data_space_1 directory
(cont.)
– Still more data changes occur in the dbspace
– L2 backup executed at 5:28 pm
Screenshot: nothing new is added to the FIRST descendant directory. Finally,
there is output in the SECOND descendant directory. Notice the file name.
Primary Storage Manager
 Conceptually this makes sense
– To restore to the very beginning:
• Only need the content of the new generation (aka 0) directories.
– To restore to the first L2 backup (and changes in the data_space_1 space):
• Need the new generation AND
• The first descendant with the L2 backup.
– To restore to the L1 backup:
• Need the new generation AND
• The second version of the first descendant with the L1 backup:
 Supersedes the L2 backup.
– To restore to the last L2 backup:
• Need the new generation,
• The second version of the first descendant with the L1 backup AND
• The second descendant with the L2 backup:
 Because the L2 backup contains changes occurring after the L1 backup.
Primary Storage Manager
 One last thing – logical log backups
– Are created under the DBSERVERNUM directory
– Each log has its own directory (named by log_id) with the backup of that log
Questions
Primary Storage Manager
 Let’s look at some of the other PSM components
– The $INFORMIXDIR/etc/ixbar.n file contains a list of every dbspace backed up
for every operation
– n is the DBSERVERNUM
Primary Storage Manager
 Let’s look at some of the other PSM components (cont.)
– If ixbar doesn’t exist when the first ON-Bar / PSM backup is executed, it is
automatically created as part of the operation
Screenshot: with no ixbar file present, the file is created and the backup
continues.
Primary Storage Manager
 Let’s look at some of the other PSM components (cont.)
– The PSM C-ISAM catalog tables are in $INFORMIXDIR/etc/psm
• This data is synchronized with the sysutils database
Questions
Primary Storage Manager
 The onpsm utility is used to manage backup devices and the files
created by backup operations
– Can add and delete “devices” or output locations
– Can check the consistency of catalog tables
• Can also delete the catalog tables
– Can unlock the catalog tables if a B/R operation terminates abnormally
– Can list or delete backup files created by backup operations
• Not recommended as the primary method of deleting backups as shown later
– Main syntax tree
onpsm
-C – check consistency of (or delete) the PSM catalog tables
-D – manage backup devices and pools
-L – manage locks on the PSM catalog tables
-O – list (or delete) backup files created by a backup operation
Primary Storage Manager
 onpsm - device management -D
– Parameters for adding a device or backup location
add full_path
-p [ HIGH | LOW ] – the priority for the location
-g pool_name
-t [ FILE | TAPE ]
-s size – optional, maximum output size if using a tape
-b block_size – optional, the block size for the tape device
• Only one device in a pool can be classified with a HIGH priority
 If multiple devices are configured with LOW, the actual device used for a backup
operation will be randomly selected
• For tape sizes, 0 (zero) can be used to indicate “unlimited” or write until the
device returns a “media full” error.
 Other sizes can be entered using the B, K, M, G, T, or P suffix (for bytes,
kilobytes, megabytes, gigabytes, terabytes, or petabytes)
Primary Storage Manager
 onpsm - device management -D
– Example, as shown earlier
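– A hypothetical command following the syntax on the previous slide (the path and
pool name are assumptions):
onpsm -D add /opt/IBM/informix/backups -g DBSPOOL -p HIGH -t FILE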
Primary Storage Manager
 onpsm - device management -D
– Use list to list configured devices
Primary Storage Manager
 onpsm - device management -D
– Use list to list configured devices (cont.)
• With the –u (lowercase) file_name parameter, can export the device
configuration to a named flat file
• With the –l (lowercase) file_name parameter, an exported list of devices can be
imported into the PSM for use
Primary Storage Manager
 onpsm - device management -D
– To delete a device from a pool
del device_path –g pool_name [ -y ]
– To modify device information
update device_path
-p [ HIGH | LOW ] – optional, if needed to change device prioritization
-s size – optional, if needed to change tape device size
With the B, K, M, G, T, or P suffix
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– Use the list parameter to show objects backed up
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– One or more backup objects can be deleted from disk using the del parameter
• This does NOT refresh information in the ixbar file, it still thinks the object is there
– For example, in this list object 85 is the rootdbs for a third generation L0
backup
• Is also the last generation created
• Notice the file name – inst_2.3.1
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– Delete syntax
del –o object_id [ -y ]    (lowercase “o”)
– New list shows the object is deleted
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– The file was deleted from disk as part of the operation
• No third generation output file
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– The ixbar file still contains a reference to the now missing backup object
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– Attempting to execute a cold restore fails without the rootdbs backup file
Primary Storage Manager
 onpsm – backup object management -O (“oh”, uppercase)
– The only way, in this case, to successfully restore the system is to use
point-in-time syntax to execute a second (or first) generation restore
– Take away from the onpsm –O del discussion
• Should only be used for objects that have already been deleted (or expired) from the
ixbar file but somehow didn’t get properly removed from disk
• Use the “expiration” functionality in onsmsync to properly remove “old” backups
Primary Storage Manager
 The onsmsync utility is used to perform several operations
– Synchronize:
• The sysutils database
• The ixbar file
• The PSM catalog tables
– Expire old backups
• Will delete the backup files (on disk)
• Prune the ixbar and catalog / sysutils tables as needed
– Can recreate a missing or damaged ixbar file so a restore can occur
– Can “export” a backup so it can be used to restore on a different server
– Can “import” a backup so it can be used in a restore operation
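– A hedged sketch (check the utility's help output for the exact flags): expiring
all but the two most recent generations might look like
onsmsync -g 2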
Administration Features - Storage Manager
Deduplication
 De-duplication as an alternative to incremental backups
– Eliminates redundant data in backups
– Lower storage requirements
– Decreases network load if the storage devices are not on the same machine where
the backup is being taken
– Usable with storage managers
– Enabled with IFX_BAR_USE_DEDUP
 Cannot use DEDUP and incremental backups at the same time.
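 A minimal sketch, assuming the variable just needs to be set in the server
environment before startup:
export IFX_BAR_USE_DEDUP=1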
 Support of Tivoli and other storage vendors
Questions
Multi-Tenancy in Informix 12
Agenda
 Multi-tenancy within Informix
Multi Tenancy – Defined
 Tenancy refers to the ability of an Informix server to support a set of users in
a client organization needing to access the same data and system resources, while
denying non-administrative users on the same server access to those same
resources without permission.
 Multi tenancy is multiple organizations supported on the same server
needing to access only their own segregated data and resources
while denying other users access to their data and resources.
 In tenancy, a dedicated database is created and assigned storage and processing
resources for that database based on defined service-level agreements with the
client organization.
– It is possible to have services for multiple companies (tenants) that run efficiently
within a single Informix instance.
Multi-Tenancy in Informix
 Multiple tenant databases within a
single instance of Informix
– Assign resources (cpu / storage)
specific to tenant databases
– Segregate data specific to each tenant
– Informix instance will share certain
resources across all tenants (pio / lio
etc)
– Capability to associate VP’s specific to
tenant databases
A single instance of the software runs on a server, serving multiple client
organizations (tenants).
With a multi-tenant architecture,
software is designed to virtually
partition its data and
configuration, and each client
organization works with a
customized virtual application.
Advantages of a Multi-tenant environment
 Shared costs
– License cost could be shared across all the tenants
– Several tenants are allocated only a portion of the software and hardware
resources
– Ideally suited for SAAS / PAAS environment
– Cost effective when compared with virtualization
– Ideal for service providers
 Secure and isolated data for each tenant
– Informix treats each tenant database as an independent unit from the
application perspective
– Storage spaces are specific to each tenant
 Typically database objects are the same in all tenant databases.
– Makes for easier implementation in a cloud environment
– Capability to alter schema (if required) just for tenant(s)
Multi-Tenant Architecture - Features
• 1 tenant = 1 database
• Segregated and dedicated storage for tenants
– Storage provisioning can be used to allocate dbspaces on demand for a
multi-tenant system
– Dbspaces allocated as a small size, but set to auto-expand
– New dbspace maximum size can now be set to prevent expansion above a
certain size.
• Tenants could share CPU resources using CPU VPs or have user
defined VPs per tenant
– Tenants, however, will share VP resources for non-user-defined VPs (aio, pio,
lio, net, etc.)
• Capability to specify session limits on…
– Locks
• Manage tenants with a graphical view (OAT)
– A tenant can see only information relevant to that specific tenant
– Storage information for that tenant
– CPU utilization for that tenant
Multi-Tenant Storage
 Storage provisioning can be used to allocate dbspaces on demand
for a multi-tenant system
 Dbspaces allocated as a small size, but set to auto-expand
 New dbspace maximum size can now be set to prevent expansion
above a certain size.
Multi-Tenancy
Creating a Tenant
execute function task("create dbspace from storagepool", "tenant1_1", "10m");
execute function task("create dbspace from storagepool", "tenant1_2", "10m");
execute function task("create sbspace from storagepool", "tenant1_3", "10m");
execute function task("tenant create", "tenant1",
'{ "DBSPACE":"tenant1_1,tenant1_2",
   "SBSPACE":"tenant1_3",
   "VPCLASS":"VPCLASS tenant1,num=1",
   "SESSION_LIMIT_LOCKS":"2000" }');
Created in sysadmin:tenant table, see Appendix B for details.
Creating a Tenant
execute function task("tenant create", "tenant1",
'{
   "DBSPACE":"dbslist",
   "BLOBSPACE":"dbslist",
   "SBSPACE":"dbslist",
   "VPCLASS":"vpclass definition",
   "DBSPACETEMP":"dbslist",
   "SBSPACETEMP":"sblist",
   "SESSION_LIMIT_LOCKS":"value",
   "LOGMODE":"db log mode definition",
   "CASE":"case",
   "LOCALE":"locale definition",
   "CATALOGS":"dbslist"
}');
Multi-Tenant – Limiting Database Creation
 DBCREATE_PERMISSION
– Specifies the users who can create databases (by default, any user can)
Example
DBCREATE_PERMISSION informix
Multi-Tenancy Management
 Use SQL Administration API command to manage tenant databases
EXECUTE FUNCTION task('tenant create', 'companyA',
'{dbspace:"companyA_dbs1, companyA_dbs2, companyA_dbs3",
sbspace:"companyA_sbs",
vpclass:"tvp_A,num=6",
dbspacetemp:"companyA_tdbs",
session_limit_locks:"1000", logmode:"ansi", locale:"fr_ca.8859-1"}');
EXECUTE FUNCTION task('tenant update', 'companyA',
'{dbspace:"companyA_dbs4, companyA_dbs5", sbspace:"companyA_sbs3",
vpclass:"tvp_B"}');
The tenant database gains two dbspaces and an sbspace.
EXECUTE FUNCTION task('tenant drop', 'companyA');
Cannot just “drop database database_name” a tenant database. You must “tenant
drop” a database created as a tenant.
Note: The options for the tenant database are treated as a JSON document.
The vpclass option for task/admin() tenant update (1)
 vpclass_name is limited to 8 characters.
 A maximum of 200 tenant virtual processor classes can be created.
 If the vpclass_name is unique,
– You create a new tenant virtual processor class.
 If the vpclass_name exists,
– The tenant database shares the class with other tenant databases.
 If the vpclass property has an existing value,
– You change the virtual processor class for the tenant database.
 If you do not include the num=vps property,
– one virtual processor is started.
The vpclass option for task/admin() tenant update (2)
 From a couple of slides earlier:
EXECUTE FUNCTION task('tenant update', 'companyA',
'{dbspace:"companyA_dbs4, companyA_dbs5", sbspace:"companyA_sbs3",
vpclass:"tvp_B"}');
 This much is certain:
– The instance gains two dbspaces and an sbspace.
 From the previous slide:
– If the vpclass_name is unique,
• You create a new tenant virtual processor class.
– If the vpclass_name exists,
• The tenant database shares the class with other tenant databases.
– If the vpclass property has an existing value,
• You change the virtual processor class for the tenant database.
– If you do not include the num=vps property,
• One virtual processor is started.
Multi Tenancy - Session Level Resource Limitations
 Session Level Resource Control Limits
– To improve performance and restrict the tenant database size.
– Prevents any session owned by non-administrative users from using so many
resources that other sessions cannot continue processing transactions.
– To prevent performance issues.
– Be useful in embedded environments.
 Two ways to do this:
‒ Via a configuration parameter
‒ SQL Admin API task and admin function calls
• Tenant properties in the tenant definition created when you run the admin() or task()
SQL Admin API command with the tenant create or tenant update arguments.
– Tenant properties have precedence over related configuration parameters.
Multi Tenancy - Session Level Lock Limits (1)
 The SESSION_LIMIT_LOCKS configuration parameter specifies the
maximum number of non-Admin user locks available for a session.
– Default is none
– If not in the onconfig file, 2147483647
– Values 500 – 2147483647
– Not dynamic
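 For example, a hedged onconfig sketch limiting each non-admin session to 5000
locks:
SESSION_LIMIT_LOCKS 5000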
 Administrators can set SESSION_LIMIT_LOCKS to reduce the risk of
ordinary users in concurrent sessions depleting the lock resources of
the database server.
• EXECUTE FUNCTION task('tenant create', 'company_A',
'{dbspace:"company_A_dbs1,company_A_dbs2,company_A_dbs3",
sbspace:"company_A_sbs", vpclass:"tvp_A,num=6",
dbspacetemp:"company_A_tdbs", session_limit_locks:"2000",
session_limit_memory:"110MB", session_limit_tempspace:"35MB",
session_limit_logspace:"35MB", session_limit_txn_time:"180",
tenant_limit_space:"1TB", tenant_limit_memory:"1GB",
tenant_limit_connections:"1000", logmode:"ansi",
locale:"en_us.8859-1", case:"insensitive"}');
Multi Tenancy - Session Level Lock Limits (2)
 You can override the session_limit_locks property for a session by
setting the IFX_SESSION_LIMIT_LOCKS environment option to a lower
value than the session_limit_locks property value
– Cannot be less than 500 locks
– Limits do not apply to administrative users such as DBSA or Informix
– SET ENVIRONMENT IFX_SESSION_LIMIT_LOCKS '5000';
 If the IFX_SESSION_LIMIT_LOCKS session environment option and
the SESSION_LIMIT_LOCKS configuration parameter are set to
different values,
– The session environment option takes precedence over the configuration
parameter for operations during the current sessions by users who are not
administrator, but only until the current session ends
– If neither are set, the default limit for every user is 2,147,483,647 locks
 Tenancy settings have precedence over the environment setting if the tenancy
setting is greater than the environment setting
 To avoid user override, DBAs can set it in sysdbopen.
Multi Tenancy - Session Level Memory Limits
 Memory size threshold:
– Set the session_limit_memory to terminate sessions exceeding a specified
maximum amount of session shared memory for non-administrative users
• Off (0) by default in $ONCONFIG and not dynamic
• The value is in KB, MB or GB and between 20480 KB - 2147483648 KB
• The limit does not apply to a user who holds informix or DBSA privileges
• If the property is not set in the create tenant or update tenant statements for
the task or admin functions, the SESSION_LIMIT_MEMORY configuration parameter
sets it.
 If the configuration parameter is also not set, a session can allocate only up
to 2147483648 KB.
• EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., session_limit_memory:"110MB", ...}');
  (full property list as in the tenant create example shown earlier)
Multi Tenancy - Session Level Temp Space Limits
 Set the session_limit_tempspace to the maximum amount of session
usable temporary storage for non-admin users, terminate a session
exceeding that limit, and write an error out to the message log file.
– Default is 0 (off) in $ONCONFIG and not dynamic
– Values are between 20480 KB – 2147483648 KB
 EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., session_limit_tempspace:"35MB", ...}');
  (full property list as in the tenant create example shown earlier)
Multi Tenancy – Session Transaction Log Space Limits
 Transaction log space size threshold:
– Set session_limit_logspace to the maximum amount of log space a session
transaction can use.
– Non-Admin user transactions are rolled back if the log size threshold is reached.
– Default value is 0 (off) and not dynamic
– Range of values from 5120 KB – 2147483648 KB if set
– Writes an error in the database server message log.
– EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., session_limit_logspace:"35MB", ...}');
  (full property list as in the tenant create example shown earlier)
Multi Tenancy - Session Level Transaction Time Limits
 Set the session_limit_txn_time to the maximum number of seconds
that a non-administrative transaction can run.
– Default 0 (off) and not dynamic
– Values are between 60 – 2147483647 seconds
– Prevents individual session transactions from monopolizing the logical log,
terminates a transaction over the time limit, writes an error in the message log.
– EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., session_limit_txn_time:"180", ...}');
  (full property list as in the tenant create example shown earlier)
Ceiling Caps on Tenant Permanent Storage
 You can limit the total amount of all permanent storage space types
for a tenant database by setting the tenant_limit_space property.
– Default 0 (off) and not dynamic
– Values between 1048576 KB and 1717986918400 KB
– When the limit is reached, subsequent operations that require additional disk
space are rejected.
– EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., tenant_limit_space:"1TB", ...}');
  (full property list as in the tenant create example shown earlier)
Limit Shared Memory for Tenant Databases
 Limit shared memory amounts to all tenant database connected
sessions.
– If the limit is exceeded, the session using the most shared memory is terminated.
– Two ways to enable this:
• Set the TENANT_LIMIT_MEMORY configuration parameter
 0 (off) default
 Not dynamic
 Range of values in KB, MB, GB, or TB from 102400 KB to 2147483648 KB
• Include the tenant_limit_memory parameter with the tenant create or tenant update
argument to the task or admin SQL administration API command.
• EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., tenant_limit_memory:"1GB", ...}');
  (full property list as in the tenant create example shown earlier)
Limit the Number of Client Connections to a Tenant
Database
 When the limit is reached, subsequent connection requests to the
tenant database are rejected.
 Two ways to enable this:
– Set the TENANT_LIMIT_CONNECTIONS configuration parameter
• 0 (Off) by default
• Not dynamic
• Maximum number of connections is 65536.
– Include the tenant_limit_connections parameter with the tenant create or
tenant update argument to the task or admin SQL Admin API commands:
• EXECUTE FUNCTION task('tenant create', 'company_A',
  '{..., tenant_limit_connections:"1000", ...}');
  (full property list as in the tenant create example shown earlier)
Tenant Private Memory Caches on Virtual Processors
 You can now improve processing performance by creating private
memory caches for all tenant virtual processors.
 Set the VP_MEMORY_CACHE_KB configuration parameter to a
nonzero value to create private memory caches for all CPU and tenant
virtual processors.
 Default is 0 (off)
 Values are the total size in KB of all private memory caches, optionally
followed by a comma and the mode of the caches:
‒ 800 to 40% of the SHMTOTAL configuration parameter value.
VP_MEMORY_CACHE_KB (1)
 Modes:
– STATIC
• Default.
• The specified size is the maximum combined size of all private memory caches.
– DYNAMIC
• The specified size is the initial size of all private memory caches.
• The cache size changes dynamically but cannot exceed the value of the SHMTOTAL
configuration parameter.
 Takes Effect
– After you edit your onconfig file and restart the database server.
– When you reset the value dynamically in your configuration file via the onmode
-wf command.
– When you reset the value in memory via the onmode -wm command.
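The same change can also be made through the SQL Admin API (a sketch; the value is illustrative):

EXECUTE FUNCTION task("modify config", "VP_MEMORY_CACHE_KB", "1000,DYNAMIC");
-- changes the value in memory for the running instance
EXECUTE FUNCTION task("modify config persistent", "VP_MEMORY_CACHE_KB", "1000,DYNAMIC");
-- also records the new value in the onconfig file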
VP_MEMORY_CACHE_KB (2)
 Private memory caches can improve performance for Informix server
memory allocated by threads but have no impact on memory allocated
to and used by buffer pools or shared memory communication.
 If > 0, a private memory cache is created for each CPU and Tenant VP.
– Size of all private memory caches combined is default limited to the specified
number of KB.
 Each private memory cache size increases and decreases dynamically
– Include a comma and the word DYNAMIC after the size:
• VP_MEMORY_CACHE_KB 1000,DYNAMIC.
• Initial size total of VP private caches cannot exceed 40% of SHMTOTAL in dynamic
mode
• Cache sizes can expand beyond the initial limit.
– The total size of the caches cannot exceed the value of the SHMTOTAL
configuration parameter.
 If you reset the VP_MEMORY_CACHE_KB configuration parameter to
0, the memory caches are emptied and disabled.
VP_MEMORY_CACHE_KB (3)
 Dynamic memory caches on busy systems can grow quickly and use
a large amount of available memory.
 If you set the mode to DYNAMIC, set the SHMTOTAL configuration
parameter to a specific limit instead of the default value of 0, which
does not limit the amount of memory.
– To be safe, no more than 80% of the total physical memory on your database
machine.
• Please do not forget this.
• You do have to work for a living.
– Make sure you take into account memory needs of all apps running on your
database machine before setting this.
• We call Informix Dynamic Server for a reason.
• Work with the system administration people to be certain; you may not “see”
everything running on your machine.
Other Server Improvements
 Limit size of extendable storage spaces
– Use SQL Administration API functions to set the limits
– Arguments are space name / max size / new create size / new extend size
– Useful for multi-tenant Informix instances
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace3", "60000",
"10000", "100000");
dbspace3 is modified with new chunk size of 60000 KB of space when
expanding the storage pool entry for a storage space, a new extend size
of 10000 KB and a maximum total size of 100000 KB.
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace8", "20",
"1.5");
dbspace8 is modified with a new chunk size of 20% of the storage space, an
extend size of 1.5% of the storage space, and no defined maximum size.
OAT Enhancements
 Monitor the tenant databases of a multi-tenant Informix instance
– Tenant Information can be found in Space Administration
Questions
REST
REST Definition
REST Architecture
 Distributed communication architecture
 Widely popular in cloud environments
 REST
– An architectural style for web based communication
– Permits clients to communicate with servers in a unique manner
– Represents resources (databases in this case) as URIs
– Architecture uses HTTP protocol
– A set of operations (GET/POST/PUT/DELETE) permit manipulation of
resources
 RESTful architectures are stateless
– Server does not maintain any client context between transactions
– Transaction must contain all information necessary to satisfy the particular
request.
– Makes RESTful architectures more reliable and also helps to expand their
scalability.
The strength of REST
REST is an architectural style, not a protocol or an implementation. REST has some core principles, but
in the end, it's an abstraction, not a specific implementation.
(Source: http://www.ibm.com/developerworks/library/os-understand-rest-ruby/)
Access Informix from REST API Clients
 Directly connect applications or devices that communicate through
the REST API to Informix
– No client side drivers needed, freedom from client dependency
– Web based applications can connect seamlessly to the database using HTTP
protocol
– Create connections by configuring the wire listener for the REST API
– Use MongoDB and SQL queries against JSON and BSON document
collections, traditional relational tables, and time series data
– The REST API uses MongoDB syntax and returns JSON documents
– Widely popular in Cloud / IOT architectures
– Simplify web application development in Cloud environments
 A subset of the HTTP protocol (GET / POST / DELETE / PUT)
supported:
– POST method maps to the MongoDB insert or create command
– GET method maps to the MongoDB query command
– PUT method maps to the MongoDB update command
– DELETE method maps to the MongoDB delete command
Access Informix from REST API Clients (contd)
Wire Listener & REST (1)
 The wire listener is a mid-tier gateway server that enables
communication between MongoDB applications and the Informix®
database server.
 The wire listener is provided as an executable JAR file that is named
$INFORMIXDIR/bin/jsonListener.jar. The JAR file provides access to
the MongoDB API and REST API.
 You can connect to a JSON collection by using the REST API.
 When a client is connected to the wire listener by using the REST API,
each database is registered; registration tracks session events such as
creating or dropping a database.
 If a REST request refers to a database that exists but is not registered,
the database is registered and a redirect to the root of the database is
returned.
Wire Listener & REST (2)
 The JSONListener.properties file has an optional parameter called
listener.type. It specifies the type of wire listener to start:
– The default is mongo, which connects the wire listener to the MongoDB API:
•listener.type=mongo
 To use the REST API, the wire listener must be started with the following
parameter value:
– listener.type=rest
 There are some new REST related optional parameters for the
JSONListener.properties file which may be necessary for use.
Multiple wire listeners configuration (1)
 You can run multiple wire listeners at the same time to access both
Mongo and REST data, by creating a properties file for each:
– Create each properties file in the $INFORMIXDIR/etc directory using the
$INFORMIXDIR/etc/jsonListener-example.properties file as a template.
– Customize each properties file and assign a unique name:
•The url parameter must be specified, either in each individual properties file or in
the file that is referenced by the include parameter.
Optional:
Specify the include parameter to reference another properties file. The path can
be relative or absolute.
•If you have multiple properties files, you can avoid duplicating parameter settings in
the multiple properties files by specifying a subset of shared parameters in a single
properties file, and the unique parameters in the individual properties files.
– Start the wire listeners.
Multiple wire listeners configuration (2) - Example
 The same url, authentication.enable, and security.sql.passthrough
parameters are used to run two separate wire listeners:
 Create a properties file named shared.properties that includes the
following parameters:
url=jdbc:informix-sqli://localhost:9090/sysmaster:INFORMIXSERVER=lo_informix1210
authentication.enable=true
security.sql.passthrough=true
 Create a properties file for use with the MongoDB API that is named
mongo.properties, with the include=shared.properties parameter setting included:
include=shared.properties
listener.type=mongo
listener.port=27017
 Create a properties file for use with the REST API that is named
rest.properties, with the include=shared.properties parameter setting included:
include=shared.properties
listener.type=rest
listener.port=8080
Multiple wire listeners configuration (3) - Example
 Start the wire listeners by using the command line:
– java -jar jsonListener.jar -start -config mongo.properties -config
rest.properties
HTTP: POST
 The POST method maps to the MongoDB insert or create command.
Method   Path                           Description
POST     /                              Create a database
POST     /databaseName                  Create a collection
                                        (databaseName – database name)
POST     /databaseName/collectionName   Create a document
                                        (databaseName – database name;
                                        collectionName – collection name)
HTTP: POST – Create a database
 With the locale specified.
 Request: Specify the POST method:
– POST /
 Data: Specify database name mydb and an English UTF-8 locale:
– {name:"mydb",locale:"en_us.utf8"}
 Response: The following response indicates that the operation was
successful:
– Response does not contain any data.
HTTP: POST – Collection Creation
 Creates a collection in the mydb database.
 Request: Specify the POST method and the database name as mydb:
– POST /mydb
 Data: Specify the collection name as bar:
– {name:"bar"}
 Response: The following response indicates that the operation was
successful:
– {"msg":"created collection mydb.bar","ok":true}
HTTP: POST – Relational Table Creation
 This example creates a relational table in an existing database.
 Request: Specify the POST method and stores_mydb as the database:
– POST /stores_mydb
 Data: Specify the table attributes:
– { name: "rel", columns: [{name:"id",type:"int",primaryKey:true,},
{name:"name",type:"varchar(255)"}, {name:"age",type:"int",notNull:false}]}
 Response: The following response indicates that the operation was
successful:
– {msg: "created collection stores_mydb.rel" ok: true}
HTTP: POST – Insert a Single Document
 Inserts a document into an existing collection.
 Request: Specify the POST method, mydb database, and people
collection:
– POST /mydb/people
 Data: Specify John Doe age 31:
– {firstName:"John",lastName:"Doe",age:31}
 Response: Because the _id field was not included in the document,
the automatically generated _id is included in the response. Here is a
successful response:
– {"id":{"$oid":"537cf433559aeb93c9ab66cd"},"ok":true}
HTTP: POST – Insert Multiple Documents
 This example inserts multiple documents into a collection.
 Request: Specify the POST method, mydb database, and people
collection:
– POST /mydb/people
 Data: Specify John Doe age 31 and Jane Doe age 31:
– [{firstName:"John",lastName:"Doe",age:31},
{firstName:"Jane",lastName:"Doe",age:31}]
 Response: Here is a successful response:
– {ok: true}
HTTP: GET
 The GET method maps to the MongoDB query command.
Method   Path                                           Description
GET      /                                              List all databases
GET      /databaseName                                  List all collections in a database
                                                        (databaseName – database name)
GET      /databaseName/collectionName?queryParameters   Query a collection
                                                        (databaseName – database name;
                                                        collectionName – collection name;
                                                        queryParameters – the query parameters.
                                                        The supported Informix queryParameters
                                                        are batchSize, query, fields, and sort;
                                                        they map to the equivalent MongoDB
                                                        batchSize, query, fields, and sort
                                                        parameters.)
HTTP: GET – List All Databases on the Server
 Specify the GET method and forward slash (/):
– GET /
 Data: None.
 Response: Here is a successful response:
– [ "mydb" , "test“]
145
© 2015 IBM Corporation
HTTP: GET – List All Collections in a Database
 Request: Specify the GET method and mydb database:
– GET /mydb
 Data: None.
 Response: Here is a successful response:
– ["bar"]
HTTP: GET – Show Sorted Data from a Collection
 This example sorts the query results in ascending order by age.
 Request: Specify the GET method, mydb database, people collection,
and query with the sort parameter.
– The sort parameter specifies ascending order (age:1), and filters id (_id:0) and
last name (lastName:0) from the response
– GET /mydb/people?sort={age:1}&fields={_id:0,lastName:0}
 Data: None.
 Response: The first names are displayed in ascending order with the
_id and lastName filtered from the response:
– [{"firstName":"Sherry","age":31}, {"firstName":"John","age":31},
{"firstName":"Bob","age":47}, {"firstName":"Larry","age":49}]
HTTP: PUT
 The PUT method maps to the MongoDB update command.
Method   Path                                           Description
PUT      /databaseName/collectionName?queryParameters   Update a document
                                                        (databaseName – database name;
                                                        collectionName – collection name;
                                                        queryParameters – the supported Informix
                                                        queryParameters are query, upsert, and
                                                        multiupdate; they map to the equivalent
                                                        MongoDB query, upsert, and multi
                                                        parameters, respectively.)
HTTP: PUT – Document Update in a Collection
 Update the value for Larry in an existing collection, from age 49 to 25:
– [{"_id":{"$oid":"536d20f1559a60e677d7ed1b"},"firstName":"Larry"
,"lastName":"Doe","age":49},{"_id":{"$oid":"536d20f1559a60e677d7ed1c"}
,"firstName":"Bob","lastName":"Doe","age":47}]
 Request: Specify the PUT method and query the first name Larry:
– PUT /mydb/people?query={firstName:"Larry"}
 Data: Specify the MongoDB $set operator with age 25:
– {"$set":{age:25}}
 Response: Here is a successful response:
– {"n":1,"ok":true}
HTTP: DELETE
 The DELETE method maps to the MongoDB delete command.
Method   Path                                          Description
DELETE   /                                             Delete all databases
DELETE   /databaseName                                 Delete a database
                                                       (databaseName – database name)
DELETE   /databaseName/collectionName                  Delete a collection
                                                       (databaseName – database name;
                                                       collectionName – collection name)
DELETE   /databaseName/collectionName?queryParameter   Delete a document
                                                       (databaseName – database name;
                                                       collectionName – collection name;
                                                       queryParameter – the query parameter.
                                                       The supported Informix queryParameter
                                                       is query; it maps to the equivalent
                                                       MongoDB query parameter.)
HTTP: DELETE (1) – Database Deletion
 Delete a database called mydb.
 Request: Specify the DELETE method and the mydb database:
– DELETE /mydb
 Data: None.
 Response: Here is a successful response:
– {"msg":"dropped database","ns":"mydb","ok":true}
HTTP: DELETE (2) – Collection deletion
 This example deletes a collection from a database.
 Request: Specify the DELETE method, mydb database, and bar
collection:
– DELETE /mydb/bar
 Data: None.
 Response: Here is a successful response:
– {"msg":"dropped collection","ns":"mydb.bar","ok":true}
Auto Compression
Text/Byte Compression
 Compression of in-partition data stored in Partition Blobs.
– Partition Blobs are simple large objects (mostly text and byte data types)
in which the data is stored outside the row, but in same partition of the
same dbspace as the row data.
• So even though the blob data is stored on a separate page from the row data,
the partition blob page still belongs to the same partition.
Auto Compression - Overview
 Create a compression dictionary for the data rows of a table
automatically when it has the required numbers of rows.
 Benefits of automatic compression:
– Informix compresses data rows as the data is loaded.
– Informix sets compression as a property of the table, so any new fragments
added also get compressed automatically.
– You use current SQL admin API commands to set a table or fragment for auto
compression.
– You have an SQL interface to create a compressed table.
Auto Compression Details
 New data will be loaded as compressed.
 A minimum of 2000 rows are needed for the compression dictionary
to be created.
 You can start compression with the same SQL admin API commands
that you currently use.
 You can also compress a table, using SQL syntax when you create
the table.
Auto Compression How To
 Using Admin API
– Execute function task("table compress", "my_table", "my_database");
– Execute function task("fragment compress", "my_fragid");
– Even if there are not enough rows to sample and create a compression
dictionary, the SQL admin commands will succeed indicating “Auto
compression is set”.
Auto Compression - How to
 Using SQL
– Create table my_table (my_col ….) compressed;
– After the table is created, as data is loaded into the table, a compression
dictionary will be created when 2000 rows are inserted.
Auto Compression
 When a table load uses light append, a dictionary will be
automatically created:
– The rows that loaded before the dictionary was created will be compressed.
 During a normal insert, the new rows inserted will be compressed:
– Data rows already in the table (before the compress) will not be compressed.
 Auto compression is not supported for indexes and blobs.
Questions
Index Compression
Index Compression Restrictions
 Normal detached BTree indexes only
 Native types
 No functional indexes
 Only the leaf level is compressed
 Can be done as part of index creation or to existing indexes while online
 Must contain at least 2000 unique key values
 Full HDR support
– Log the uncompressed key
 Uncompress not available at this time for indexes
SQL Syntax
CREATE INDEX idxname
ON tabname (column_name1, …)
COMPRESSED
EXTENT SIZE # NEXT SIZE #;
EXECUTE FUNCTION sysadmin:task ('index compress', 'index_name', 'database');
Index Compression Benefits
 More index data fits into the bufferpool
– Reduced I/O
 Unlike Oracle, SQL Server, and even DB2, Informix index
compression supports composite indexes with either “prefix” or
“suffix” keys
– Prefix – unique value is last value of composite
• Ex) Red, Honda, Civic, Oregon, VIN #
– Suffix – unique value is first value of composite (key only scan)
• Ex) VIN #, Red, Honda, Civic, Oregon
Indexing Varchars vs Indexing Compressed Chars
 Varchars are used to save space, in both the data row and the index,
by storing only the relevant data and not padding with white space
 We did an experiment to compare key-only scan performance of
indexed varchars against indexed compressed chars
Varchars vs Compressed Chars
create table people (
  first_name   varchar(64),
  middle_name  varchar(64),
  last_name    varchar(64),
  address      varchar(255),
  city         varchar(64),
  state        char(2),
  sex          varchar(6),
  ssn          char(11),
  eye_color    varchar(24),
  ancestry     varchar(64));
Varchars vs Compressed Chars
create unique index ssn_people on people(ssn);
create index ix1_people on
people(last_name, first_name, middle_name, ssn);
Key only scan
select count(*)
from people
where last_name in ('SMITH', 'JOHNSON', 'WILLIAMS', 'BROWN', 'JONES') AND
first_name in ('JAMES', 'JOHN', 'ROBERT', 'MARY', 'PATRICIA', 'LINDA');
Query Results (Several Million Rows)
                  Varchar   Uncompressed Char   % better   Compressed Char   % better
Unbuffered        100s      5.38s               1758%      1.9s              183%
Buffered          6.4s      0.77s               731%       0.77s             0%
# of index nodes  514748    2857165                        634095
Why Is Performance Better?
 Varchars require IDS to read the data row even if it appears that a key
only scan could be performed
– Have to compare white space
• White space stored in row
• White space removed from index key
Varchar query plan
Char query plan
Varchar index (oncheck –pT)
Char index (oncheck –pT)
Char index compressed (oncheck –pT)
(query plan and oncheck –pT screen captures in the original deck)
Results
 So to look at the earlier slide again:
 In both cases, the queries were faster on the compressed character
column. Results were not so good on the uncompressed varchar
column, due to the byte by byte comparison earlier mentioned.
 Index usage: there were almost 120,000 more index nodes on the
compressed character index (634095) than there were on the varchar index (514748).
 The compressed character column index loses almost 78% of the index
nodes of the uncompressed char index (634095 vs 2857165), and it is faster
performance-wise.
Index Compression (1)
 Compress an existing index:
– execute function task("index compress", "j", "testdb");
 Repack and shrink an existing index:
– execute function task("index repack shrink", "j", "testdb");
 Estimate Compression on an index:
– execute function task("index estimate_compression", "176_576",
"stores_demo", "ifxjson");
– You can estimate compression only for a detached B-tree index on a
fragmented or non-fragmented table.
Index Compression (2)
 Pushes all nodes forward on the same page as they are found and
compresses them, leaving free space at the end.
 Repack consolidates partially filled index pages and fills them,
leaving empty pages behind; repack if you intend to shrink.
 Shrink reduces an index overall size by returning empty pages to its
dbspace; we will not shrink beyond the index first extent size.
Some index repack and shrink results, before and after.
 An index before compression, repack and shrink: 20480 total pages
allocated (oncheck –Pt output).
 After: 2778 total pages allocated, an 86.43% reduction; 13.57% of the
index space remains.
(Before/after oncheck –Pt screen captures in the original deck.)
Questions
New SQL
Minus, Plus & Intersect Support - Functionality
 New Extensions to the existing UNION/UNION ALL SET operation.
 The same rules as UNION also apply, e.g.:
 Both query blocks must have exactly the same number of columns.
 Comparable data types.
 The projection clause cannot include BYTE or TEXT.
Minus, Plus & Intersect Support - Functionality
 Order by should be at the end.
 Precedence will be from left to right, unless they are grouped using
parentheses.
 There are existing restrictions for UNION and they will apply to these
operators too.
Intersect and Minus/Except
 Intersect returns only those rows which are returned by both queries
and results will be distinct.
 Minus/Except returns distinct rows from the left query which are
not found in the right side query.
 The ALL keyword is not available for the intersect and minus/except
operations.
 NULL-friendly: when comparing NULL to NULL, they are considered equal.
Minus, Plus & Intersect Support - Examples
 create table t1 (col1 int);
 insert into t1 values (1);
 insert into t1 values (2);
 insert into t1 values (2);
 insert into t1 values (2);
 insert into t1 values (3);
 insert into t1 values (4);
 insert into t1 values (4);
 insert into t1 values (NULL);
 insert into t1 values (NULL);
 insert into t1 values (NULL);
 create table t2 (col1 int);
 insert into t2 values (1);
 insert into t2 values (3);
 insert into t2 values (4);
 insert into t2 values (4);
 insert into t2 values (NULL);
Minus, Plus & Intersect Support - Examples (cont’d)
 Select * from t1 intersect select * from t2;
– Returns NULL, 1,3 and 4
 Select * from t1 minus/except select * from t2;
– Returns 2
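Since precedence runs left to right unless parenthesized and ORDER BY must come last, a small sketch combining the operators on the same t1/t2 tables (illustrative only):

(select * from t1 minus select * from t2)
union
select * from t2
order by 1;

The parentheses force the MINUS to evaluate first; the single ORDER BY applies to the combined result.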
Minus, Plus & Intersect Support - Rewrite
 Select col1, col2 from t1 intersect select col1, col2 from t2;
 Rewrite:
 Select distinct *
 from (select col1, col2 from t1) t11
 where exists (select col1, col2 from (select * from t2) t22 where
t11.col1 = t22.col1 and t11.col2 = t22.col2 );
Minus, Plus & Intersect Support - Minus/Except Rewrite
 select col1, col2 from t1 minus select col1, col2 from t2;
 select distinct *
from (select col1, col2 from t1) t11
where not exists
(select *
from (select col1, col2 from t2) t22
where t11.col1 = t22.col1
and t11.col2 = t22.col2);
Select ….. INTO TABLE
 You can now create persistent tables in IDS that can be accessed
across sessions using the result set of a SELECT statement.
 Combines the functionality of CREATE TABLE and INSERT into a
single statement.
 Supports all the Storage options of CREATE TABLE on IDS.
 You can now change the EXTENT SIZE and NEXT SIZE of an
existing table without having to unload the table first.
 Allows you to create new schema from existing tables and
populate it with only a subset of data for testing purposes.
 Supported in a Flexible Grid environment.
 Functionality supported via two different syntax enhancements:
– SELECT col1, col2 FROM sourcetab INTO newtab;
– CREATE TABLE newtab AS SELECT col1, col2 FROM sourcetab;
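A small sketch of both forms (table and column names hypothetical), following the syntax above:

SELECT order_num, customer_num FROM orders INTO recent_orders;

CREATE TABLE recent_orders2 AS
  SELECT order_num, customer_num FROM orders
  WHERE order_date > TODAY - 30;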
4GB Stored Procedure Language
 The maximum length of SQL statements and SPL routines is now
4 GB
– The actual limit may depend on the available memory on the system.
 Valid for Client SDK 3.70.xC5, JDBC 3.70.xC5 or higher.
 Transient extended types and collection types are limited to 65535
 sysadmin task command length is limited to 2020 (the slot table requires
28 bytes).
 Writing SQL or SPL this large should probably be avoided for
programmer maintenance reasons.
CASE Statement in SPL
 CASE statement without else clause
CASE mod(r,314)
WHEN 1 THEN select tabid into r from informix.systables where tabid = 1;
WHEN 313 THEN LET r = r + 4;
WHEN 3 THEN LET r = r + 3;
END CASE;
 CASE statement with else clause
CASE r
WHEN 317 THEN select tabid into r from informix.systables where tabid = 1;
WHEN 312 THEN LET r = r + 4;
WHEN 3 THEN LET r = r + 3;
ELSE
LET r = 1;
END CASE;
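For context, a minimal sketch of a CASE statement inside an SPL routine (routine name and logic hypothetical):

CREATE PROCEDURE bump_value(r INT) RETURNING INT;
  -- branch on the remainder, as in the first example above
  CASE mod(r, 314)
    WHEN 1 THEN LET r = r + 1;
    WHEN 3 THEN LET r = r + 3;
    ELSE LET r = 1;
  END CASE;
  RETURN r;
END PROCEDURE;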
New Mathematical Functions – Oracle Compatibility
 SIGN(num)
– Returns the sign of the input numeric value (returns integer):
• -1 for n < 0, 0 for n = 0, and 1 for n > 0
 COSH(num)
– Returns hyperbolic cosine (returns float)
 SINH(num)
– Returns hyperbolic sine (returns float)
 TANH(num)
– Returns hyperbolic tangent (returns float)
 ACOSH(num)
– Returns arc hyperbolic cosine (returns float)
 ASINH(num)
– Returns arc hyperbolic sine (returns float)
 ATANH(num)
– Returns arc hyperbolic tangent (returns float)
 LN(num)
– Alias for existing LOGN(num) function (returns float)
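A quick sketch exercising a few of these (any single-row query works; systables is used here only for convenience):

SELECT SIGN(-42), COSH(0), SINH(0), LN(EXP(1))
FROM systables WHERE tabid = 1;
-- expected: -1, 1.0, 0.0, 1.0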
Multiple distinct Operators
 Example
– Legal syntax
• Applied to each predicate
select avg(unique col1), count (distinct col2)
from my_tab;
– Illegal syntax
• Nested operators
select distinct avg(distinct col1) from my_tab;
SQL Enhancements – nulls first / last
 Examples (screen captures in the original deck):
– Default null sort order with the asc keyword vs. the modified null sort order
with the asc nulls last keywords
– Default null sort order with the desc keyword vs. the modified null sort order
with the desc nulls first keywords
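In SQL terms, a minimal sketch (table and column names hypothetical):

SELECT name, age FROM people ORDER BY age ASC NULLS LAST;
-- NULL ages now sort after the non-NULL values
SELECT name, age FROM people ORDER BY age DESC NULLS FIRST;
-- NULL ages now sort before the non-NULL values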
Virtual Table Interface – in Clause
 The in clause can be used with the
virtual table interface.
 VTI tables are commonly used with
Timeseries.
– select …. where meter_id in (5,10,30)
 Previously had to rewrite this type of
a query using a series of or’s in the
where clause.
– select ….. where meter_id = 5 or
meter_id = 10 ….
Application Development
LATERAL Correlated Subquery of Derived Tables Description
 Error Example (-999)
select *
from t1,
( select t2.a as t2_a from t2 where t2.a = t1.a)
 LATERAL Example
select *
from t1,
LATERAL ( select t2.a as t2_a from t2 where t2.a = t1.a)
App. Development - LATERAL Correlated Subquery of
Derived Tables - Why
 Compatibility with OData
– OData = Open Data Protocol
– Web protocol for data reads and writes
 Compatibility with other DBMS
– DB2
– SQL Server
– Oracle
– PostgreSQL
Administration Features - SERIAL In-Place ALTER
 In-place ALTER for SERIAL types
– Increased table availability
– Data converted as needed
– Less space required
– Reduces logging
 Supported conversion types
– Small-to-large or large-to-large
• serial to serial8
• serial to bigserial
• serial8 to bigserial
• bigserial to serial8
– Not large-to-small
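A sketch of the in-place conversion (table and column names hypothetical):

ALTER TABLE orders MODIFY (order_num BIGSERIAL);
-- serial to bigserial; existing rows are converted as they are touched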
NOVALIDATE
 Data Migration enhancement
– Eliminate time consuming constraint checking on foreign key constraints
– ALTER TABLE catalog ADD CONSTRAINT
(FOREIGN KEY (stock_num, manu_code) REFERENCES stock
ON DELETE CASCADE CONSTRAINT ab NOVALIDATE);
– SET CONSTRAINTS cons_child_x1 ENABLED NOVALIDATE;
 When migrating data with data loads and unloads, time can be
saved by not having to perform the foreign key constraint
checking.
 Since the data is coming from a known state, these can be
skipped.
New SQL - Quarter and Limit
 Both Informix and IWA support the new SQL functions QUARTER
and LIMIT SQL syntax.
 Enhanced support, performance and integration with BI tools and
time-based queries retrieving the quarter of a date or a datetime
 Enhanced DB portability, integration and performance when using BI
tools and applications that use this specific LIMIT syntax for
top/bottom N queries and preview data/results
– Similar results to the usage of SELECT FIRST or LAST
– Database compatibility with other DB products
New SQL - Quarter and Limit - Examples
 Quarter
SELECT QUARTER("orders"."ship_date") AS "qr_ship_date_ok",
SUM("orders"."sales") AS "sum_sales_qk“
FROM "informix"."orders" "orders"
INNER JOIN "informix"."returns" "returns" ON
("orders"."order_id" = "returns"."order_id")
INNER JOIN "informix"."users" "users" ON
("orders"."region" = "users"."region")
GROUP BY 1
 Limit
SELECT "order"."o_custid" AS "o_custid",
"order"."o_date" AS "o_date",
"order"."o_orderid" AS "o_orderid",
"order"."o_priority" AS “o_priority",
"order"."o_shipping" AS "o_shipping",
"order"."o_status" AS "o_status"
FROM "dwa"."order" "order"
LIMIT 10000
SQL Enhancements – Windows XML Enhancement
 The XML4C and XSLT4C libraries have been ported to Windows 64
enabling XML functionality on that port.
Questions
Informix Warehouse Accelerator
Informix Warehouse Accelerator - Cluster Environment
 IWA now has the ability to create an accelerator server across
multiple physical servers as a cluster
– An MPP implementation
– Enables the creation of significantly larger accelerators and marts
• No longer limited by the amount of memory a single server can support
(Diagram: SQL in, results out, across the accelerator cluster.)
Informix Warehouse Accelerator
(Diagram: SQL queries from apps reach Informix; the query router sends
qualifying queries over TCP/IP / DRDA to the accelerator and returns
results to the data warehouse application.)
Informix:
 Routes SQL queries to accelerator
 User need not change SQL or apps.
 Can always run query in Informix, e.g., if too short an est. execution time
Informix Warehouse Accelerator:
 Connects to Informix via TCP/IP & DRDA
 Analyzes, compresses, and loads
 Copy of (portion of) warehouse
 Processes routed SQL query and returns answer to Informix
Informix Warehouse Accelerator
 Partition Update: Example
Refreshing Data on IWA – Multiple Options
Methods                               Frequency of Deployment        Considerations
Complete Load                         Nightly or a small number      For initial load and for periodic
                                      of times per day               complete refresh of mart
Automatic or manual Partition Update  Hourly, nightly or on-demand   For partitioned FACT table;
                                                                     for non-partitioned or partitioned
                                                                     dimension tables
Trickle Feed                          Continuous                     For Inserts (Append) to FACT table
                                                                     and I/U/D for dimension tables;
                                                                     complete reload periodically
                                                                     following Trickle Feed
Informix Warehouse Accelerator
 Partition Update: Refresh data in a data mart at partition level
– Provides efficient refresh of data mart data
– Enables fast refresh of fact tables which are usually very large
• Dimension table (usually very small) full load refresh is very fast
– Refresh can be done on
• The entire table or
• The partition (or subset) that contains data updates
Informix Warehouse Accelerator – Trickle Feed
 The source database can now be configured to push new data into
the marts every N seconds without any further administrative
intervention
– Inserts only, deleted / modified fact data is not pushed into the mart
• See next slide for more details
– Is controlled by the ifx_setupTrickleFeed() function
• Available in the sysadmin database
– In an H/A cluster, this function can only be executed on the cluster primary
 Syntax
ifx_setupTrickleFeed(‘accel_name’, ‘mart_name’, buffer_time )
– Where
accel_name – the name of the accelerator which supports the mart
mart_name – the name of the data mart
buffer_time – the number of seconds between refreshes
1 – 8639999 (2,400 hours) – refresh both the fact and dimension tables
-1 – -8639999 (2,400 hours) – refresh just the fact table
 To use this feature, MAX_PDQPRIORITY in the source instance
must be set >= 50 for the initial data load and all subsequent loads
Informix Warehouse Accelerator
 New SQL Support for query acceleration
– Several SQL functions and expressions now support query acceleration.
– Example:
• COUNT (DISTINCT CASE …)
• Multiple DISTINCT with aggregates, such as COUNT (DISTINCT…), in a query
• LEN / LENGTH
• SUBSTR / SUBSTRING
• TODAY
• TRIM
• YEAR
• UNITS
• UNION, UNION ALL
• RANK, DENSE_RANK, DENSERANK, CUME_DIST, PERCENT_RANK, NTILE,
ROW_NUMBER, ROWNUMBER, FIRST, LAST, RATIO_TO_REPORT,
RATIOTOREPORT
Informix Warehouse Accelerator
 Additional platform support with Solaris Intel/AMD
– The accelerator itself is supported on Linux Intel/AMD 64-bit and now
also on Solaris x86 64-bit (new support)
– The Informix server paired with the accelerator is supported on 64-bit
AIX, HP IA, Linux Intel x86, Solaris SPARC, and Solaris x86
– Adds more cross platform integration
(Diagram: the optimizer decides per query whether to route it over
TCP/IP to the accelerator or run it locally on the data warehouse or
mixed OLTP/warehouse system.)
Informix Warehouse Accelerator
 Using the accelerator with secondary servers: Example
You can use IWA’s In-Memory Analytics to Speed Up
queries on…
– SQL data from local Informix tables
– NoSQL data, ex: JSON collections
– Views to local or remote tables
– Synonyms to Informix tables or NoSQL data
– External tables: ASCII/binary files in file system or network pipes
– Sensor data: TimeSeries data, time-stamped meter data
Informix Warehouse Accelerator
 Views & Synonyms
– IWA can now use views when defining a
datamart
– Create a view of multiple tables
– Local or remote by using a synonym
pointing to a remote Informix table
– Views could map to a subset of another
Informix table or also to NoSQL data
– Use views to accelerate self-joins
 Benefits
– Ability to combine and accelerate queries on a local server with remote
tables, no need to make all tables local
– Allows you to accelerate data in JSON collections by using views
IWA Data Mart Views - NoSQL query support & usage (1)
Example: Accelerating NoSQL data (in a JSON collection) using IWA’s view support
 From MongoDB shell:
– Create two collections (JSON): comments and users
$ mongo demo_database
MongoDB shell version: 2.4.9
connecting to: demo_database
mongos> db.comments.insert( [
{ uid:12345, pid:444, comment:"first" },
{ uid:12345, pid:888, comment:"second" },
{ uid:99999, pid:444, comment:"third" }
] )
mongos> db.users.insert( [
{ uid:12345, name:"john" },
{ uid:99999, name:"mia" }
] )
mongos> exit
 From Informix:
– Create a view on each JSON collection (comments, users):
$ dbaccess demo_database
> create view vcomments(uid,pid,comment) as
select
bson_value_int(data,'uid'),
bson_value_int(data,'pid'),
bson_value_varchar(data,'comment')
from comments;
> create view vusers(uid,name) as select
bson_value_int(data,'uid'),
bson_value_varchar(data,'name')
from users;
– Deploy an IWA data mart by probing a join between them:
set environment use_dwa 'probe cleanup';
set environment use_dwa 'probe start';
select {+ avoid_execute} * from vcomments c,
vusers u where c.uid=u.uid;
set environment use_dwa 'probe stop';
execute procedure ifx_probe2mart('demo_database','noSQL_mart');
execute function ifx_createmart('demo_dwa','noSQL_mart');
execute function ifx_loadmart('demo_dwa','noSQL_mart','NONE');
– Run the accelerated query.
IWA Data Mart Views - NoSQL query support & usage (2)
Example: Accelerating NoSQL data (in a JSON collection) using IWA’s view support
 From Informix:
– Run the accelerated query on the JSON collection data:
set environment use_dwa 'accelerate on';
select c.uid,name,comment
from vcomments c, vusers u
where c.uid=u.uid and pid=444;

uid     name   comment
12345   john   first
99999   mia    third
IWA data mart view support to accelerate SQL-NoSQL
query (1) – SQL/NoSQL joins
 From Informix:
 Create a view for the NoSQL data:
– We have an Informix table sqldoc1:
create table sqldoc1
( name varchar(10), value integer );
insert into sqldoc1 values ("John", 1);
insert into sqldoc1 values ("Scott", 2);
– We also have a JSON collection
doc1: In Informix using its JSON
compatibility or its MongoDB driver:
create table doc1
( c1 serial, data BSON );
Insert into doc1(c1,data) values
(0, '{ fname:"John", lname:"Miller", age:21,
address: { street:"Informix ave" } }'::JSON );
Insert into doc1(c1,data) values
(0, '{ fname:"Scott", lname:"Lashley",
age:21.50, address: { street:"Blazer ave" },
job:"Ref" }'::JSON );
create view viewdoc1 (c1,fname,lname,age,street) as
SELECT c1, bson_value_lvarchar(data,"fname"),
bson_value_lvarchar(data,"lname"),
bson_value_double(data,"age"),
bson_value_lvarchar(data,"address.street")
FROM doc1;

> select * from viewdoc1;

c1      1
fname   John
lname   Miller
age     21.00000000000
street  Informix ave

c1      2
fname   Scott
lname   Lashley
age     21.50000000000
street  Blazer ave

2 row(s) retrieved.
IWA data mart view support to accelerate SQL- NoSQL
query (2) – Query Acceleration
 Probe the query we want to
accelerate
– Use workload analysis to find datamart for a query
joining SQL table with NoSQL data (JSON collection)
select a.value,a.name,b.* from sqldoc1 a,
viewdoc1 b where a.value=b.c1
...
– Deploy datamart proposed:
<mart name="dm_sqlnosql1">
<table name="sqldoc1" schema="root" isFactTable="true">
<column name="name"/>
<column name="value"/>
</table>
<table name="viewdoc1" schema="root" isFactTable="false">
<column name="age"/>
<column name="c1"/>
<column name="fname"/>
<column name="lname"/>
<column name="street"/>
</table>
<reference
referenceType="LEFTOUTER"
isRuntimeJoin="true"
parentCardinality="n"
dependentCardinality="n"
dependentTableSchema="root"
dependentTableName="sqldoc1"
parentTableSchema="root"
parentTableName="viewdoc1">
<parentColumn name="c1"/>
<dependentColumn name="value"/>
</reference>
</mart>
</dwa:martModel>
 Run the query with acceleration.
set environment use_dwa '3';
Environment set.

select a.value,a.name,b.* from sqldoc1 a, viewdoc1 b
where a.value=b.c1 ;

value   1
name    John
c1      1
fname   John
lname   Miller
age     21.00000000000
street  Informix ave

value   2
name    Scott
c1      2
fname   Scott
lname   Lashley
age     21.50000000000
street  Blazer ave

Online.log:
01:22:11 SQDWA: select a.value,a.name,b.* from sqldoc1 a, viewdoc1 b where a.value=b.c
01:22:11 SQDWA: Identified 1 candidate AQTs for matching
01:22:11 SQDWA: matched: aqt46221ac7-8eae-4b9a-9275-bd00dcca357c
01:22:11 SQDWA: matching successful (17 msec)
01:22:11 SQDWA: offloading successful (3036 msec)
Informix Warehouse Accelerator
 Views do not maintain information about the number of “rows” in the
system catalogs.
 If the fact table is a view and the server is not able to identify it as a
fact table, force the optimizer directive using the FACT directive:
{+ FACT ... /probe … /build}
 If the view is chosen improperly as the probe or build, you can use the
/probe and /build options of the join-method optimizer directives.
New OLAP Functions
 Ranking functions
– RANK, DENSE_RANK, DENSERANK
– PERCENT_RANK, CUME_DIST, NTILE
 Numbering functions
– ROW_NUMBER, ROWNUMBER
 Aggregate functions
– SUM, COUNT, AVG, MIN, MAX
– STDEV, VARIANCE, RANGE
 FIRST_VALUE/LAST_VALUE functions
– FIRST_VALUE, LAST_VALUE
 LEAD/LAG functions
– LEAD, LAG
 RATIO_TO_REPORT function
– RATIO_TO_REPORT, RATIOTOREPORT
 All of these are accelerated by IWA
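As a sketch against the sales table used in the rank example below (columns assumed to be emp_num and sales), LAG and a running windowed SUM side by side:

select emp_num, sales,
       lag(sales) over (order by emp_num) as prev_sales,
       sum(sales) over (order by emp_num) as running_total
from sales;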
Example - Ranking functions
 Order by clause is required
 Window clause is not allowed
 Duplicate value handling is different between rank() and dense_rank()
– rank() - assigns the next relative ordinal to the next result set value
– dense_rank() - assigns the next absolute ordinal to the next result set value
(alias is denserank())
Rank Example
select emp_num, sales,
rank() over (order by sales) as rnk,
dense_rank() over (order by sales) as dense_rnk
from sales;
emp_num   sales   rnk   dense_rnk
101       2,000   1     1
102       2,400   2     2
103       2,400   2     2
104       2,500   4     3
105       2,500   4     3
106       2,650   6     4
Top-N Rank
 Use a derived table to obtain top-n rank
select *
from (select name, salary,
rank() over (order by salary desc) as rnk
from employee)
where rnk <= 5;
Rank in Sports Stats – All Time Baseball Home Runs
Rank   Player             Home Runs
11     Albert Pujols      581
12     Harmon Killebrew   573
13     Rafael Palmeiro    569
14     Reggie Jackson     563
15     Manny Ramirez      555
16     Mike Schmidt       548
17     Mickey Mantle      536
18     Jimmie Foxx        534
19     David Ortiz        528
20     Willie McCovey     521
       Frank Thomas       521
       Ted Williams       521
       Ernie Banks        512
       Eddie Mathews      512
….
 Current Home Run Hitters list, All Time in Major League Baseball
(partial list)
– Recently, the player David Ortiz moved past Willie McCovey and
others into 19th place all time.
– If he hits his 535th home run, he will rise into 18th place alone,
possibly this year
• He has announced retirement, end of this year.
• Everyone on this partial list, other than Pujols and Ortiz, is retired or
deceased.
– The current occupant of 18th place, Jimmie Foxx, will then be
in 19th place.
Rank in Sports Stats – All Time Women’s Soccer Goals
Rank   Name                  Goals
1      Abby Wambach          184
2      Christine Sinclair    164
3      Mia Hamm              158
4      Kristine Lilly        130
5      Birgit Prinz          128
6      Julie Fleeting        116
7      Patricia Panico       110
8      Elisabetta Vignotto   107
9      Sun Wen               106
10     Carolina Morace       105
10     Michelle Akers        105
12     Han Duan              101
12     Portia Modise         101
14     Marta                 100
14     Tiffeny Milbrett      100
 Sinclair, Panico and Marta are the only active players on this list.
Questions
Grid Queries
SQL Enhancements – Grid Queries
 In a Flexible Grid environment, it is possible for a cluster of
instances to contain the same dbspaces and tables but no
replicated data
– Common in retail and/or distribution environments where stores / warehouses
have different SKUs (products) and inventory levels
 Currently, to gather data from multiple instances requires executing
multiple SQL operations into a temporary table or joining the results
 Informix 12.10 introduces the grid query operation which, when
executed from an administrative node, will gather data from the
requested nodes with a single SQL statement
SQL Enhancements – Grid Queries
 Grid queries can be executed against the entire cluster or against a
subset of the instances
– These subsets are defined as “regions” but have nothing to do with geography
 The cdr define region command is used to define regions
– Syntax
cdr define region [--grid | -g ]=grid_name
region_name [list_of_instances]
region_name must be unique across all clusters
• Will be used by SQL statements and cdr commands so have to be able to resolve
to the correct region
list_of_instances is a whitespace separated list of server group names
SQL Enhancements – Grid Queries
 Example
cdr define region -g=my_grid region_1
g_inst_1 g_inst_2;
cdr define region --grid=my_grid region_2
g_inst_3 g_inst_4;
– Instances can be members of more than one region if needed
– Regions can contain a subset of instances of another region
• The great_west region can contain g_inst_1, g_inst_2, g_inst_3, g_inst_4
• The pac_coast region can contain g_inst_1, g_inst_3
SQL Enhancements – Grid Queries
 Regions cannot be modified; you have to drop and recreate them
– Use the cdr delete region command
cdr delete region region_1;
SQL Enhancements – Grid Queries
 After defining any regions that are needed, next step to executing
grid queries is to verify the consistency of the tables across the grid
– Ensures the table(s) schemas are identical across the grid
• All the columns and data types match so a consistent response to the query can
occur
– Prevents table alters from occurring unless executed through a grid operation
• Connect to the grid, execute alter, disconnect from the grid
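A minimal sketch of such a grid-routed alter (grid and table names hypothetical):

EXECUTE PROCEDURE ifx_grid_connect('my_grid');
ALTER TABLE customer ADD (loyalty_tier INT);
EXECUTE PROCEDURE ifx_grid_disconnect();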
SQL Enhancements – Grid Queries
 The cdr change gridtable command is used to identify tables for grid
query operations aka “gridtables”
– Syntax
cdr change gridtable [--grid | -g]=grid_name
[--database | -D]=database_name
[--add | -a | --delete | -d]
[--all | -A | list_of_tables ]
Where
[--add | -a | --delete | -d] – the following table(s) can be added or removed
from the gridtable list
[--all | -A | list_of_tables ] – either all or list of tables (white space separated)
SQL Enhancements – Grid Queries
 Examples
– Notice long and short versions of the commands are used
Add all ds2 database tables:
cdr change gridtable -g=my_grid -D=ds2 -a --all;
Delete the customer table from the gridtables list:
cdr change gridtable -g=my_grid -D=ds2 -d customer;
Add the my_tab_1 and my_tab_2 tables to the gridtables list:
cdr change gridtable --grid=my_grid --database=ds2 --add
my_tab_1 my_tab_2
SQL Enhancements – Grid Queries
 With any regions defined, and tables added to the gridtable list, can
now execute grid queries
 Restrictions
– Only query operations are supported
– Queries must be “simple”; they cannot contain subqueries, joins, unions or the
intersect, minus, or except qualifiers
• The grid query itself *can* be a subquery nested inside another query though
– Queries can only be executed against “normal” tables, not against views,
synonyms or external tables
• Exception is sysmaster tables which are allowed
– Queries can only be executed against data types that are supported by
distributed operations
• Excludes TimeSeries and any extended types that can’t be used in distributed
operations
SQL Enhancements – Grid Queries
 New syntax to the select operation – the grid clause
– Syntax at the statement level
select column_names
from table_names grid [all] [‘region_name’ | ‘grid_name’]
where . . . . .
– The optional all keyword determines whether or not all matching rows are
returned ( a union all qualifier) from all instances or just the unique values (a
basic union qualifier)
• Default is union or unique values
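A sketch of the clause in use (table, column, and region names hypothetical):

select sku, sum(quantity)
from inventory grid all 'region_1'
group by sku;
-- grid all returns every matching row from every node (union all semantics);
-- omit all to get only the distinct values (basic union semantics)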
SQL Enhancements – Grid Queries
 New syntax to the select operation – at the session level
– Rather than modifying existing statements, the scope and union qualifier can
be set at the session level
• With the set environment select_grid operation
set environment select_grid [‘grid_name’ | ‘region_name’]
Use the default union operator on the grid queries
set environment select_grid_all [‘grid_name’ | ‘region_name’]
Use the union all operator on the grid queries
SQL Enhancements – Grid Queries
 New syntax to the select operation – at the session level (cont.)
– Once set, all queries will execute as grid queries
– Only one default setting can be active at a time
• To change from one setting to the other re-execute the set environment command
– These settings can be changed for any individual statement(s) by including a
grid clause within the statement
SQL Enhancements – Grid Queries
 To turn off grid queries
– Use the option that matches the command enabling grid queries
set environment select_grid default;
set environment select_grid_all default;
– Individual statement(s) can still execute as a grid query by including the grid
clause in the statement(s)
SQL Enhancements – Grid Queries
 But what happens if a node within the grid / region is not available
when the grid query executes?
– It depends; it can either
• Abort the query and return an error
• Execute the query with the available nodes
– You can query later to see how many and which nodes didn’t participate
 Set at a session level with the set environment ifx_node_skip
operation
SQL Enhancements – Grid Queries
 Syntax
set environment ifx_node_skip [ default | off | on ]
– Where
default | off – the default setting. Query is aborted and an error returned
on – rows are returned from the nodes available at the time the query was executed
 When the environment is set to on, any skipped nodes are captured
and can be returned to the calling session
– Important – this information is gathered on a statement-by-statement basis
and only lasts until the next grid query is executed in that session
24
© 2014 IBM Corporation
SQL Enhancements – Grid Queries
 To determine how many nodes, if any, were skipped and their
names
execute function ifx_gridquery_skipped_nodes_count()
Returns an integer value with the number of skipped nodes
execute function ifx_gridquery_skipped_nodes()
Returns an lvarchar with the name of one of the skipped nodes
• To get all the node names, you will need to execute this the same number of times
as returned by ifx_gridquery_skipped_nodes_count()
 All names aren’t returned at once, to facilitate easier parsing of the result set
 The function operates like a fetch against a cursor
24
© 2014 IBM Corporation
SQL Enhancements – Grid Queries
 Example
set environment select_grid my_region;
set environment ifx_node_skip on;
select . . . . from . . . . .; {results returned}
execute function ifx_gridquery_skipped_nodes_count();
3
execute function ifx_gridquery_skipped_nodes();
g_inst_2
execute function ifx_gridquery_skipped_nodes();
g_inst_19
execute function ifx_gridquery_skipped_nodes();
g_inst_38
24
© 2014 IBM Corporation
SQL Enhancements – Grid Queries
 In the event a schema change needs to be made
– The change must be made through a grid operation as explained earlier
– A metadata flag will be set indicating an alter operation is in-flight
• Prevents any grid queries from executing
– As tables are being altered and acknowledged as completed, a cdr remaster
gridtable operation is automatically executed to re-verify consistency across
the cluster
– When the cdr remaster gridtable operation returns 0 (zero), the metadata
flag is removed and grid queries can be resumed
– The cdr remaster gridtable operation can be invoked by an administrator to
check the status of the alter operation (see the sketch below)
• Tells the administrator when grid query applications can be turned back on
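– A minimal status-check sketch; the --grid and --database options are assumed
to match those of cdr change gridtable and are not confirmed by this
presentation:
cdr remaster gridtable --grid=my_grid --database=ds2
• A return of 0 (zero) indicates remastering is complete and grid queries can
resume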
24
© 2014 IBM Corporation
Shard data across Enterprise Replication servers
 Horizontally partition (shard) data across multiple database servers
 Distribute performance horizontally across hardware
 Fast scalability with the cloud by adding more database server nodes
 Create a shard cluster using hash-based sharding or expression-based
sharding
[Diagram: an insert is routed to shard servers covering the East Coast, West
Coast, Midwest, and Southern regions]
cdr define shardCollection collection_1 db_1:joe.clients
--type=delete --key=state
--strategy=expression --versionCol=version
g_shard_server_A "IN ('WA','OR','ID')"
g_shard_server_B "IN ('CA','NV','UT','AZ')"
g_shard_server_C "IN ('TX','OK','NM','AR','LA')"
g_shard_server_D REMAINDER
248
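 For comparison, a hash-based sketch (the server names and the client_id
shard key are illustrative):
cdr define shardCollection collection_2 db_1:joe.clients
--type=delete --key=client_id
--strategy=hash --versionCol=version
g_shard_server_A g_shard_server_B g_shard_server_C g_shard_server_D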
© 2014 IBM Corporation
Connection Manager SLA’s aware of Secondary backlog
 More control of the Connection Manager’s (CM) SLA policy
 Scenarios:
– If a secondary is a specified number of logical log pages behind the primary,
the SLA can be instructed not to use it
• Until it has caught up
– When a secondary is taken down for hours for maintenance while work
continues on the primary
– When the secondary is brought back, it is out of sync with the primary
[Diagram: the Connection Manager deciding whether to route work to the
primary or to a lagging secondary]
 Modify the CM configuration file to specify how many logical log pages
behind the primary a secondary can be and still be used by the SLA:
SLA mySLA DBSERVERS=(HDR,RSS,SDS)
POLICY=SECAPPLYBACKLOG:500
24
© 2014 IBM Corporation
Improved memory pool allocation for ER
 Aids in efficient ER transaction throughput
 Better memory pool allocation handling for ER
 New ONCONFIG parameter CDR_MEM
– 0 - The traditional method of memory allocation
• Use this setting when resource allocation is more important than performance
– 1 - Prevents multiple threads from simultaneously accessing a memory pool
• In large-scale ER environments significant performance improvements occur,
because memory allocation is done by multiple threads that work in parallel
– 2 - Improves performance at the cost of increased memory usage
• Memory allocation requests are increased to the closest fixed-block size, so that
free memory blocks can be found faster.
• Memory pools are not associated with specific CPU virtual processors, so memory
can be freed directly to the memory pool.
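– A minimal ONCONFIG sketch (the value 1 is illustrative; choose the setting
that matches your workload):
CDR_MEM 1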
250
© 2015 IBM Corporation
Informix Extensibility - Spatial
© 2014 IBM Corporation
Geographic Transform
 The previous spatial data type transformation behavior was as follows:
– GPS data is WGS-1984; most countries use a different GEOGCS that is more
accurate for their part of the world (the USA tends to use NAD-1983 in
their State Plane PROJCS)
 The previous spatial data type could only transform a geometry value from
one spatial reference system to another spatial reference system if both
spatial reference systems shared the same Geographic Coordinate System
(GEOGCS)
[Diagram: PROJCS1 and PROJCS2 both anchored to a single shared GEOGCS]
25
© 2014 IBM Corporation
Geographic Transform
 Transform between two spatial reference systems
– The two spatial reference systems do not have to use the same GEOGCS
– Will transform between two GEOGCSs
[Diagram: PROJCS1 in GEOGCS1 transformed via GEOGTRANS to PROJCS2 in GEOGCS2]
 ST_Transform function
25
© 2014 IBM Corporation
ST_Transform()
 Transforms in the following cases between:
– Two UNKNOWN coordinate systems
• (The srtext column in the spatial_references table for both SRIDs contains
UNKNOWN)
– A projected coordinate system and an un-projected coordinate system
– Two projected coordinate systems
– Two coordinate systems that have different false origins or system units
 The GEOGCS of the source and target spatial reference systems do
not need to be the same.
 A spatial reference system in one GEOGCS can be transformed into a
spatial reference system in a different GEOGCS if the transform is
supported by the ESRI libraries.
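 A minimal sketch of the function (the sites table and its location column
are illustrative; 4326 is the WGS 1984 GEOGCS):
SELECT ST_Transform(location, 4326)
FROM sites
WHERE site_id = 7;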
25
© 2014 IBM Corporation
Great Circle Calculations
 Supports the SQL/MM/OGC linear distance features with GEOGCS-based
spatial references
 Great circle calculations work with angular coordinates (i.e.,
latitude/longitude) in a GEOGCS and calculate the linear distance across
the ellipsoid.
 Example GEOGCS:
– GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",
SPHEROID["WGS_1984",6378137,
298.257223563]],PRIMEM["Greenwich",0],UNIT["Degre
e",0.0174532925199433]]
 For customers using the previous Geodetic product
in their applications, this provides a way to reuse
great circle calculations as part of the IDS provided
Spatial feature.
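 A minimal great circle sketch (the lon/lat point values are illustrative;
the distance comes back in the spheroid’s linear units, here meters, under
that assumption):
SELECT ST_Distance(
ST_PointFromText('point (-73.98 40.75)', 4326),
ST_PointFromText('point (-0.12 51.50)', 4326))
FROM systables WHERE tabid = 1;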
25
© 2014 IBM Corporation
Geospatial General Enhancements
 The false origin and scale values for the system are derived from the
Projected Bounds of the underlying Coordinate System.
 For example, EPSG 2193 (NZGD 2000 New Zealand Transverse
Mercator) derives the falsex, falsey and xyunits values based on the
extent of the projection system:
– falsex = -6020520.00000
– falsey = -1997720.00000
– xyunits = 375371289.4930
 The z and m coordinates for all predefined spatial reference systems
have the following values in the following columns:
– falsez = -50000.0
– zunits = 1000.0
– falsem = -1000.0
– munits = 1000.0
25
© 2014 IBM Corporation
Geospatial General Enhancements
 SDE 10.2 libraries (ESRI)
 Added 1000+ predefined Spatial Reference Systems:
Authority   Authority ID’s (SRID)   Description
EPSG        2000 - 2999             Projection systems defined by the EPSG
EPSG        46008                   GEOGCS NAD 1927
EPSG        4759                    GEOGCS NAD 1983_NSRS2007
EPSG        4322                    GEOGCS WGS 1972
EPSG        4326                    GEOGCS WGS 1984
EPSG        4760                    GEOGCS WGS 1966
EPSG        32601 - 32760           WGS 1984 UTM Zones
ESRI        54001 - 54004,          Various world projection systems
            54008 - 54019,
            54021 - 54032,
            54034,
            54042 - 54046,
            54048 - 54053
25
© 2014 IBM Corporation
spatial_references table
 The spatial_references table stores data about each map projection
that you use to store the spherical geometry of the Earth, for example,
your data might use the Mercator projection.
 The table contains a unique spatial reference ID (SRID) for each spatial
reference system.
 All spatial reference systems that you use in your database must have
a record in this table. All geometries in a spatial column must use the
same spatial reference system.
 The Informix spatial functions use the parameters of a spatial
reference system to translate and scale each floating point coordinate
of the geometry into 54-bit positive integers before storage (a worked
example follows below).
– When retrieved, the coordinates are restored to their external floating point
format.
 Appendix C has a schema of the table.
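 A worked example of that scaling, using the falsex = -180 and
xyunits = 1000000 sample values from Appendix C:
stored_x = trunc((x - falsex) * xyunits + 0.5)
= trunc((-73.98 - (-180)) * 1000000 + 0.5)
= 106020000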
25
© 2014 IBM Corporation
Spatial Reference - Creation samples
 Insert a spatial reference system into the spatial_references table:
– INSERT INTO spatial_references (srid, description, auth_name, auth_srid,
falsex, falsey, xyunits, falsez, zunits, falsem, munits, srtext)
– VALUES (1, NULL, NULL, NULL, 0, 0, 100, 0, 1, 0, 1, 'UNKNOWN');
• In this example, the spatial reference system has an SRID value of 1, a false X, Y of
(0,0), and its system units are 100.
• The Z coordinate and measure offsets are 0, while the Z coordinate and measure
units are 1.
 Create a spatial reference system with the SE_CreateSRID() function
– For the hazardous sites and sensitive areas example, the coordinates are in a
local countywide XY coordinate system.
– The X and Y coordinates range from 0 - 250 000.
– Create a new spatial reference system with the SRID of 5:
– EXECUTE FUNCTION SE_CreateSRID(0, 0, 250000, 250000, "Springfield
county XY coord system");
– The function returns the new SRID:
(expression)
5
25
© 2014 IBM Corporation
Links, Helpful Information and Resources
 Informix CTP wiki
 Informix Developer Edition free download:
– http://www-01.ibm.com/software/data/informix/downloads.html
 Informix Virtual Appliance
 MongoDB site
 MongoDB download: http://www.mongodb.org/downloads
 MongoDB drivers
 John Miller’s Blog
 John Miller’s NoSQL Chat With The Labs session
 Keshava Murthy’s Blog
 Informix NoSQL Whitepaper
 ibmnosql.com
26
© 2014 IBM Corporation
Questions
26
© 2014 IBM Corporation
Appendix A – New Event Alarms
© 2014 IBM Corporation
Appendix A – New Event Alarms
Class Id: 24   Event ID: 24013
Internal Name: ALRMU_85_AUTOTUNE_DAEMON_FAIL
Event Description: Auto Tune daemon has insufficient resources to perform
tuning operations.
Online log: Performance Advisory
User action: Restart the server with more resources

Class Id: 24   Event ID: 24014
Internal Name: ALRMU_85_AUTOTUNE_ADD_CPUVP_FAIL
Event Description: Auto tuning failed to start a CPU VP
Online log: Performance Advisory
User action: Restart the server with more resources
26
© 2014 IBM Corporation
Appendix A – New Event Alarms (cont’d)
Class Id: 24   Event ID: 24015
Internal Name: ALRMU_24_ADD_BP_FAIL
Event Description: Auto tuning was unable to extend the buffer pool due to
insufficient resources.
Online log: Performance Advisory
User action: Restart the server with more resources

Class Id: 85   Event ID: 85001
Internal Name: ALRMU_85_OVER_LLOG
Event Description: Auto tuning failed to add another logical log, because
adding another log would exceed the maximum log space as defined by the
configuration parameter AUTO_LLOG.
Online log: Performance Advisory
User action: Increase the maximum amount of log space by changing the
AUTO_LLOG configuration parameter.
26
© 2014 IBM Corporation
Appendix A – New Event Alarms (cont’d)
Class Id: 85   Event ID: 85002
Internal Name: ALRMU_85_OVER_BPOOL
Event Description: Auto tuning failed to extend a bufferpool, because the
buffer pool would exceed the maximum amount of memory or extensions as
defined by the configuration parameter BUFFERPOOL.
Online log: Performance Advisory
User action: Increase the maximum amount defined by the configuration
parameter BUFFERPOOL.
26
© 2014 IBM Corporation
Appendix A – New Event Alarms (cont’d)
Class Id: 85   Event ID: 85003
Internal Name: ALRMU_85_OVER_CPU
Event Description: Auto tuning failed to add a CPU VP because another CPU VP
would exceed the maximum number specified in the configuration parameter
VPCLASS or exceed the number of processors on the computer.
User action: If there are more processors available on the computer, increase
the maximum number of CPU VPs allowed by changing the VPCLASS configuration
parameter.
26
© 2014 IBM Corporation
Other Enterprise Replication (ER) improvements
 ER now supports Hertz and Compressed timeseries
 New Event alarm for blocked replicate transactions
– New event alarm 33003 appears
• If ER transactions are blocked when a replicated table is in alter mode
268
© 2015 IBM Corporation
Appendix B – Sysadmin Tenant Catalog Table
© 2014 IBM Corporation
Appendix B – Sysadmin:tenant Table
Column: tenant_id
Type: int
Description: The unique ID of the tenant database.

Column: tenant_dbsname
Type: varchar(128)
Description: The name of the tenant database.

Column: tenant_resources
Type: bson
Description: The properties of the tenant database and the state of the
tenant. Cast this column to JSON (“::JSON”) to view the information.

Column: tenant_last_updated
Type: datetime year to second
Description: The time stamp of the last configuration change to the tenant
database.

Column: tenant_comment
Type: lvarchar(2048)
Description: Comments about the tenant database.
Appendix C – Spatial References Table
Scott Pickett
WW Informix Technical Sales
For questions about this presentation, email to: [email protected]
© 2014 IBM Corporation
Appendix C - spatial_references table
srid (INTEGER NOT NULL) – example: 12345
Primary key: the unique key for the record that describes a particular
spatial reference system
description (VARCHAR(64)) – example: WGS 1984
A text description of the spatial reference system
auth_name (VARCHAR(255)) – example: EPSG
The name of the standard or standards body cited for the reference system
auth_srid (INTEGER) – example: 4326
The ID of the spatial reference system as defined by the authority cited in
the auth_name column
falsex (FLOAT NOT NULL) – example: -180
The external floating point X coordinates are converted to integers for
storage in the database by subtracting the falsex value
falsey (FLOAT NOT NULL) – example: -90
The external floating point Y coordinates are converted to integers for
storage in the database by subtracting the falsey value
xyunits (FLOAT NOT NULL) – example: 1000000
Before the external floating point X and Y coordinates are inserted into the
database, the coordinates are scaled by the value in xyunits. The scaling
process adds a half unit and truncates the remainder
falsez (FLOAT NOT NULL) – example: -1000
The external floating point Z coordinates are converted to integers for
storage in the database by subtracting the falsez value
zunits (FLOAT NOT NULL) – example: 1000
A factor that is used to scale the Z coordinate
falsem (FLOAT NOT NULL) – example: -1000
The external floating point M coordinates are converted to integers for
storage in the database by subtracting the falsem value
munits (FLOAT NOT NULL) – example: 1000
A factor that is used to scale the measure values
srtext (CHAR(2048)) – example: GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",
SPHEROID["WGS_1984",6378137,298.257223563]],PRIMEM["Greenwich",0],
UNIT["Degree",0.0174532925199433]]
The well-known text (WKT) representation of the spatial reference system
27
© 2014 IBM Corporation