
Series of lectures
“Telecommunication networks”
Lecture #10
Concluding session,
part II
Instructor: Prof. Nikolay Sokolov, e-mail: [email protected]
The Bonch-Bruevich Saint-Petersburg State
University of Telecommunications
New problems concerning throughput
Last century:
We had to have 3.4 kHz for telephony (F1), 15 kHz for sound
broadcasting (F2), and 8 MHz for TV broadcasting (F3).
So, the total bandwidth with N1 channels for telephony, N2
channels for sound broadcasting, and N3 channels for TV
broadcasting can be calculated by the following formula:
N1·F1 + N2·F2 + N3·F3.
Current century:
We have to have 64 kbit/s for telephony (B1), from 64 kbit/s to
2 Mbit/s for sound broadcasting (B2), from 2 Mbit/s to 30 Mbit/s
for TV broadcasting (B3), and from 2 Mbit/s to 100
Mbit/s for data transmission (B4).
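As a sketch, the two totals can be compared numerically. The channel counts N1–N4 below are invented for illustration, and the digital side takes the lower bound of each quoted range:

```python
# Hypothetical channel counts (assumptions, not from the lecture).
N1, N2, N3 = 10_000, 50, 20          # telephony, sound broadcasting, TV

# Last century: total analogue bandwidth N1*F1 + N2*F2 + N3*F3, in kHz.
F1, F2, F3 = 3.4, 15.0, 8_000.0      # kHz (8 MHz = 8000 kHz)
total_khz = N1 * F1 + N2 * F2 + N3 * F3
print(f"Total analogue bandwidth: {total_khz / 1000:.1f} MHz")

# Current century: the same services expressed as bit rates, in kbit/s,
# using the lower bound of each range quoted above.
B1, B2, B3, B4 = 64, 64, 2_000, 2_000   # kbit/s
N4 = 100                                # data-transmission channels (assumed)
total_kbit = N1 * B1 + N2 * B2 + N3 * B3 + N4 * B4
print(f"Total digital throughput: {total_kbit / 1e6:.2f} Gbit/s")
```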
Definitions related to QoS
In Recommendation E.800 and a number of other ITU-T documents,
several similar definitions of the term "quality of service" are formulated:
1. Totality of characteristics of a telecommunications service that bear on its
ability to satisfy stated and implied needs of the user of the service (E.800).
2. The collective effect of service performance which determines the degree of
satisfaction of a user of a service. It is characterised by the combined aspects of
performance factors applicable to all services, such as: service operability
performance; service accessibility performance; service retainability
performance; service integrity performance; and other factors specific to
each service (Q.1741).
3. The collective effect of service performances which determines the degree of
satisfaction of a user of the service (Y.101).
4. The collective effect of service performance which determines the degree of
satisfaction of a user of a service. It is characterized by the combined aspects
of performance factors applicable to all services, such as bandwidth, latency,
jitter, traffic loss, etc. (Q.1703).
Recommendation ITU-T E.800 (1)
Recommendation ITU-T E.800 (2)
Recommendation ITU-T E.800 (3)
Recommendation ITU-T E.800 (4)
Quality of service in PSTN
[Figure: an end-to-end PSTN connection spanning Providers A, B and C; each provider's segment contributes loss probability, mean delay, noise, etc. to the overall quality of service.]
Problem of the transition to NGN
PSTN operators should find a viable strategy for the transition
to NGN which protects the investments made in circuit-switched technology.
Source: B. Jacobs. Economics of NGN deployment scenarios: discussions of
migration strategies for voice carriers. – www.ieee.org.
It is necessary to combine the PSTN's quality of service
with the economic efficiency of IP technologies!
QoS aspect: time irreversibility
Speech quality impairment compensation in
networks with circuit switching:
- elaboration of new speech signal processing algorithms;
- signal amplification (when necessary).
Speech quality impairment compensation in IP
networks under conditions of excessive packet
transfer delay: impossible in principle!
Forecast (XV century)
Leonardo da Vinci
“The time will come when people from the most distant countries
will speak to one another and answer one another”.
“Inaccurate” Predictions
“This ‘telephone’ has too many shortcomings to be
seriously considered as a means of communication. The
device is inherently of no value to us”
Western Union internal memo, 1876
“I think there is a world market for maybe 5 computers”
Thomas Watson, Chairman of IBM, 1943
“There is no reason anyone would want a computer in their
home ”
Ken Olson, President, Chairman & Founder
Digital Equipment Corporation, 1977
“640K ought to be enough memory for anybody”
Bill Gates, Microsoft, 1981
Conclusion: “Prediction is very difficult, especially if it's about
the future”. (Niels Bohr, Nobel Prize in Physics, 1922.)
Two examples of trend extrapolations
Extrapolation of Trends
[Figure: trends for functions F1(t) and F2(t) plotted against time, with points tr, t0 and tf marked on the time axis.]
Forecasting of the future and the past
[Figure: behavior of the investigated process F(t) over years 0–13 for three ensembles {X1}, {X2} and {X3}.]
Forecasting related to access networks
[Figure: forecast of the number of households (millions, 0–40) from 2002 to 2012, split into households with narrow-band lines (60%), only mobile (20%) and only broadband.]
Jipp curve (1)
Jipp curve is a term for a graph plotting the number (density) of
telephones against wealth as measured by the Gross Domestic Product
(GDP) per capita. The Jipp curve shows across countries that teledensity
increases with an increase in wealth or economic development (positive
correlation), especially beyond a certain income. In other words, a
country's telephone penetration is proportional to its population's buying
power. The relationship is sometimes also termed Jipp Law or Jipp's Law.
The Jipp curve has been called "probably the most familiar diagram in the
economics of telecommunications". The curve is named after A. Jipp, who
was one of the first researchers to publish about the relationship in 1963.
The number of telephones was traditionally measured by the number of
landlines, but more recently, mobile phones have been used for the graphs
as well. It has even been argued that the Jipp curve (or rather its
measures) should be adjusted for countries where mobile phones are more
common than landlines, namely for developing countries in Africa.
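The semi-log relationship behind the Jipp curve can be sketched with a least-squares fit of teledensity against log GDP per capita. The data points below are invented for illustration, not real country statistics:

```python
import math

# Synthetic (invented) points: GDP per capita (USD) vs. teledensity
# (telephones per 100 inhabitants). A real study would use country data.
gdp = [500, 1_000, 5_000, 10_000, 20_000, 40_000]
teledensity = [1.0, 3.0, 15.0, 30.0, 55.0, 90.0]

# Least-squares fit of teledensity = a + b*ln(GDP), the usual semi-log
# form in which the Jipp curve is plotted.
x = [math.log(g) for g in gdp]
n = len(x)
mx = sum(x) / n
my = sum(teledensity) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, teledensity)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"teledensity ≈ {a:.1f} + {b:.1f}·ln(GDP)")   # b > 0: positive correlation
```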
Jipp curve (2)
Jipp curve (3)
Classifications of clients (1)
[Figure: a) ranking of clients by level of income — five 20% groups of clients accounting for income shares X1%–X5%; b) ranking of clients by time of service adoption — innovators, early adopters, early majority, late majority, laggards.]
Classifications of clients (2)
Source: Telcordia Technologies
NGN as economical solution
An increase in a communication operator's revenues is possible by
solving two important problems. First, independently or with the
assistance of service providers, it is expedient to occupy another
niche implicitly related to the telecommunications business: information
services, which in the long run will increase operators' revenues.
Second, a revenue increase can be achieved by minimizing expenses. In
this instance the matter concerns optimal ways of developing the
infocommunication system and perfecting maintenance processes, whose
efficiency to a great extent determines the level of operational
expenses on system management.
From the economic point of view, the NGN concept can be considered
as the fulfilment of new requirements of potential clients at the
expense of a comparatively slight increase of CAPEX with an essential
decrease of OPEX.
Net present value (NPV)
This index allows finding the correlation between
investments and future income.
The input cash flow CFin(t) is directed towards
network modernization, which can be considered an
investment project. As a result, the output flow CFout(t) is
generated.
[Figure: CFin(t) → network modernization (investment process) → CFout(t).]
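A minimal NPV sketch: discounting the combined cash flow (investment CFin as negative values, revenue CFout as positive) at an assumed rate gives the net present value. The figures below are illustrative:

```python
# NPV of a cash-flow series at t = 0, 1, 2, ... years.
def npv(rate, cash_flows):
    """Sum of cash flows discounted at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: modernization investment; years 1..5: net revenue (invented numbers).
flows = [-100.0, 30.0, 30.0, 30.0, 30.0, 30.0]
print(f"NPV at 10%: {npv(0.10, flows):.2f}")   # positive, so the project pays off
```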
Examples of sensitivity analysis
[Figure: two plots of NPV as a function of parameters x and y, each varied between its minimum (xMIN, yMIN), nominal (x0, y0) and maximum (xMAX, yMAX) values.]
Example of NPV (1)
[Figure: NPV(t) over the project lifetime — network creation, modernization and implementation phases — with the payback period marked.]
Example of NPV (2)
Source: ITU
Typical network planning tasks
Network planning processes
Three kinds of planning
Technical, business and operational plans
Change of the network structure
during the time period
[Figure: change of the network structure over time — at time T1, districts 1–3 are served by central offices CO1–CO3; by time T2, additional central offices CO4–CO6 appear.]
Two variants of operating network
structure modification
[Figure: two variants of modifying the operating network structure — keeping the same network structure (districts 1–3 served by CO1–CO3) versus a new network structure with additional central offices CO4 and CO5 serving new districts 4 and 5.]
Access network modernization
[Figure: access network modernization — main cables from the main distribution frame to distribution cabinets, links between cabinets, and distribution cables towards subscribers, with segments X1 and X2 marked.]
Classification of the queueing models (1)
In 1961 D. G. Kendall introduced the following notation for queueing
models: A/B/n. Symbol A denotes the arrival process, symbol B the
service (holding) time distribution, and n the number of servers.
For a complete specification of a queueing system more information is
required, so Kendall's notation was extended:
A/B/n/K/S/X,
(14.1)
where:
- K is the total capacity of the system (alternatively, only the number of
waiting positions),
- S is the number of customers,
- X is the queueing discipline.
In the first position of classification (14.1), symbol M is used most often;
it means that the incoming flow is a Poisson process. For more
complicated models, symbols GI (general independent time intervals,
renewal arrival process) and G (general, arbitrary distribution of time
intervals, possibly including correlation) are used.
Classification of the queueing models (2)
In the second position one of the following symbols is usually
used: M (exponential distribution of service time), D (constant
service time), Ek (Erlang-k distribution of service time), Hn
(hyper-exponential distribution of order n), G (arbitrary
distribution of service time). Occasionally other symbols occur.
In a queueing system, demands can be served according to many
different principles. The usually applied disciplines are as follows:
- FCFS: first come, first served (also denoted FIFO: first in, first out),
- LCFS: last come, first served,
- SIRO: service in random order,
- SJF: shortest job first.
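For the most common case in this notation, M/M/1 (Poisson arrivals, exponential service, one server), the standard closed-form results can be sketched as follows; the arrival and service rates are assumptions for illustration:

```python
# Standard M/M/1 results, valid when utilization rho = lam/mu < 1.
def mm1_metrics(lam, mu):
    """Return (utilization, mean number in system, mean time in system,
    mean waiting time in queue) for an M/M/1 queue."""
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of demands in the system
    W = 1 / (mu - lam)             # mean time a demand spends in the system
    Wq = rho / (mu - lam)          # mean waiting time before service starts
    return rho, L, W, Wq

# Illustrative rates: 8 arrivals/s against a 10 demands/s server.
rho, L, W, Wq = mm1_metrics(lam=8.0, mu=10.0)
print(f"rho={rho:.2f}  L={L:.2f}  W={W:.2f}s  Wq={Wq:.2f}s")
```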
Classification of the queueing models (3)
In some cases, demands are divided into N priority classes.
There is a fundamental difference between two kinds of priorities:
non-preemptive and preemptive. Under the first discipline, a newly
arriving demand with higher priority than the demand being served
waits until a server becomes idle (and all demands of higher
priority have been served). This discipline is also called
HOL: head-of-the-line. Under the second discipline, a demand
being served is interrupted when a demand of higher priority
arrives. Usually three ways of serving the interrupted call
are distinguished:
1. preemptive resume (the service is continued from where it
was interrupted),
2. preemptive without re-sampling (the service restarts from
the beginning with the same service time),
3. preemptive with re-sampling (the service restarts with
a new service time).
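The HOL (non-preemptive) ordering can be sketched with a priority queue: waiting demands are taken in priority order, ties broken FCFS, and a demand in service is never interrupted. The demand names and priorities below are invented:

```python
import heapq

# Lower number = higher priority; seq preserves FCFS order within a class.
queue = []
for seq, (name, prio) in enumerate([("a", 2), ("b", 1), ("c", 2), ("d", 1)]):
    heapq.heappush(queue, (prio, seq, name))

# Serve the waiting demands: high-priority b and d overtake a and c,
# but nothing already in service would ever be preempted.
served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)
```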
Main algorithm of the forecasting
[Flow chart: problem statement → information gathering → choice of methodology → forecasting → analysis of the results → usage of the forecast → decision making.]
Possible approaches
Another classification:
- objective methods (quantitative forecasting methods)
- subjective methods (qualitative forecasting methods)
- combined methods
Considered objects
Network and its attributes
Services and corresponding traffic (some parameters)
QoS (including dependability)
Capacity of the trunks
Throughput of the switches
Forecasting and time
[Table: main attributes (QoS parameters; services and traffic; throughput and capacity) mapped to short-term, medium-term and long-term forecasts.]
Forecasting and lifetime
Lifetime of the switching equipment
Lifetime of the access network
Lifetime of the terminals
Post-NGN time
Methods of the forecasting (1)
Genius forecasting – This method is based on a combination of
intuition, insight, and luck. Psychics and crystal ball readers are the
most extreme case of genius forecasting. Their forecasts are based
exclusively on intuition. Science fiction writers have sometimes
described new technologies with uncanny accuracy.
There are many examples where men and women have been
remarkably successful at predicting the future. There are also many
examples of wrong forecasts. The weakness of genius forecasting is
that it is impossible to recognize a good forecast until the forecast has
come to pass.
Some psychic individuals are capable of producing consistently
accurate forecasts. Mainstream science generally ignores this
because the implications are simply too difficult to accept. Our
current understanding of reality is not adequate to explain these
phenomena.
Methods of the forecasting (2a)
Trend extrapolation – These methods examine trends and
cycles in historical data, and then use mathematical techniques
to extrapolate to the future. The assumption of all these
techniques is that the forces responsible for creating the past,
will continue to operate in the future. This is often a valid
assumption when forecasting short term horizons, but it falls
short when creating medium and long term forecasts. The
further out we attempt to forecast, the less certain we become of
the forecast.
There are many mathematical models for forecasting trends and
cycles. Choosing an appropriate model for a particular
forecasting application depends on the historical data. The study
of the historical data is called exploratory data analysis; its
purpose is to identify the trends and cycles in the data so that
an appropriate model can be chosen.
Methods of the forecasting (2b)
The most common mathematical models involve various forms of
weighted smoothing methods. Another type of model is known as
decomposition. This technique mathematically separates the
historical data into trend, seasonal and random components. A
process known as a "turning point analysis" is used to produce
forecasts. ARIMA models such as adaptive filtering and Box-Jenkins
analysis constitute a third class of mathematical model, while simple
linear regression and curve fitting is a fourth.
The common feature of these mathematical models is that historical
data are the only input for producing a forecast. One might think,
then, that if two people use the same model on the same data, their
forecasts will also be the same, but this is not necessarily the case.
Mathematical models involve smoothing constants, coefficients and
other parameters that must be decided by the forecaster. To a large
degree, the choice of these parameters determines the forecast.
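One of the weighted smoothing methods mentioned above, simple exponential smoothing, illustrates this point: the same data with different smoothing constants gives different forecasts. The history below is invented:

```python
# Simple exponential smoothing: the next-period forecast is the final
# smoothed level. The smoothing constant alpha is the forecaster's choice.
def ses_forecast(series, alpha):
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

data = [100, 104, 101, 107, 110, 108]   # illustrative history
for alpha in (0.2, 0.8):
    # Small alpha smooths heavily; large alpha tracks recent values.
    print(f"alpha={alpha}: forecast {ses_forecast(data, alpha):.1f}")
```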
Methods of the forecasting (3a)
Consensus methods – Forecasting complex systems often
involves seeking expert opinions from more than one person.
Each is an expert in his own discipline, and it is through the
synthesis of these opinions that a final forecast is obtained.
One method of arriving at a consensus forecast would be to put
all the experts in a room and let them "argue it out". This
method falls short because the situation is often controlled by
those individuals that have the best group interaction and
persuasion skills.
Methods of the forecasting (3b)
A better method is known as the Delphi technique. This method seeks
to rectify the problems of face-to-face confrontation in the group, so the
responses and respondents remain anonymous. The classical technique
proceeds in a well-defined sequence. In the first round, the participants
are asked to write their predictions. Their responses are collated and a
copy is given to each of the participants. The participants are asked to
comment on extreme views and to defend or modify their original
opinion based on what the other participants have written. Again, the
answers are collated and fed back to the participants. In the final round,
participants are asked to reassess their original opinion in view of those
presented by other participants.
The Delphi method generally produces a rapid narrowing of opinions. It
provides more accurate forecasts than group discussions. Furthermore, a
face-to-face discussion following the application of the Delphi method
generally degrades accuracy.
Methods of the forecasting (4)
Simulation methods – Simulation methods involve using analogs
to model complex systems. These analogs can take on several forms.
A mechanical analog might be a wind tunnel for modeling aircraft
performance. An equation to predict an economic measure would be
a mathematical analog. A metaphorical analog could involve using
the growth of a bacterial colony to describe human population
growth. Game analogs are used where the interactions of the players
are symbolic of social interactions.
Mathematical analogs are of particular importance to futures
research. They have been extremely successful in many forecasting
applications, especially in the physical sciences. In the social
sciences however, their accuracy is somewhat diminished. The
extraordinary complexity of social systems makes it difficult to
include all the relevant factors in any model.
Methods of the forecasting (5)
Scenario – The scenario is a narrative forecast that describes a potential
course of events. Like the cross-impact matrix method, it recognizes the
interrelationships of system components. The scenario describes the
impact on the other components and the system as a whole. It is a
"script" for defining the particulars of an uncertain future.
Scenarios consider events such as new technology, population shifts, and
changing consumer preferences. Scenarios are written as long-term
predictions of the future. A most likely scenario is usually written, along
with at least one optimistic and one pessimistic scenario. The primary
purpose of a scenario is to provoke thinking of decision makers who can
then posture themselves for the fulfillment of the scenario(s). The three
scenarios force decision makers to ask: 1) Can we survive the
pessimistic scenario, 2) Are we happy with the most likely scenario, and
3) Are we ready to take advantage of the optimistic scenario?
Methods of the forecasting (6)
Decision trees – Decision trees originally evolved as graphical devices
to help illustrate the structural relationships between alternative
choices. These trees were originally presented as a series of yes/no
(dichotomous) choices. As our understanding of feedback loops
improved, decision trees became more complex. Their structure
became the foundation of computer flow charts.
Computer technology has made it possible to create very complex
decision trees consisting of many subsystems and feedback loops.
Decisions are no longer limited to dichotomies; they now involve
assigning probabilities to the likelihood of any particular path. Decision
theory is based on the concept that an expected value of a discrete
variable can be calculated as the average value for that variable. The
expected value is especially useful for decision makers because it
represents the most likely value based on the probabilities of the
distribution function.
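The expected-value calculation at a chance node of a decision tree can be sketched as follows; the branch names, probabilities and payoffs are invented for illustration:

```python
# Expected value of a discrete outcome variable: the probability-weighted
# average over the branches leaving a chance node.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# Two hypothetical decision branches, payoffs in arbitrary money units.
deploy = expected_value([(0.6, 120.0), (0.4, -30.0)])   # "deploy NGN"
wait = expected_value([(1.0, 20.0)])                    # "do nothing"
best = max(("deploy", deploy), ("wait", wait), key=lambda kv: kv[1])
print(best)
```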
Methods of the forecasting (7)
Combining Forecasts
It seems clear that no forecasting technique is appropriate for all situations.
There is substantial evidence to demonstrate that combining individual
forecasts produces gains in forecasting accuracy. There is also evidence that
adding quantitative forecasts to qualitative forecasts reduces accuracy.
Research has not yet revealed the conditions or methods for the optimal
combinations of forecasts.
Judgmental forecasting usually involves combining forecasts from more than
one source. Informed forecasting begins with a set of key assumptions and
then uses a combination of historical data and expert opinions. Involved
forecasting seeks the opinions of all those directly affected by the forecast
(e.g., the sales force would be included in the forecasting process). These
techniques generally produce higher quality forecasts than can be attained
from a single source.
Combining forecasts provides us with a way to compensate for deficiencies in
a forecasting technique. By selecting complementary methods, the
shortcomings of one technique can be offset by the advantages of another.
Example of Delphi technique
The question is: how many Y-terminals will be installed by the year 2000?
1. Ten experts sent the following estimates: 1.0 million (5 opinions), 1.2 million (3 opinions), 1.4 million (2 opinions).
2. The mean value is N = (1.0·5 + 1.2·3 + 1.4·2) / 10 = 1.14.
3. The variance is σ² = [(1.0 − 1.14)²·5 + (1.2 − 1.14)²·3 + (1.4 − 1.14)²·2] / 10 = 0.0244.
4. The coefficient of variation is k = σ / N = 0.137.
Conclusion: the forecast is stable.
Dependability
Strictly speaking, dependability should be considered one of the
aspects of quality. Nevertheless, some specialists treat
dependability as an independent term with the same status as
quality. Dependability is the property of an object to retain
over time, within specified limits, the values of all parameters
that characterize its capability to perform the required functions
in the regimes and conditions of application, technical
maintenance, repair, storage and transportation predetermined for
that object. Obviously, there is no sense in speaking about an
object's dependability during the periods when it is withdrawn
from operation for scheduled inspections, modernization and other
procedures.
Dependability vs cost
Dependability and type of service
Statistics of the dependability
[Figure: the bathtub curve of failure intensity λ(t) — sudden failures in the infant-mortality region (up to t1), λ(t) ≈ const in the steady-state region (t1 to t2), and gradual failures in the wear-out region (after t2).]
Dependability of the access network
[Pie chart (source: ISO): distribution of access network faults among noise, break of the subscriber line, absence of the call request signal or ring signal, and signal about overloading, with shares of 48%, 24%, 16% and 12%.]
Reservation of infocommunication
system on the level of access network
[Figure: a PC connected to the core network via two reserved access paths — wireless access through a base station, and wireline access.]
Examples of dependability analysis
[Figure: two redundancy schemes with element availability p.]
For the first scheme, A = p + p² − p³; for the second, A = 2p⁵ − 5p⁴ + 2p³ + 2p².
If p = 0.999, then A = 0.999998001 and A = 0.999997998, respectively;
if p = 0.9, then A = 0.981 and A = 0.978.
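The two availability polynomials from the slide can be evaluated directly to reproduce the quoted figures:

```python
# Availability of the two redundancy schemes as functions of element
# availability p; the polynomial forms are taken from the slide.
def a1(p):
    return p + p**2 - p**3

def a2(p):
    return 2 * p**5 - 5 * p**4 + 2 * p**3 + 2 * p**2

for p in (0.999, 0.9):
    print(f"p={p}: A1={a1(p):.9f}  A2={a2(p):.9f}")
```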
Comparison of the scenarios
[Radar chart: scenarios 1 and 2 compared across criteria I–VI.]
Network modernisation
[Figure: the network modernisation cycle — investigation, elaboration of the main solutions, new concept, implementation of the concept, production of equipment, network planning, modernisation, and technical maintenance.]
Network planning and data mining
[Figure: operational system (e.g. network) development draws on data mining, which links information from feedback loops (e.g. statistics) with forecasting (e.g. of the number of users).]
Concluding session, part II
Questions?
Instructor: Prof. Nikolay Sokolov, e-mail: [email protected]