URI Neuroscience Colloquium
February 25, 2013
Neuroeconomics at URI: The CBA Student Directed Hedge Fund with "Big Data" Informatics
By Gordon H. Dash, Jr.¹ and Nina Kajiji²
¹College of Business, University of Rhode Island
²The NKD Group, Inc. and Computer Science and Statistics, University of Rhode Island
www.GHDash.net | [email protected]
www.ninakajiji.net | [email protected]
Credits
• CBA - College of Business Administration
• Hedge Fund - BUS 423-II, sec 02, "Student Directed Investment Fund"
• "Big Data" Informatics - the URI Computer Science Department is heading a drive for a new track
• RANN - radial basis function artificial neural network
Dual Purpose
• To present the theory and application of a
concentric RANN real-time automated
trading (AT) algorithm and its ability to
produce profitable trades.
• Using high frequency dimensions to represent
low-frequency Fama-French-Carhart (FFC)
firm fundamental variables, we estimate scale
elasticity metrics to explain profitable trades
produced by the AT algorithm.
AT System Development for Strategic Effects
• The AT algorithm is predicated on a four-phase concentric strategic decision cycle (OODA) that is capable of responding to various forecasts of future events (see the sketch after this list):
1. Observation (of markets)
2. Orientation by neuroeconomic informatics (align forecasts with reality)
3. Decision on asset position (open / close / hold?)
4. Action (execute trade and store record structure)
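A minimal Python sketch of one pass through this four-phase cycle. Every name here (observe_market, orient, decide, execute_trade, MarketSnapshot) is a hypothetical placeholder for illustration; none of it is the WINKS implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    OPEN = "open"
    CLOSE = "close"
    HOLD = "hold"

@dataclass
class MarketSnapshot:
    ticker: str
    price: float

def observe_market(ticker: str) -> MarketSnapshot:
    # Phase 1: Observation -- pull the latest quote (stubbed with a constant).
    return MarketSnapshot(ticker=ticker, price=100.0)

def orient(snapshot: MarketSnapshot, forecast: float) -> float:
    # Phase 2: Orientation -- align the forecast with observed reality,
    # expressed here as a simple forecast-minus-price signal.
    return forecast - snapshot.price

def decide(signal: float, threshold: float = 0.5) -> Action:
    # Phase 3: Decision -- open, close, or hold the asset position.
    if signal > threshold:
        return Action.OPEN
    if signal < -threshold:
        return Action.CLOSE
    return Action.HOLD

def execute_trade(ticker: str, action: Action) -> dict:
    # Phase 4: Action -- execute the trade and store a record structure (stubbed).
    return {"ticker": ticker, "action": action.value}

# One pass through the concentric cycle.
snap = observe_market("ADS")
record = execute_trade(snap.ticker, decide(orient(snap, forecast=100.8)))
print(record)
```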
A Neural Network (AI) Primer
Artificial intellect: who is stronger, and why?
Applied problems:
• Image, sound, and pattern recognition
• Financial time-series mapping
• Decision making
• Knowledge discovery
• Context-dependent analysis
• …and more…
NEURO-INFORMATICS: the modern theory of the principles and new mathematical models of information processing, based on the biological prototypes and mechanisms of human brain activity.
Concentric OODA
Justification
• BIG DATA: "…high frequency trading firms can generate more than a million trades in a single day … more than 50 percent of equity market volume. Stated another way: a firm that trades one million times per day may submit 90 million or more orders that are cancelled." Mary L. Schapiro, Chairman, U.S. Securities and Exchange Commission, 07-Sep-2010.
• INFORMATICS: "The need for valid statistical tools is greater than ever; data sets are massive, often measuring hundreds of thousands of measurements for a single subject… With high-dimensional data, ... the correct specification of the parametric model [is] an impossible challenge … The new generation of statisticians cannot be afraid to go against standard practice … The science of learning from data (i.e. statistics) is arguably… one in which we try to understand the very essence of human beings." Mark van der Laan, Jiann-Ping Hsu, Sherri Rose, AMSTAT News, September 2010.
An Intelligent Data Analysis Experiment
[Diagram: Data Acquisition (signals & parameters) → Data Informatics (adaptive machine learning via neural network; characteristics & estimations; rules & knowledge productions) → Interpretation and Decision Making, all supported by a "Big Data" KnowledgeBase.]
Principles of Neurocomputing
Learning and Adaptation
NNs are capable of adapting themselves (the synaptic connections between units) to specific environmental conditions by changing their structure or connection strengths.
Non-Linear Functionality
Every new state of a neuron is a nonlinear function of the input pattern created by the firing (nonlinear) activity of the other neurons.
Robustness of Associability
NN states are characterized by high robustness, or insensitivity to noisy and fuzzy input data, owing to the use of a highly redundant distributed structure.
Artificial Neural Networks
Single Layer (e.g., the RBF)
An artificial neural network is composed of many artificial neurons that
are linked together according to a specific network architecture. The
objective of the neural network is to transform the inputs into meaningful
outputs.
Neural Network Mathematics
[Diagram: four inputs x1 … x4 feed a first layer of neurons, whose outputs feed a second layer of three neurons, which feeds a single output neuron.]

First layer: $y_i^1 = f(x_i, w_i^1), \quad i = 1, \dots, 4$

Second layer: $y_j^2 = f(\mathbf{y}^1, w_j^2), \quad j = 1, 2, 3, \qquad \mathbf{y}^1 = (y_1^1, y_2^1, y_3^1, y_4^1)$

Output: $y_{\mathrm{out}} = f(\mathbf{y}^2, w^{\mathrm{out}}), \qquad \mathbf{y}^2 = (y_1^2, y_2^2, y_3^2)$
Learning with RBF Neural Networks
RBF neural network:
$$y_{\mathrm{out}} = F(x, W) = \sum_{k=1}^{M} w_k^2 \, \exp\!\left(-\frac{\lVert x - w^{1,k} \rVert^2}{2 a_k^2}\right)$$

Data: $(x^1, y_1), (x^2, y_2), \dots, (x^N, y_N)$

Error:
$$E(t) = \bigl(y(t)_{\mathrm{out}} - y_t\bigr)^2 = \left(\sum_{k=1}^{M} w_k^2(t) \, \exp\!\left(-\frac{\lVert x^t - w^{1,k} \rVert^2}{2 a_k^2}\right) - y_t\right)^2$$

Learning (gradient descent on the output weights):
$$w_i^2(t+1) = w_i^2(t) - c \, \frac{\partial E(t)}{\partial w_i^2}, \qquad \frac{\partial E(t)}{\partial w_i^2} = 2\bigl(F(x^t, W(t)) - y_t\bigr)\exp\!\left(-\frac{\lVert x^t - w^{1,i} \rVert^2}{2 a_i^2}\right)$$
Only the synaptic weights of the output neuron are modified.
An RBF neural network learns a nonlinear function.
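To make the update rule concrete, here is a minimal NumPy sketch of a single-output Gaussian RBF network in which only the output weights are trained by gradient descent while the centers and widths stay fixed. Variable names and the toy data are illustrative assumptions; this is not the K4 or WINKS code.

```python
import numpy as np

def rbf_features(X, centers, widths):
    """Gaussian basis activations: exp(-||x - c_k||^2 / (2 a_k^2))."""
    # X: (n, d), centers: (M, d), widths: (M,)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, M)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_output_weights(X, y, centers, widths, lr=0.05, epochs=200, seed=0):
    """Gradient descent on squared error w.r.t. the output weights only."""
    rng = np.random.default_rng(seed)
    w2 = rng.normal(scale=0.1, size=centers.shape[0])   # output weights w_k^2
    Phi = rbf_features(X, centers, widths)               # (n, M)
    for _ in range(epochs):
        err = Phi @ w2 - y                                # F(x, W) - y_t
        grad = 2.0 * Phi.T @ err / len(y)                 # dE/dw^2, averaged
        w2 -= lr * grad
    return w2

# Toy usage: learn a nonlinear 1-D function.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(-3, 3, 15)[:, None]
widths = np.full(15, 0.6)
w2 = train_output_weights(X, y, centers, widths)
pred = rbf_features(X, centers, widths) @ w2
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```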
Bayesian Network Training
• Conventional training can be interpreted in statistical terms as variations on maximum likelihood estimation. The idea is to find a single set of weights for the network that maximizes the fit to the training data, perhaps modified by some sort of weight penalty to prevent overfitting.
• Ideal Bayesian training:
– Before the start of data analysis, obtain prior opinions about what the true relationship might be, expressed as a probability distribution over the network weights that define this relationship.
– After training the network, collect revised opinions as a posterior distribution over the network weights.
– Exact analytical methods for models as complex as neural networks are out of the question.
• Practical Bayesian training (sketched mathematically below):
– Find the weights that are most probable, using methods similar to conventional training with regularization, and then approximate the distribution over weights using information available at this maximum.
– This approximation is preferred for computational efficiency over using a Monte Carlo method to sample from the distribution of the weights.
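A compact generic statement of this practical scheme (standard MAP-plus-Laplace reasoning, not the specific K4 derivation): with prior p(w) and likelihood p(D | w),

$$p(\mathbf{w}\mid\mathcal{D}) \propto p(\mathcal{D}\mid\mathbf{w})\,p(\mathbf{w}), \qquad \mathbf{w}_{\mathrm{MAP}} = \arg\max_{\mathbf{w}}\bigl[\log p(\mathcal{D}\mid\mathbf{w}) + \log p(\mathbf{w})\bigr],$$

$$p(\mathbf{w}\mid\mathcal{D}) \approx \mathcal{N}\!\bigl(\mathbf{w}_{\mathrm{MAP}},\,\mathbf{H}^{-1}\bigr), \qquad \mathbf{H} = -\nabla\nabla \log p(\mathbf{w}\mid\mathcal{D})\big|_{\mathbf{w}_{\mathrm{MAP}}}.$$

With Gaussian noise and a Gaussian weight prior, finding the MAP weights is equivalent to minimizing the SSE plus a weight-decay penalty, which is the link to the regularized conventional training described above.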
K4-RBF ANN (RANN) Extensions
• The Kajiji (2001) extension to the traditional RANN
specification introduced multiple objectives within a Bayesian
RANN framework.
• Efficient mapping. By adding a weight penalty, or Tikhonov regularization parameter, to the SSE optimization objective, the modified SSE is restated as the following cost function:
• Optimal weight decay. Additionally, Kajiji proposed a closed-form solution to the estimation of the weight parameter based on Hemmerle's extensions to the traditional ridge-regression parameters and Crouse's incorporation of priors:
• Under the Kajiji specification the function to be minimized is stated as:
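The cost-function images from the original slide do not survive in this transcript. For orientation only, a generic Tikhonov-regularized (weight-decay) SSE takes the form below; the exact K4 cost function and its closed-form ridge parameter (per Hemmerle and Crouse) may differ in detail:

$$C(\mathbf{w}) = \sum_{t=1}^{N}\bigl(y_t - F(x^t,\mathbf{w})\bigr)^2 + \lambda\,\lVert\mathbf{w}\rVert^2, \qquad \lambda > 0,$$

where λ is the weight-decay (ridge) parameter whose data-driven, closed-form selection is the K4 contribution referenced above.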
K4 RANN Contributions
• The excellent mapping capabilities of the K4-RANN topology allow for better financial forecasts for a system exhibiting Brownian motion with drift and volatility, such as stock price movements.
• The algorithmic speed of the generalized RBF is greatly enhanced by the K4 extensions, making it suitable for HF valuation in N-dimensional space.
Developing an Auto Trader using
the K4-RANN
Trading System Evolution
[Diagram: Business Decision Theory, MCDA, and Mathematical Finance combine into the Automated Trading System (WINKS).]
• To assign stocks to wealth-building groups based on the historical trading performance of WINKS.
• To define the operational characteristics of the HF trading system, WINKS.
• To estimate the relative quasi-elasticity of firm-fundamental metrics in explaining the production of WINKS trading profitability.
Investor Trading Behavior
We assume the investor's prices evolve as a continuous-time stochastic process on a probability space (Kajiji and Forman, 2013):
• Let (Ω, θ, P) be the measure space, with P(Ω) = 1, so that (Ω, θ, P) is a probability space.
• Equip it with a filtration {Γt : t ∈ [0, ∞]}, an increasing sequence of σ-algebras that determines the relevant timing of information.
• That is, Γt is loosely viewed as the set of events whose outcomes are certain to be revealed to investors as true or false by, or at, time t.
Model for Equity Trading Profits
• Let Xt represent an individual stock's market price at time t.
• We assume that the price process of X follows a geometric Brownian motion with constant drift and volatility.
• Let θ represent a trading strategy.
• Let θt(ω) be the quantity of each security held in each state ω ∈ Ω and at each time t.
• We assume that the trading strategy can only make use of the information available at any time t; that is, θ is adapted. This prevents the possibility of unlimited gains through uncontrolled high-frequency trading or flash-crash trading.
• We can thus define the total financial gain between any times s and r as an Itô stochastic integral:
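The slide's equation image is not reproduced in the transcript; in standard notation, the total financial gain of an adapted strategy θ between times s and r is the Itô stochastic integral

$$G_{s,r}(\theta) = \int_{s}^{r} \theta_t \, dX_t.$$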
Simple Buy and Hold Strategy (BH)
• A position is initiated after some stopping time T and closed at a later stopping time U.
• For a position of size θ(ω), the trading strategy is defined by θt = 1(T < t ≤ U) · θ(ω).
• By definition, the gain from the simple BH strategy is:
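The slide's equation is again not in the transcript; for the buy-and-hold strategy defined above, the stochastic integral collapses to the familiar price change over the holding period (the standard result, not necessarily the slide's exact notation):

$$\int \theta_t \, dX_t = \theta(\omega)\,\bigl(X_U - X_T\bigr).$$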
The N-Dimensional Trading Strategy
• Suppose we have n different securities with price processes X1, …, Xn and a trading strategy θ = (θ1, …, θn); then the total gain is:
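In standard notation (the slide's own equation is not in the transcript), the total gain over [s, r] is the sum of the per-security Itô integrals:

$$G_{s,r}(\theta) = \sum_{i=1}^{n} \int_{s}^{r} \theta_t^{\,i}\, dX_t^{\,i}.$$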
The WINKS OODA Algorithm
The WINKS Trading Model
• The high-frequency model implemented in the automated trader is:
yi = f(x1, x2)
• Where:
– yi = Ln(1 + ri), where ri is the return on security i
– x1 = Lag(Ln(1 + rVXX))
– x2 = Lag(Ln(1 + rPLW))
• Note (a data-preparation sketch follows this slide):
– VXX = iPath S&P 500 VIX Short-Term Futures (ETN)
– PLW = PowerShares 1-30 Laddered Treasury (ETF)
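A minimal pandas sketch of the data preparation implied by the model above. The column names, the hard-coded prices, and the one-period lag convention are assumptions for illustration only, not the WINKS code.

```python
import pandas as pd
import numpy as np

def lagged_log_return(prices: pd.Series, lag: int = 1) -> pd.Series:
    """Ln(1 + r) of a price series, lagged by `lag` observations."""
    r = prices.pct_change()
    return np.log1p(r).shift(lag)

# Assume `bars` holds aligned intraday closes for the target stock, VXX, and PLW
# (e.g., 20-minute bars); the values below are illustrative placeholders.
bars = pd.DataFrame({
    "STOCK": [100.0, 100.4, 100.1, 100.9, 101.2],
    "VXX":   [30.0, 29.6, 29.9, 29.2, 29.0],
    "PLW":   [25.0, 25.1, 25.05, 25.2, 25.15],
})

model_frame = pd.DataFrame({
    "y":  np.log1p(bars["STOCK"].pct_change()),   # yi = Ln(1 + ri)
    "x1": lagged_log_return(bars["VXX"]),          # x1 = Lag(Ln(1 + rVXX))
    "x2": lagged_log_return(bars["PLW"]),          # x2 = Lag(Ln(1 + rPLW))
}).dropna()

print(model_frame)
```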
Results: The BH Strategy vs. the HF K4-RANN AT
A sample of trading profitability, June 01, 2009 to March 19, 2010.
The database of 2,225 securities was established by random selection from the 8,000 tickers followed by Yahoo! Finance.
These five were chosen for demonstration because they have some of the highest trading profits.
Trading Efficiency = ((difference between BH profit and trading profit) / initial investment) × 100.
Note: all equity positions were created with an investment of $1,000.
Trading Report
Jul 1st, 2010
Summary Report: Page 1
Jul 1st, 2010
Summary Report: Page 2
Jul 1st, 2010
Trading History – Ticker = ADS
Jul 1st, 2010
Trading Summary – Ticker = ADS
Jul 1st, 2010
Do Firm Fundamentals Explain Profitable Trades?
• Goal: to use the K4 mapping efficiency to estimate non-parametric quasi-elasticity metrics of individual firm fundamental variables in the production of K4 trading profitability (see Dash and Kajiji, 2008 for univariate detail and Kajiji and Dash, 2013 for multivariate extensions).
• Which firm fundamental variables to use?
– Fama and French (1993) found that a three-factor model efficiently modeled the excess returns of individual firms.
– Subsequently, Carhart (1997) extended the FF model by including a fourth factor.
– We extrapolate from the low-frequency principle of the Carhart four-factor model by incorporating a real-time HF moving average:
• Vasicek's Beta (Bayesian-corrected beta): standard CAPM
• Market Capitalization: FF added factor
• Book to Last Trade Price: FF added factor
• Percent change from 50-day Moving Average: Carhart extension
K4 Estimation of Quasi-Scale Economies
• Establish historical period: 01-Jun-2009 through 19-Mar-2010, inclusive.
• Create research sample (SAM):
– Obtain 20-minute price observations for 2,225 securities.
– Eliminate all non-equity securities and stocks that do not have fundamental information on Yahoo!; the sample size is reduced to 1,765.
– For efficient cross-sectional modeling we sample from within the full population. The data sampling is guided by the target variable of the study, percent positive trades (PPT).
– 793 securities form the training set; 972 securities form the validation set.
• For SAM, obtain stock fundamentals (source: Yahoo! Finance):
– Vasicek's Beta, created from the reported Yahoo beta (P1)
– Market Capitalization (P2)
– Book to Last Trade Price (P3)
– Percent change from 50-day Moving Average (P4)
Results
Number of positive trades by security for the SAM of 1,765 securities.
Notice that the percent positive trades form a band between 30% and 70%.
The Production Model
• Use K4 to estimate the double-log production-theoretic model for positive trades:
pi = f(P1, P2, P3, P4)
• Where:
– pi = Ln(PPT)
– P1, P2, P3, P4 = Ln transformation of the indicator variables as previously defined
• K4-RBF implemented with a softmax transfer function.
• K4-RBF weights are interpreted as quasi-production elasticity estimates.
• These are summed to capture system returns to scale for profitable trading (see the note below).
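For reference, the parametric (Cobb-Douglas style) analogue of this double-log production model makes the returns-to-scale interpretation explicit; in the K4-RBF case the βk below are replaced by the weight-based quasi-elasticity estimates:

$$\ln(\mathrm{PPT}_i) = \alpha + \sum_{k=1}^{4}\beta_k \ln(P_{k,i}) + \varepsilon_i, \qquad \mathrm{RTS} = \sum_{k=1}^{4}\beta_k,$$

with RTS < 1 indicating decreasing returns to scale, as in the 0.484 estimate reported in the results below.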
A Sample of Alternative Modeling Results
K4-RBF Analysis Using Softmax Transfer Function
Dependent Variable: Ln(% Positive Trades)
Indicator Variables: Ln(Fundamental Variables)
Results: FFC Mapping of % Positive Trades
Actual and Predicted
Results: FFC Quasi-Elasticity Metrics
Comparative K4 RBF Weights from the Alternative Models
Dependent Variable: Ln(% Positive Trades)
• Model selected: Norm2.
• Except for Market Capitalization, all variables show an inverse relationship to PPT. That is, as firm market capitalization increases, so does the expected performance of the WINKS K4-AT.
• Interestingly, any increase in Book Value to Price tends to lessen the probability that WINKS will trade the stock profitably.
• The WINKS K4-AT exhibits decreasing returns to scale of 0.484. This implies a less-than-proportionate increase in PPT given a simultaneous and proportionate change in all indicator variables (factors of production).
Summary
• This research provides a synthesis of stochastic equity-price behavior and the cognitive science of trading through the implementation of the dual-objective K4-RBF ANN incorporated in WINKS.
– WINKS is an MCDA trading algorithm that integrates the AI properties of two unique, but coordinated, high-frequency RBF ANNs.
– Test results produced transaction-cost-adjusted trading profits that exceeded those generated by the simple buy-and-hold strategy.
• WINKS performance was modeled using a four-factor firm-pricing model to estimate the system-wide returns to scale of PPT.
• The results suggest that portfolio selection of stocks based on the estimated quasi-elasticity coefficients would greatly enhance trading profits. A test of this conclusion is left for future research.
Future Research
• Development of intelligent interfaces between
“investor” and AT parameterization
• Use of psychology-based theories to explain
human response to high frequency trade
signals associated with capital market events
• Interrogation of reasoning under uncertainty
or imprecision
References
1. Dash Jr., G.H. and N. Kajiji, Operations Research Software. Vol. I and II. 1988, Homewood, Illinois: IRWIN.
2. Dash Jr., G.H. and N. Kajiji, Engineering a Generalized Neural Network Mapping of Volatility Spillovers in European Government Bond Markets, in Handbook of Financial Engineering, Springer Optimization and Its Applications, Vol. 18, C. Zopounidis, M. Doumpos, and P. Pardalos (eds.). 2008, Springer.
3. Kajiji, N. and J. Forman, Production of Efficient Wealth Maximization Using Neuroeconomic Behavioral Drivers and Continuous Automated Trading, in Recent Advances in Computational Finance, N. Thomaidis and G. Dash (eds.). 2013, Nova Science Publishers, Inc.
4. Kajiji, N. and G.H. Dash Jr., Computational Practice: Multivariate Parametric or Nonparametric Modeling of European Bond Volatility Spillover?, in Recent Advances in Computational Finance, N. Thomaidis and G. Dash (eds.). 2013, Nova Science Publishers, Inc.
5. Forman, J., Essentials of Trading: From the Basics to Building a Winning Strategy. Wiley Trading. 2006, Hoboken, New Jersey: John Wiley & Sons, Inc.
6. Brock, W., J. Lakonishok, and B. LeBaron, Simple Technical Trading Rules and the Stochastic Properties of Stock Returns. The Journal of Finance, 1992. 47(5): p. 1731-1764.
7. Refenes, A.-P.N., et al., eds. Neural Networks in Financial Engineering. 1996, World Scientific: Singapore.
8. Kajiji, N., Adaptation of Alternative Closed Form Regularization Parameters with Prior Information to the Radial Basis Function Neural Network for High Frequency Financial Time Series, in Applied Mathematics. 2001, University of Rhode Island: Kingston.
9. Broomhead, D.S. and D. Lowe, Multivariate Functional Interpolation and Adaptive Networks. Complex Systems, 1988. 2: p. 321-355.
10. Lohninger, H., Evaluation of Neural Networks Based on Radial Basis Functions and Their Application to the Prediction of Boiling Points from Structural Parameters. Journal of Chemical Information and Computer Sciences, 1993. 33: p. 736-744.
11. Tikhonov, A. and V. Arsenin, Solutions of Ill-Posed Problems. 1977, New York: Wiley.
12. Hoerl, A.E. and R.W. Kennard, Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 1970. 12(3): p. 55-67.
13. Hemmerle, W.J., An Explicit Solution for Generalized Ridge Regression. Technometrics, 1975. 17(3): p. 309-314.
14. Crouse, R.H., C. Jin, and R.C. Hanumara, Unbiased Ridge Estimation with Prior Information and Ridge Trace. Communication in Statistics, 1995. 24(9): p. 2341-2354.
• Questions