LECTURE FIVE
ARTIFICIAL NEURAL NETWORKS AND NEUROSEMANTICS
If eliminativism is right, then we cannot reduce types of mental states to anything else. That means the natural kinds corresponding to types of mental states cannot survive a strict implementation of an eliminativist program.
The case is somewhat similar to this one: from an eliminativist perspective, "the Great French Revolution" is not a legitimate label that picks out a single historical event. Rather, it should be viewed as a label attached to a loose collection of the behaviors of numerous individuals. Historians need to rely on this label as long as the behaviors of those individuals are epistemologically inaccessible to them, but they also need to be prepared to abandon it when new data become available.
Philosophers of mind and cognitive scientists need to be prepared to abandon the mental vocabulary when new data about the human neural system become available.
1. To learn something from neuroscience;
2. To seek some possibility of making the neurological story more universal (with, say, the help of AI);
3. To try to reconstruct the mental architecture out of the findings in neuroscience and AI.
By definition, "neurons are basic signaling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body."
The biological neuron has four main regions to its structure. The cell body, or soma, has two kinds of offshoots from it: the dendrites (树突) and the axon (轴突); the axon ends in pre-synaptic terminals (突触前末端). The cell body is the heart of the cell. It contains the nucleus (细胞核) and maintains protein synthesis (蛋白质合成). A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.
A single neuron usually has one axon, which expands off from a part of the cell body. This part is called the axon hillock (轴丘). The axon's main purpose is to conduct the electrical signals generated at the axon hillock down its length. These signals are called action potentials (动作电位).
The other end of the axon may split into several branches, which end in pre-synaptic terminals. The electrical signals (action potentials) that neurons use to convey the information of the brain are all identical. The brain determines which type of information is being received based on the path of the signal.
An analogy: if I send the same message through different media outlets, the authority of each outlet will change the weight of what I said from the audience's perspective.
When building an artificial, functional model of the biological neuron, we must take into account three basic components. First, the synapses of the biological neuron are modeled as weights. Recall that the synapses are what interconnect the neurons in the network and give each connection its strength. For an artificial neuron, the weight is a number that represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The next component of the model represents the actual activity of the neuron cell: all inputs are summed together and modified by the weights. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude (值幅) of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be between -1 and 1.
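To make these three components concrete, here is a minimal Python sketch (the function names, weights, and numbers are my own illustrative choices, not from the lecture): the weight vector stands in for the synapses, the dot product is the linear combination, and a step activation bounds the output.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias, activation):
    """One artificial neuron: synaptic weights, a linear combination, an activation."""
    summed = np.dot(weights, inputs) + bias  # all inputs summed and modified by the weights
    return activation(summed)                # the activation function bounds the output

# A simple squashing choice: a step function whose output is either 0 or 1.
step = lambda v: 1.0 if v >= 0 else 0.0

# Positive weights model excitatory connections, negative weights inhibitory ones.
inputs = np.array([1.0, 0.5, 1.0])
weights = np.array([0.8, 0.4, -0.6])
print(artificial_neuron(inputs, weights, bias=-0.2, activation=step))  # -> 1.0
```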
As mentioned previously, the activation function acts as a squashing function (压缩函数), such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(·):
First, there is the Threshold
Function which takes on a
value of 0 if the summed
input is less than a certain
threshold value (v), and the
value 1 if the summed input
is greater than or equal to
the threshold value.
Secondly, there is the
Piecewise-Linear
function. This function
again can take on the
values of 0 or 1, but can
also take on values
between that depending
on the amplification
factor in a certain region
of linear operation.
Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of a sigmoid-type function is the hyperbolic tangent function (双曲正切函数).
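Here is a hedged sketch of the three types just listed (the parameter names and the particular piecewise-linear form below are my own choices; other variants exist):

```python
import numpy as np

def threshold(v, theta=0.0):
    """Threshold function: 0 if the summed input is below the threshold, 1 otherwise."""
    return np.where(v >= theta, 1.0, 0.0)

def piecewise_linear(v, gain=1.0):
    """Piecewise-linear function: linear (with amplification factor `gain`) around 0,
    clipped to 0 or 1 outside that region of linear operation."""
    return np.clip(gain * v + 0.5, 0.0, 1.0)

def sigmoid(v):
    """Logistic sigmoid: a smooth squashing function with outputs strictly between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-v))

# The hyperbolic tangent is a sigmoid-type function whose range is (-1, 1) instead of (0, 1).
v = np.linspace(-3.0, 3.0, 7)
for phi in (threshold, piecewise_linear, sigmoid, np.tanh):
    print(phi.__name__, phi(v))
```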
Within neural systems it is
useful to distinguish three
types of units: input units
(indicated by an index i)
which receive data from
outside the neural network,
output units (indicated by
an index o) which send
data out of the neural
network, and hidden units
(indicated by an index h)
whose input and output
signals remain within the
neural network.
Each unit performs a relatively simple job: receive input from
neighbours or external sources and use this to compute an
output signal which is propagated to other units.
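For illustration, here is a minimal sketch of such units at work in a small feedforward network (the layer sizes, the random weights, and the choice of tanh as the squashing function are assumptions of mine, not the lecture's):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 3 input units (i), 4 hidden units (h), 2 output units (o).
W_ih = rng.normal(size=(4, 3))  # weights on the connections from input to hidden units
W_ho = rng.normal(size=(2, 4))  # weights on the connections from hidden to output units

def unit_update(weights, incoming):
    """Each unit sums its weighted incoming signals and squashes the result."""
    return np.tanh(weights @ incoming)

x = np.array([0.2, -0.5, 0.9])  # data received from outside the neural network
h = unit_update(W_ih, x)        # hidden activations remain within the network
o = unit_update(W_ho, h)        # output units send data out of the network
print(o)
```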
Apart from this processing, a second task is the adjustment of
the weights.
The system is inherently parallel in the sense that many units
can carry out their computations at the same time.
During operation, units can be updated either synchronously or
asynchronously. With synchronous updating, all units update
their activation simultaneously; with asynchronous updating,
each unit has a (usually fixed) probability of updating its
activation at a time t, and usually only one unit will be able to do
this at a time. In some cases the latter model has some
advantages.
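A small sketch of the two update schemes on a toy recurrent network with made-up weights (the sizes and the tanh update rule are my assumptions): synchronous updating computes every new activation from the same old state at once, while asynchronous updating changes one randomly chosen unit at a time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
W = rng.normal(size=(n, n))         # recurrent weights among n units
a = rng.uniform(-1.0, 1.0, size=n)  # current activations of the units

def synchronous_step(a, W):
    """All units update their activation simultaneously, from the same old state."""
    return np.tanh(W @ a)

def asynchronous_step(a, W):
    """Only one unit (chosen at random, so each has equal probability) updates at time t."""
    a = a.copy()
    i = rng.integers(n)
    a[i] = np.tanh(W[i] @ a)
    return a

print(synchronous_step(a, W))
print(asynchronous_step(a, W))
```

One familiar example of the advantage mentioned above: in Hopfield-style networks with symmetric weights, asynchronous single-unit updates are guaranteed to converge to a stable state, whereas fully synchronous updates can oscillate.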
Semantic content is distributed in a huge network whose
topological structure will evolve when new inputs come in,
rather than stored in a fixed location in the brain.
Or, to put it the other way around, your belief-token of something is not encoded by this neuron or that one, but by a huge network!
What if the brain could be scanned and mathematically re-modeled?
If so, maybe we could download your thought and re-implement it in another brain!
Avatars are Na'vi-human hybrids which are operated by genetically matched humans. So if the human is Jack, his Avatar will share the same mental states with Jack when being operated. Avatar-Jack looks like a mental duplicate of Jack.
What is meaning now?
Answer: Structured Activation Spaces as Conceptual
Frameworks!!
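One toy way to picture this answer (my own sketch, not the lecture's; the concept vectors below are invented for illustration): a concept is a point in the activation space of the hidden units, and conceptual similarity is proximity in that space.

```python
import numpy as np

# Each concept is a pattern of activation over (here) four hidden units.
concepts = {
    "dog":    np.array([0.9, 0.8, 0.1, 0.2]),
    "wolf":   np.array([0.8, 0.9, 0.2, 0.1]),
    "teacup": np.array([0.1, 0.0, 0.9, 0.8]),
}

def distance(a, b):
    """Smaller distances in activation space count as greater conceptual similarity."""
    return np.linalg.norm(concepts[a] - concepts[b])

print(distance("dog", "wolf"))    # small: neighbouring points in the space
print(distance("dog", "teacup"))  # large: distant points in the space
```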
Jerry Alan Fodor (born
1935)
Dr. Ernest Lepore
Acting Director of the Rutgers Center
for Cognitive Science (RuCCS)
Fodor has made many and varied criticisms of holism. He identifies the
central problem with all the different notions of holism as the idea that
the determining factor in semantic evaluation is the notion of an
"epistemic bond". Briefly, P is an epistemic bond of Q if the meaning of
P is considered by someone to be relevant for the determination of the
meaning of Q. Meaning holism strongly depends on this notion. The
identity of the content of a mental state, under holism, can only be
determined by the totality of its epistemic bonds. And this makes the
realism of mental states an impossibility:
"If people differ in an absolutely general way in their estimations of
epistemic relevance, and if we follow the holism of meaning and
individuate intentional states by way of the totality of their epistemic
bonds, the consequence will be that two people (or, for that matter, two
temporal sections of the same person) will never be in the same
intentional state. Therefore, two people can never be subsumed under
the same intentional generalizations. And, therefore, intentional
generalization can never be successful. And, therefore again, there is no
hope for an intentional psychology."
http://plato.stanford.edu/entries/connectionism/
Neurophilosophy at Work
PAUL CHURCHLAND
University of California, San Diego
Chapter 8: Neurosemantics
I have sent the whole PDF of the entire book to the public email box!