
Stem cells:
Research in the human stem cell field grew out of
findings by Canadian scientists Ernest A. McCulloch
and James E. Till in the 1960s.
2000s - Several reports of adult stem cell plasticity are
published.
Stem cells have the remarkable potential to develop into many different cell
types in the body. Serving as a sort of repair system for the body, they can
theoretically divide without limit to replenish other cells as long as the person
or animal is still alive. When a stem cell divides, each new cell has the
potential to either remain a stem cell or become another type of cell with a
more specialized function, such as a muscle cell, a red blood cell, or a brain
cell.
Stem cells have two important characteristics that distinguish them from other
types of cells. First, they are unspecialized cells that renew themselves for
long periods through cell division. Second, under certain physiologic or
experimental conditions, they can be induced to become cells with special
functions, such as the beating cells of the heart muscle or the insulin-producing
cells of the pancreas.
Scientists primarily work with two kinds of stem cells from animals and
humans: embryonic stem cells and adult stem cells, which have different
functions and characteristics.
Human embryonic and adult stem cells each have advantages and
disadvantages regarding potential use for cell-based regenerative
therapies. Of course, adult and embryonic stem cells differ in the number
and type of differentiated cell types they can become. Embryonic stem
cells can become all cell types of the body because they are pluripotent.
Adult stem cells are generally limited to differentiating into different cell types
of their tissue of origin. However, some evidence suggests that adult stem
cell plasticity may exist, increasing the number of cell types a given adult
stem cell can become.
Large numbers of embryonic stem cells can be relatively easily grown in
culture, while adult stem cells are rare in mature tissues and methods for
expanding their numbers in cell culture have not yet been worked out. This
is an important distinction, as large numbers of cells are needed for stem cell
replacement therapies.
A potential advantage of using stem cells from an adult is that the patient's
own cells could be expanded in culture and then reintroduced into the patient.
The use of the patient's own adult stem cells would mean that the cells
would not be rejected by the immune system. This represents a significant
advantage as immune rejection is a difficult problem that can only be
circumvented with immunosuppressive drugs.
Embryonic stem cells from a donor introduced into a patient could cause
transplant rejection. However, whether the recipient would reject donor
embryonic stem cells has not been determined in human experiments.
Blood Stem Cell:
Whereas other types of cells in the body have a limited lifespan and die after
dividing their endowed number of times, a stem cell can reproduce forever.
The stem cell is immortal (in cellular terms). A stem cell can also forgo
immortality and turn into an ordinary blood cell: a red blood cell (an
erythrocyte), a white blood cell (a leukocyte), or a large cell (a
megakaryocyte) that fragments into the platelets needed for blood to clot.
Cardiac muscle contractions do not require nerve stimulation. The cells are
specialized to contract rhythmically on their own. The internal control system
in the heart serves to coordinate the muscle cell contractions to produce an
organized, productive heartbeat. The external nerve supply to the heart is
autonomic. It serves to modify cardiac contractions to meet changing body
needs.
[Figure: cardiac muscle (microscopic)]
Visceral smooth muscle is found primarily in the walls of hollow
abdominal organs such as the intestine, urinary bladder and uterus. The
cells are linked together in large sheets of cells that contract together - no
fine movements are possible. Visceral smooth muscle does not require
external nerve stimulation for contraction, but external autonomic nerves
serve to modify contractions.
Multi-unit smooth muscle occurs in small individual units - the cells are
not linked into large sheets. This type of smooth muscle requires an
external nerve supply to initiate its contractions. It is found where small fine
contractions are needed, such as the iris and ciliary body of the eye.
[Figure: smooth muscle (whole mount)]
[Figure: nerve cell and synapse development — (a) newborn, (b) 6 months, (c) 24 months]
[Figure: Transport Selection by Neural Net — inputs: Distance (Near = 0, Little Far = 0.5, Far = 1) and Money in Purse (coded); outputs: Walk, Bus, or Taxi]
[Figure: Trip Plan by Neural Net — inputs: Days (4, 6, 8, or 10 days) and Budget ($200 or less, $500 or less, $1000 or less, $2000 or less); outputs: Domestic Trip, Hawaii, or Europe Trip]
A genetic algorithm (GA) is a search technique used in computing to find
exact or approximate solutions to optimization and search problems. Genetic
algorithms are categorized as global search heuristics. Genetic algorithms
are a particular class of evolutionary algorithms (also known as
evolutionary computation) that use techniques inspired by evolutionary
biology such as inheritance, mutation, selection, and crossover (also
called recombination).
Genetic algorithms in particular became popular through the work of John
Holland in the early 1970s, and particularly his 1975 book. His work
originated with studies of cellular automata, conducted by Holland and his
students at the University of Michigan.
[Figure: genetic algorithm loop — for each of N unit pieces, apply one of three kinds of change (crossover, mutation, or copy) with some probability to units such as A and B; repeat G times]
Crossover:
A: 01001 11010 → 01001 01011
B: 10101 01011 → 10101 11010
Solution: selection of the highest-adaptation unit after G repetitions.
Reproduction:
The next step is to generate a second generation population of solutions
from those selected through genetic operators: crossover (also called
recombination), and/or mutation.
For each new solution to be produced, a pair of "parent" solutions is selected
for breeding from the pool selected previously. By producing a "child"
solution using the above methods of crossover and mutation, a new
solution is created which typically shares many of the characteristics of its
"parents". New parents are selected for each child, and the process
continues until a new population of solutions of appropriate size is generated.
These processes ultimately result in the next generation population of
chromosomes that is different from the initial generation. Generally the
average fitness will have increased by this procedure for the population,
since only the best organisms from the first generation are selected for
breeding, along with a small proportion of less fit solutions, for reasons
already mentioned above.
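The following is a minimal Python sketch of the loop just described: select parents, produce a child by crossover and mutation, and repeat for G generations. The "OneMax" fitness (count the 1-bits in the genome) and all parameter values are assumptions chosen for illustration, not anything prescribed by the text.

import random

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50        # the "G times" repetition
MUTATION_RATE = 0.01

def fitness(genome):
    return sum(genome)                  # adaptation = number of 1-bits (OneMax)

def select(population):
    # Tournament selection: biased toward fitter parents, while still
    # letting a proportion of less fit solutions breed.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover, as in the A/B bit-string example above.
    point = random.randint(1, GENOME_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(best, fitness(best))              # highest-adaptation unit after G repetitions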
A cellular automaton (plural: cellular automata) is a discrete model studied
in computability theory, mathematics, and theoretical biology. It consists of a
regular grid of cells, each in one of a finite number of states. The grid can be
in any finite number of dimensions. Time is also discrete, and the state of a
cell at time t is a function of the states of a finite number of cells (called its
neighborhood) at time t − 1. These neighbors are a selection of cells relative
to the specified cell, and do not change (though the cell itself may be in its
neighborhood, it is not usually considered a neighbor). Every cell has the
same rule for updating, based on the values in this neighborhood. Each time
the rules are applied to the whole grid, a new generation is created.
[Figure: a sea shell bearing a cellular-automaton-like pattern]
Cephalopods:
Neural networks can be used as cellular automata, too. The complex
moving wave patterns on the skin of cephalopods are a good display of
corresponding activation patterns in the animals' brains.
[Figure: Cellular Automaton — starting from a single cell and applying a fixed table of 0/1 rules at each step, a fractal pattern emerges]
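The Python sketch below implements the update scheme just described for a one-dimensional grid. Rule 90 is an assumption: it is a standard elementary rule chosen here because, from a single starting cell, it draws the Sierpinski-triangle fractal that the figure suggests.

# One-dimensional cellular automaton sketch. The new state of each cell is
# a function of its neighborhood (left, self, right) at the previous step.
RULE = 90                                  # assumed rule; produces a fractal
WIDTH = 63
STEPS = 32

# Rule table: neighborhood (l, c, r) read as a 3-bit number indexes a bit of RULE.
rule_table = {(l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
              for l in (0, 1) for c in (0, 1) for r in (0, 1)}

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                      # single "Start" cell

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    cells = [rule_table[(cells[i - 1], cells[i], cells[(i + 1) % WIDTH])]
             for i in range(WIDTH)]        # apply the same rule to every cell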
3. ARIMA
The ARIMA procedure analyzes and forecasts equally spaced univariate time
series data, transfer function data, and intervention data using the
autoregressive integrated moving-average (ARIMA) or autoregressive
moving-average (ARMA) model. An ARIMA model predicts a value in a response
time series as a linear combination of its own past values, past errors (also
called shocks or innovations), and current and past values of other time series.
The ARIMA approach was first popularized by Box and Jenkins, and ARIMA
models are often referred to as Box-Jenkins models. The general transfer
function model employed by the ARIMA procedure was discussed by Box and
Tiao (1975). When an ARIMA model includes other time series as input
variables, the model is sometimes referred to as an ARIMAX model. Pankratz
(1991) refers to the ARIMAX model as dynamic regression. The ARIMA
procedure provides a comprehensive set of tools for univariate time series
model identification, parameter estimation, and forecasting, and it offers great
flexibility in the kinds of ARIMA or ARIMAX models that can be analyzed. The
ARIMA procedure supports seasonal, subset, and factored ARIMA models;
intervention or interrupted time series models; multiple regression analysis with
ARMA errors; and rational transfer function models of any complexity.
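As a rough illustration of the procedure described above, the sketch below fits an ARIMA model with the Python statsmodels library. The synthetic series and the (1, 1, 1) model order are assumptions for illustration, not values from the text.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic series: a mild trend plus accumulated noise (assumed data).
t = np.arange(200)
series = 0.05 * t + np.cumsum(rng.normal(scale=0.5, size=200))

model = ARIMA(series, order=(1, 1, 1))   # p=1 AR term, d=1 difference, q=1 MA term
result = model.fit()
print(result.summary())                  # parameter estimates and diagnostics

forecast = result.forecast(steps=10)     # predict the next 10 values
print(forecast)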
[Figure: average yearly temperature in the north grove, compared with the year 1880, plotted by year]
[Figure: Internet intrusion detection in 2003, by source IP address country — 1 USA, 2 China, 3 Korea, 4 Holland, 5 Japan, 6 England, 7 Brazil, 8 Czech, 9 Canada, 10 Australia, 11 Others]
[Figure: forecast by ARIMA model — intrusion counts per day, forecast vs. real value, for dates 1-31 in Dec. 2003]
4. Frame Problem
(McCarthy and Hayes, 1969)
To most AI researchers, the frame problem is the challenge of representing
the effects of action in logic without having to represent explicitly a large
number of intuitively obvious non-effects. To many philosophers, the AI
researchers' frame problem is suggestive of a wider epistemological issue,
namely whether it is possible, in principle, to limit the scope of the reasoning
required to derive the consequences of an action.
The frame problem is the problem of how a rational agent bounds the set of
beliefs to change when an action is performed. This problem originates from
artificial intelligence, where it is formulated as the problem of avoiding
having to specify all the conditions that are not affected by actions, in the
context of representing dynamical domains in a formal logic.
[Figure: the frame problem illustrated — Robot No. 1 is ordered to bring a picture, but a bomb sits beside it; thinking only about the conditions of the robot, picture, car, and bomb, it brings the picture together with the bomb. Robot No. 2 instead tries to check the conditions of everything else (ceiling, wall, floor, door, electricity, ... an infinite list of conditions), and the bomb explodes while it is still thinking. Solution: reduce the many conditions to an action by grouping, selection, neglect, etc.]
5. Perceptron
The perceptron is a type of artificial neural network invented in 1957 at the
Cornell Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the
simplest kind of feedforward neural network: a linear classifier.
A perceptron is a connected network that simulates an associative memory.
The most basic perceptron is composed of an input layer and an output layer of
nodes, which are fully connected to each other. Assigned to each
connection is a weight that can be adjusted so that, given a set of inputs to
the network, the associated connections will produce a desired output. The
adjusting of weights to produce a particular output is called the "training" of
the network; it is the mechanism that allows the network to learn.
Perceptrons are among the earliest and most basic models of artificial neural
networks, yet they are at work in many of today's complex neural net
applications.
The perceptron is a kind of binary classifier that maps its input x (a binary
vector) to an output value f(x) (a single binary value) calculated as

f(x) = 1 if w · x + b > 0, and 0 otherwise,

where w is a vector of real-valued weights and w · x is the dot product (which
computes a weighted sum). b is the 'bias', a constant term that does not
depend on any input value.
The value of f(x) (0 or 1) is used to classify x as either a positive or a negative
instance, in the case of a binary classification problem. The bias can be
thought of as offsetting the activation function, or giving the output neuron a
"base" level of activity. If b is negative, then the weighted combination of
inputs must produce a positive value greater than −b in order to push the
classifier neuron over the 0 threshold. Spatially, the bias alters the position
(though not the orientation) of the decision boundary.
Since the inputs are fed directly to the outputs through the weighted
connections, the perceptron can be considered the simplest kind of
feedforward neural network.
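Below is a minimal Python sketch of the classifier just described, f(x) = 1 if w · x + b > 0 and 0 otherwise, trained with the classic perceptron learning rule. The logical AND function is an assumed toy dataset (it is linearly separable, so the rule converges).

# Perceptron: weighted sum of inputs plus bias, thresholded at zero.
def f(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Assumed training data: logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                       # "training": adjust the weights
    for x, target in data:
        error = target - f(w, b, x)       # 0 when prediction is correct
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

for x, target in data:
    print(x, '->', f(w, b, x), '(expected', target, ')')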
6. Data Mining
Generally, data mining (sometimes called data or knowledge discovery) is
the process of analyzing data from different perspectives and summarizing it
into useful information - information that can be used to increase revenue,
cut costs, or both. Data mining software is one of a number of analytical
tools for analyzing data. It allows users to analyze data from many different
dimensions or angles, categorize it, and summarize the relationships
identified. Technically, data mining is the process of finding correlations or
patterns among dozens of fields in large relational databases.
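As a small illustration of "finding correlations among fields", the sketch below computes a correlation matrix over a made-up table with the Python pandas library; the column names and values are assumptions for illustration.

import pandas as pd

# Assumed toy table with three fields.
sales = pd.DataFrame({
    'ad_spend': [10, 20, 30, 40, 50],
    'visitors': [120, 190, 310, 420, 480],
    'revenue':  [1.1, 2.3, 2.9, 4.2, 5.0],
})

# Pairwise Pearson correlations between every pair of fields.
print(sales.corr())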
[Figure: data mining — collection and processing over a huge database, followed by knowledge presentation; output: useful data and useful knowledge]
Reverse Data Mining = Database Discovery
For example: to acquire foreign-language knowledge, we need a suitable
database, so we have to discover that database. Reverse data mining can
also be used for humanoid robots.
[Figure: reverse data mining — the same pipeline run backward: target knowledge as input, through knowledge presentation and processing, back to collection over a huge database]
TRIZ (pronounced /triːz/) is a Russian acronym for "Teoriya Resheniya
Izobretatelskikh Zadatch" (Теория решения изобретательских задач),
Theory of solving inventive problems or Theory of inventive problem
solving. It was developed by Genrich Altshuller and his colleagues starting in
1946.
TRIZ is a methodology, tool set, knowledge base, and model-based
technology for generating innovative ideas and solutions for problem solving.
TRIZ provides tools and methods for use in problem formulation, system
analysis, failure analysis, and patterns of system evolution (both 'as-is' and
'could be'). TRIZ, in contrast to techniques such as brainstorming (which is
based on random idea generation), aims to create an algorithmic approach to
the invention of new systems, and the refinement of old systems.
Find out the Method of Invention
Essence of TRIZ:
For creative problem solving,
TRIZ provides a dialectic way of thinking,
to understand the problem as a system,
to imagine the ideal solution first, and
to solve contradictions.
TRIZ Technique:
1. Problem → Think for System
2. Imagination of Ideal Solution
3. Solution of Contradiction
7. Pattern Recognition
In computer science, pattern recognition is the imposition of identity on input
data, such as speech, images, or a stream of text, by the recognition and
delineation of the patterns it contains and their relationships. Stages in
pattern recognition may involve
measurement of the object to identify distinguishing attributes, extraction of
features for the defining attributes, and comparison with known patterns to
determine a match or mismatch. Pattern recognition has extensive
application in astronomy, medicine, robotics, and remote sensing by satellites.
[Figure: new recognition system development — check the recognition mechanism used by humans; needed technologies: neural networks, genetic algorithms, signal processing, AI, data mining, fuzzy logic]
8. Fuzzy
Fuzzy logic can be used to control household appliances such as washing
machines (which sense load size and detergent concentration and adjust their
wash cycles accordingly) and refrigerators.
A basic application might characterize subranges of a continuous variable. For
instance, a temperature measurement for anti-lock brakes might have several
separate membership functions defining particular temperature ranges
needed to control the brakes properly. Each function maps the same
temperature value to a truth value in the 0 to 1 range. These truth values can
then be used to determine how the brakes should be controlled.
Consider the linguistic variable age. Suppose it takes on the values young,
middle-aged, and old. Zadeh represents the three values as three fuzzy sets
over the variable age_in_years. Each set has its own possibility function. So a
person aged 45 would certainly be considered middle-aged; that person's
possibility value would be 1.0. On the other hand, a person aged 60 might
have a possibility of 0.4 of being middle-aged, and a possibility of 0.3 of being
in the old set.
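Here is a Python sketch of the age example: three membership functions over age_in_years. The shapes and breakpoints are assumptions, chosen so that 45 maps to middle-aged with degree 1.0 and 60 comes out near the 0.4 / 0.3 degrees mentioned in the text.

# Assumed membership functions for the fuzzy sets young, middle-aged, old.
def young(age):
    if age <= 25: return 1.0
    if age >= 40: return 0.0
    return (40 - age) / 15

def middle_aged(age):
    if age <= 30 or age >= 65: return 0.0
    if 40 <= age <= 50: return 1.0          # age 45 -> degree 1.0
    return (age - 30) / 10 if age < 40 else (65 - age) / 12.5

def old(age):
    if age <= 50: return 0.0
    return min(1.0, (age - 50) / 33.3)

for age in (45, 60):
    print(age, {'young': round(young(age), 2),
                'middle_aged': round(middle_aged(age), 2),
                'old': round(old(age), 2)})
# 45 -> middle_aged 1.0; 60 -> middle_aged 0.4, old 0.3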
9. Expert System
An expert system, also known as a knowledge-based system, is a computer
program that contains subject-specific knowledge together with the knowledge
and analytical skills of one or more human experts. This class of program was
first developed by researchers in artificial intelligence during the 1960s and
1970s and applied commercially throughout the 1980s.
Expert systems can be used for action control, medical care, computer games,
legal advice, and analysis of information drawn from our experience. They
also make use of fuzzy techniques.
[Figure: expert system — a human expert supplies the knowledge base (ontology, metadata); a question (Q) goes through the inference engine, which returns an answer (A)]
Intellectual functions of humans: figures, tables, letters, voice, pictures;
comparison, calculation, memory; learning, meaning, pattern recognition;
inference, intuition, imagination; judgment, arts, ideas, creation.
Bottleneck = knowledge acquisition: how, what, who, which?
Why have expert systems not succeeded?
= They would need perfect performance for diagnosis or judgment
= No self-learning capability
= Not easy to handle
= No friendly communication with humans
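The following is a toy Python sketch of the knowledge base / inference engine split shown in the figure: facts and if-then rules form the knowledge base, and a simple forward-chaining loop plays the inference engine. The medical-style rules are invented purely for illustration.

# Assumed knowledge base: each rule is (set of conditions, conclusion).
rules = [
    ({'fever', 'cough'},        'flu_suspected'),
    ({'flu_suspected', 'rash'}, 'see_doctor'),
]

def infer(facts):
    # Forward chaining: keep firing rules until no new fact is derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({'fever', 'cough', 'rash'}))
# -> {'fever', 'cough', 'rash', 'flu_suspected', 'see_doctor'}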
10. Regression Analysis
In statistics, regression analysis examines the relation of a dependent variable
(response variable) to specified independent variables (explanatory variables).
The mathematical model of their relationship is the regression equation. The
dependent variable is modeled as a random variable because of uncertainty as to
its value, given only the value of each independent variable. A regression
equation contains estimates of one or more hypothesized regression parameters
("constants"). These estimates are constructed using data for the variables, such
as from a sample. The estimates measure the relationship between the
dependent variable and each of the independent variables. They also allow
estimating the value of the dependent variable for a given value of each
respective independent variable.
Uses of regression include curve fitting, prediction (including forecasting of
time-series data), modeling of causal relationships, and testing scientific
hypotheses about relationships between variables.
C = Consumption, E = Earnings
C = a1 + a2 x E
[Figure: the regression line C = 3 + 0.4 x E plotted with C against E — the intercept is 3, and E = 40 gives C = 19]
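The line above can be recovered by least squares. In the Python sketch below the (E, C) sample points are generated exactly from C = 3 + 0.4 x E (an assumption, since the original data points are not given), so the fit returns a1 = 3 and a2 = 0.4.

import numpy as np

E = np.array([0, 10, 20, 30, 40])
C = 3 + 0.4 * E                       # consumption data on the line

a2, a1 = np.polyfit(E, C, deg=1)      # polyfit returns slope first, then intercept
print(a1, a2)                         # -> 3.0, 0.4
print(a1 + a2 * 40)                   # -> 19.0, i.e. E=40 gives C=19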
Multiple Regression Analysis :
The general purpose of multiple regression (the term was first used by
Pearson, 1908) is to learn more about the relationship between several
independent or predictor variables and a dependent or criterion variable. For
example, a real estate agent might record for each listing the size of the
house (in square feet), the number of bedrooms, the average income in the
respective neighborhood according to census data, and a subjective rating of
appeal of the house. Once this information has been compiled for various
houses it would be interesting to see whether and how these measures
relate to the price for which a house is sold. For example, one might learn
that the number of bedrooms is a better predictor of the price for which a
house sells in a particular neighborhood than how "pretty" the house is
(subjective rating). One may also detect "outliers," that is, houses that should
really sell for more, given their location and characteristics.
Personnel professionals customarily use multiple regression procedures to
determine equitable compensation. One can determine a number of factors
or dimensions such as "amount of responsibility" (Resp) or "number of
people to supervise" (No_Super) that one believes to contribute to the
value of a job. The personnel analyst then usually conducts a salary survey
among comparable companies in the market, recording the salaries and
respective characteristics (i.e., values on dimensions) for different positions.
This information can be used in a multiple regression analysis to build a
regression equation of the form:
Salary = 0.5Resp + 0.8No_Super
Once this so-called regression line has been determined, the analyst can
now easily construct a graph of the expected (predicted) salaries and the
actual salaries of job incumbents in his or her company. Thus, the analyst is
able to determine which position is underpaid (below the regression line) or
overpaid (above the regression line), or paid equitably.
In the social and natural sciences multiple regression procedures are very
widely used in research. In general, multiple regression allows the
researcher to ask (and hopefully answer) the general question "what is the
best predictor of ...". For example, educational researchers might want to
learn what are the best predictors of success in high-school. Psychologists
may want to determine which personality variable best predicts social
adjustment. Sociologists may want to find out which of the multiple social
indicators best predict whether or not a new immigrant group will adapt and
be absorbed into society.
V: number of English vocabulary words understood
X: number of English vocabulary words studied at elementary school
Y: number of English vocabulary words studied at junior high school
a, b, c: coefficients (numbers of words)
V = aX + bY + c
For example: a = 5, b = 3, c = 100
V = 5X + 3Y + 100
[Figure: the plane V = 5X + 3Y + 100 plotted over the X and Y axes]
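A Python sketch of estimating the equation above by least squares: the sample points are generated from the example coefficients (a = 5, b = 3, c = 100) — an assumption, since no raw data is given — so the fit recovers them.

import numpy as np

X = np.array([100, 500, 1000, 2000, 3000])
Y = np.array([200, 400, 1000, 1500, 2000])
V = 5 * X + 3 * Y + 100               # data generated from the example

# Design matrix [X, Y, 1] for the model V = aX + bY + c.
A = np.column_stack([X, Y, np.ones_like(X)])
(a, b, c), *_ = np.linalg.lstsq(A, V, rcond=None)
print(a, b, c)                        # -> 5.0, 3.0, 100.0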
11. Bioinformatics
Bioinformatics and computational biology involve the use of techniques
including applied mathematics, informatics, statistics, computer science,
artificial intelligence, chemistry, and biochemistry to solve biological problems
usually on the molecular level. Research in computational biology often
overlaps with systems biology. Major research efforts in the field include
sequence alignment, gene finding, genome assembly, protein structure
alignment, protein structure prediction, prediction of gene expression and
protein-protein interactions, and the modeling of evolution.
[Figure: bioinformatics information flow — phenomena in the living body (viewed as a genetic network and a protein network) become information that is analyzed by computer; topics include alignment information of genetic maps and amino-acid sequences, architecture analysis of the living body, and interactions between proteins]
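As a small taste of the sequence alignment mentioned above, the Python sketch below computes a Needleman-Wunsch global alignment score by dynamic programming. The scoring values (match +1, mismatch -1, gap -1) and the two toy sequences are assumptions for illustration.

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap                     # leading gaps in b
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap                     # leading gaps in a
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag,               # align a[i-1] with b[j-1]
                           dp[i-1][j] + gap,   # gap in b
                           dp[i][j-1] + gap)   # gap in a
    return dp[len(a)][len(b)]

print(nw_score("GATTACA", "GCATGCU"))          # toy DNA-like sequences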
12. Artificial Life
Artificial intelligence has traditionally used a top-down approach, while alife
generally works from the bottom up.
Artificial Life, (commonly Alife or alife) is a field of study and associated art
form which examine systems related to life, its processes and its evolution
through simulations using computer models, robotics, and biochemistry. There
are three main kinds of alife: soft (software-based), hard (hardware-based),
and wet (biochemistry-based). Artificial life imitates traditional
biology by trying to recreate biological phenomena. The term "Artificial Life" is
often used to specifically refer to soft alife.
Artificial life is one of the hottest research fields related to the study of
so-called complex systems. As its name suggests, artificial life is the study
of life realized using computers. In this study area, people are trying to
understand the origin of life, the various systems in living organisms, or the
mechanisms of evolution by means of making models for them and
implementing experimental systems as software. Some people even attempt to
create software that can itself be regarded as “life”.
What is life? A vast question that computer scientists have taken their turn
at answering by seizing on two key concepts: 1) life reproduces itself, and
2) life evolves. In the 1970s, an artificial life computer program travelled
around the world: the 'game of life.' In it, 'cells' (actually, black dots on the
computer screen) appear, move, and die according to a set of simple rules.
From an initial population distributed on the screen at random, stable structures
emerge, some moving, some immobile, that resemble what might have been
the first living organisms. Today, artificial life calls on increasingly complex
ideas, such as emergence, and touches more and more on the fields of
robotics and bionics.
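A compact Python sketch of the 'game of life' rules mentioned above (a live cell survives with two or three live neighbors; a dead cell with exactly three live neighbors is born). The glider starting pattern is a standard choice, not something from the text.

from collections import Counter

def step(live):
    # Count live neighbors of every cell adjacent to some live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

live = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a "glider"
for _ in range(4):
    print(sorted(live))
    live = step(live)                              # one generation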
13. Tierra (Computer Simulation for Artificial Life)
Tierra is a computer simulation developed by ecologist Thomas S. Ray in
the early 1990s in which computer programs compete for central processing
unit (CPU) time and access to main memory. The computer programs in Tierra
are evolvable and can mutate, self-replicate and recombine. Tierra is a
frequently cited example of an artificial life model; in the metaphor of
Tierra, the evolvable computer programs can be considered digital
organisms which compete for energy (CPU time) and resources (main
memory).
[Figure: Tierra computer simulation system — at Tierra start, each artificial living thing (ALT 1, ALT 2, ALT 3) reserves a memory area and runs with its own registers and instruction pointer; each ALT(i) then starts artificial life in its own memory. The byte code in memory is the organism's gene (machine language); bit changes are mutations; and reproduction requires memory acquisition and processor-time acquisition for execution.]
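The sketch below is a toy Python model inspired by Tierra, not Ray's actual system: genomes live in a fixed-size "soup", replicate with occasional bit-flip mutations during shared CPU time, and the oldest organism is reaped when memory runs out. All names and parameters are invented for illustration.

import random

SOUP_CAPACITY = 20          # limited main memory (number of organisms)
MUTATION_RATE = 0.05        # chance of a bit change during copying

def replicate(genome):
    # Copy the genome, occasionally flipping a bit (mutation).
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

soup = [[random.randint(0, 1) for _ in range(16)] for _ in range(5)]

for tick in range(100):                 # CPU time, shared in slices
    parent = random.choice(soup)        # this slice's running program
    soup.append(replicate(parent))      # reproduction: acquire memory
    if len(soup) > SOUP_CAPACITY:
        soup.pop(0)                     # the "reaper" frees the oldest slot

print(len(soup), "organisms; example genome:", soup[-1])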