Slides011x - Vernonmath.com

The “Nothing is Obvious” Story
Imagine a young boy in the Amazon jungles.
This boy has always lived in the jungle without
any modern conveniences. He has never been in a city; he has never seen a television or a book.
Now imagine that for some unknown reason this young
boy travels to Colorado in the wintertime. The little boy
stands in a yard somewhere and watches the snow with
bewilderment. He is astonished; he does not understand
what is falling from the sky.
Another little boy, about the same age, from Colorado, watches this behavior. The Colorado boy is confused: why is the visitor acting so odd?
Obviously it is snowing, so what is the big deal?
Corn Flakes & Iced Tea
Most Americans consider it
"obvious" that cold milk is poured
on corn flakes. However, in
Europe, everybody knows you put
warm milk on your cereal.
Most Europeans consider it "obvious" that tea is to be served warm, preferably hot. They are completely baffled when Texans actually put ICE in their tea.
The Exposure Equation
Bewilderment + Exposure = Obvious
Exposure in
Extracurricular Activities
• Drill team performance
• Half-time band show
• Football Team blocking
• Basketball free throws
• Baseball batting
The Curious
Exposure Discrepancy
Students recognize that only continuous practice will result in a good showing at a brief performance or brief competition.
Yet many of the same students barely read about or practice a topic even once for an academic subject.
It appears that preparation for a known, short
performance requires practice, but preparation
for life receives only minimal effort from many
students.
Computer Fundamentals
Getting started with computer science is none too easy.
The course that you are taking assumes that this is your
first formal computer science course.
It is also assumed that you have no knowledge of programming.
If you do know some programming, fine, but it is not a prerequisite.
This means that we should start at the very beginning.
Three Ways
Computers Beat People
• Computers are faster.
• Computers are more accurate.
• Computers do not forget.
Morse Code
Electronic Memory
off   on   off   off   on   off   off   off
 0     1    0     0     1    0     0     0
Decimal (Base-10) Number System
The number system that we use is called the decimal
number system or base-10.
It is called “base-10” because it has 10 digits (0 – 9).
Rumor has it that people developed a base-10 system because of our ten fingers.
Consider the base-10 number 2,345,678
10^6        10^5      10^4     10^3    10^2   10^1   10^0
1,000,000   100,000   10,000   1,000   100    10     1
    2          3         4        5      6      7      8
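Each digit is multiplied by its place value: 2×1,000,000 + 3×100,000 + 4×10,000 + 5×1,000 + 6×100 + 7×10 + 8×1 = 2,345,678. Here is a minimal Java sketch of that idea (the class and variable names are invented for illustration); it rebuilds the number one digit at a time:

    public class PlaceValue {
        public static void main(String[] args) {
            // The digits of 2,345,678 from most significant to least significant.
            int[] digits = {2, 3, 4, 5, 6, 7, 8};
            int value = 0;
            for (int d : digits) {
                // Shift everything one decimal place to the left, then add the digit.
                value = value * 10 + d;
            }
            System.out.println(value); // prints 2345678
        }
    }

The same loop converts digits in any base; replacing 10 with 2 would rebuild a binary number instead.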
0-31 in Bases 10 & 2
base-10   base-2  |  base-10   base-2
   0           0  |     16      10000
   1           1  |     17      10001
   2          10  |     18      10010
   3          11  |     19      10011
   4         100  |     20      10100
   5         101  |     21      10101
   6         110  |     22      10110
   7         111  |     23      10111
   8        1000  |     24      11000
   9        1001  |     25      11001
  10        1010  |     26      11010
  11        1011  |     27      11011
  12        1100  |     28      11100
  13        1101  |     29      11101
  14        1110  |     30      11110
  15        1111  |     31      11111
Binary (Base-2) Number System
The number system used by computers is the binary
number system or base-2.
Only the digits 0 and 1 are used.
Remember that modern computers use electricity,
which is either on or off. 1 means on. 0 means off.
Consider the base-2 number 01000001
2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
128    64    32    16     8     4     2     1
  0     1     0     0     0     0     0     1
Can you tell that this is equal to the base-10 number 65?
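Reading the table, the only place values marked with a 1 are 64 and 1, and 64 + 1 = 65. Java's standard library can check both directions of this conversion; a minimal sketch (the class name is invented for illustration):

    public class BinaryDemo {
        public static void main(String[] args) {
            // Parse a base-2 string into an int value.
            int n = Integer.parseInt("01000001", 2);
            System.out.println(n); // prints 65

            // Convert back to a base-2 string; leading zeros are dropped.
            System.out.println(Integer.toBinaryString(65)); // prints 1000001
        }
    }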
Three Combinations
of 8 Light Bulbs
01000001 (base-2) = 65 (base-10) or char A
01000010 (base-2) = 66 (base-10) or char B
01000011 (base-2) = 67 (base-10) or char C
Bits, Bytes & Codes
• A bit is a binary digit that is either 0 (off) or 1 (on)
• 1 Byte = 8 bits
• 1 Nibble = 4 bits (½ a byte)
• 1 Byte has 2^8 or 256 different numerical combinations.
• 2 Bytes have 2^16 or 65,536 different numerical combinations.
• ASCII uses one byte to store one character.
• Unicode uses two bytes to store one character.
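This mapping is easy to see in Java, where a char is a 16-bit (two-byte) Unicode value; a minimal sketch (the class name is invented for illustration):

    public class CharCodes {
        public static void main(String[] args) {
            // A char widens to its numeric character code.
            int code = 'A';
            System.out.println(code); // prints 65

            // Casting a numeric code to char gives back the character.
            System.out.println((char) 66); // prints B

            // Character.SIZE confirms a Java char is 16 bits (2 bytes).
            System.out.println(Character.SIZE); // prints 16
        }
    }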
Motherboard & Computer Chips
motherboard
The main board with all the primary computer components. It has several computer chips attached:
Read Only Memory (ROM)
This chip stores permanent information for the
computer.
Random Access Memory (RAM)
This chip stores temporary information for the
computer.
Central Processing Unit (CPU)
This chip is the “brains” of the computer.
Measuring Memory
KB   Kilo Byte    1 thousand bytes       1,000
MB   Mega Byte    1 million bytes        1,000,000
GB   Giga Byte    1 billion bytes        1,000,000,000
TB   Tera Byte    1 trillion bytes       1,000,000,000,000
PB   Peta Byte    1 thousand terabytes   1,000,000,000,000,000
EB   Exa Byte     1 million terabytes    1,000,000,000,000,000,000
ZB   Zetta Byte   1 billion terabytes    1,000,000,000,000,000,000,000
YB   Yotta Byte   1 trillion terabytes   1,000,000,000,000,000,000,000,000
Note: Technically, a kilobyte is exactly 2^10 or 1,024 bytes.
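Those powers of two are easy to compute with bit shifts, since each left shift by 10 bits multiplies by 2^10 = 1,024; a minimal Java sketch (the class and variable names are invented for illustration):

    public class MemoryUnits {
        public static void main(String[] args) {
            long kilo = 1L << 10; // 2^10 = 1,024 bytes
            long mega = 1L << 20; // 2^20 = 1,048,576 bytes
            long giga = 1L << 30; // 2^30 = 1,073,741,824 bytes
            System.out.println(kilo + " " + mega + " " + giga);
        }
    }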
Secondary Storage Devices
Since the contents of RAM are lost when the computer is turned off, files must be saved to some secondary storage device for later use.
Hardware
Hardware refers to physical pieces of computer equipment.
This includes the main computer system unit, as well as all of the peripherals (things that plug into the computer).
Software
Software provides instructions to a computer.
The most important aspect of this course is to learn how to give
correct and logical instructions to a computer with the help of a
programming language.
Software falls into two categories:
• System Software
• Application Software.
The major Operating Systems are Windows, UNIX, Linux, and Mac OS. These are all examples of System Software.
Application Software runs an application on a computer.
Examples of Application Software are Word, Excel, PowerPoint,
Video Games, and the very programs that you will write in this
course.
The First Era
Counting Tools
A long time ago, cavemen must have realized that counting on fingers and toes was very limiting.
They started making marks on rocks, carving notches in bones, and tying knots in rope.
Eventually, mankind found more
practical ways to not only keep track
of large numbers, but also to perform
mathematical calculations with them.
The Abacus
3000 B.C.
The Abacus was originally invented in the Middle East. This rather amazing computing device is still widely used in many Asian countries today.
Napier's "Bones"
1617 A.D.
John Napier used a set of rods, known as "bones", marked with special scales that simplified multiplication by reducing it to addition.
The Slide Rule
1622
William Oughtred created the slide rule.
This device allowed sophisticated mathematical calculations and was used for centuries until it was replaced by the scientific calculator in the 1970s.
The Second Era
Gear Driven Devices
More complicated tasks required
more complicated devices. These
devices would have rotating gears.
Since they did not use electricity,
they would require some form of
manual cranking in order to
function.
One problem with devices that
have moving parts is that they
wear and break.
Numerical Calculating
Machine 1642
Blaise Pascal built the Pascaline. This was the first
numerical calculating machine. The inner workings of this
device are similar to the tumbler odometers found in old
cars. It could perform addition and subtraction.
A version of this device was still being used in the 1970s!
Jacquard's Loom
1805
Joseph Jacquard invented a special loom that accepted flexible cards punched with information in such a manner that it was possible to program how cloth would be woven. It is one of the first examples of programming.
Analytical Engine
1833
Charles Babbage invented a machine that could read instructions from a sequence of punched cards. This became the first general purpose computing machine.
Programming
1842
Ada Byron, the Countess of Lovelace, was Charles
Babbage’s assistant. She knew his device required
instructions – what we would today call programs or
software. So, over 170 years ago, she started designing
computer programs. In so doing she developed certain
programming ideas and techniques that are still used in
modern programming languages today.
The Third Era
Electro-Mechanical Devices
The term electro-mechanical device means the
device uses electricity, but still has moving parts.
These devices are not yet “fully electronic”.
Since they do use electricity, the manual cranking
is no longer needed. Since they still have moving
parts, they still break down easily.
Tabulating Machine
1889
Herman Hollerith invented a tabulating machine that recorded statistics for the U.S. Census Bureau.
The 1880 census took 8 years to tabulate by hand.
With this punch-card tabulating machine, the 1890 census
was tabulated in just 1 year.
Tabulating Machine Company
1896, 1911, 1924
In 1896, Herman Hollerith founded the
Tabulating Machine Company. In 1911, this
firm merged with three others to form the
Computing Tabulating Recording Company.
In 1924, the company was renamed
International Business Machines Corporation.
Differential Analyzer 1931
Harold Locke Hazen and
Vannevar Bush, from MIT,
built a large-scale computing
machine capable of solving
differential equations.
In 1934 a model was made at
Manchester University by
Douglas Hartree and Arthur
Porter out of an erector set.
It was less expensive, but still
"accurate enough for the
solution of many scientific
problems".
Z3 1941
In Germany during World War II, Konrad Zuse built an electro-mechanical computer capable of automatic computations.
It was the first functional,
programmable, fully
automatic digital computer.
The Z3 was destroyed in
1943 during the Allied
bombing of Berlin.
Some credit Konrad Zuse as the “inventor of the computer”.
Mark-I
1944
This electro-mechanical calculator was 51 feet long and 8
feet tall. It was the first machine that could execute long
computations automatically. These computations could
involve several numbers, each up to 23 digits in length.
Grace Hopper
Grace Hopper, then a Navy Lieutenant, was one of the
first programmers of the Mark-I. She would make so
many contributions to the world of computer science
that the United States Congress allowed her to stay in
the Navy past mandatory retirement age. She finally
retired as an Admiral in 1986 at the age of 79.
Mark-II 1947
On September 9, 1947, the Mark-II stopped working. A technician found and removed a moth from one of its relays. This was the first literal computer bug. The actual moth, taped into the machine's logbook, is now held at the Smithsonian's National Museum of American History.
Shortly after this happened, Grace Hopper
made the term “debugging” popular.
The Fourth Era
Fully Electronic Computers
with Vacuum Tubes
This is often referred to as
“The First Generation of Computers”.
Fully electronic computers do not
rely on moving parts. This makes
them faster and more reliable.
The vacuum tubes used at the time
still had their share of drawbacks.
• They were big and bulky.
• They would get hot and burn out.
ABC
1940
The very first
electronic digital
computer, the ABC,
was invented by
John Atanasoff and
Clifford Berry at Iowa
State University.
This was not a
“general purpose
computer”, nor was
it programmable.
It was specifically
designed to solve
systems of linear
equations.
Colossus
1943
This was the first electronic
digital computer that was
somewhat programmable. It
was designed by an engineer
named Tommy Flowers.
A total of 10 Colossus
computers were made.
They were used by code
breakers in England to help
decrypt the secret coded
messages of the Germans
during World War II.
ENIAC 1946
The ENIAC (Electronic Numerical
Integrator and Computer) was the
first electronic general purpose
computer.
It was invented by John Mauchly and J. Presper Eckert. This computer was twice the size of the Mark-I, contained 17,468 vacuum tubes, and was programmed by rewiring the machine.
ENIAC 1946
The ENIAC was capable of performing 385
multiplication operations per second.
In 1949, it was the first computer used to calculate pi.
The press called it “The Giant Brain.”
EDVAC
1949
The EDVAC (Electronic
Discrete Variable
Automatic Computer)
was the successor to
the ENIAC.
The main improvement
was that it was a Stored
Program Computer.
This meant it could
store a program in
electronic memory.
(about 5½ kilobytes).
UNIVAC I
1951
The UNIVAC I (UNIVersal Automatic
Computer) was the world’s first
commercially available computer.
The computer became famous when
it correctly predicted the results of
the 1952 presidential election.
The Fifth Era
Computers with Transistors
or Integrated Circuits
The invention of the transistor began
“The Second Generation of Computers”.
The University of Manchester made
the first transistor computer in 1953.
Transistors have certain key
advantages over vacuum tubes.
• They are much smaller.
• They do not get hot and burn out.
Integrated Circuit 1958
Jack Kilby, of Texas Instruments, in Richardson, Texas, developed the first integrated circuit, which has multiple transistors on a tiny, thin piece of semiconductor material, called a chip. Jack Kilby used germanium.
Six months later, Robert Noyce came up with his own idea for an improved integrated circuit, which uses silicon.
He became known as “The Mayor of Silicon Valley”.
Both gentlemen are credited
as co-inventors of the
integrated circuit.
This began “The Third
Generation of Computers”.
As technology improved,
we developed the ability
to put billions of transistors
on a tiny microchip.
Video Games
1958/1962
The first video game was called Tennis for Two.
It was created by William Higinbotham and played on a
Brookhaven National Laboratory oscilloscope.
Since this game did not use an
actual computer monitor, some
give credit for the first video game
to Spacewar!, written by Stephen Russell at MIT in 1962.
IBM System/360
1964
IBM creates a family of computers which cover a complete range of
applications. These computers work for both the math community and the
business community. All computers in this series are compatible, but sell
for different prices based on their speed. System/360 is responsible for
establishing a number of industry standards, including the 8-bit byte.
Apple II
1977
The Apple Computer
Company was created
and introduced the
Apple II Personal
Computer.
It became the first
commercially
successful personal
computer.
VisiCalc
1979
Dan Bricklin created VisiCalc, a spreadsheet program, which became the first widely sold software application.
WordStar
1979
MicroPro releases WordStar, which became the most
popular word processing program.
IBM PC
1981
IBM's entry into the
personal computer
market gave the
personal computer a
serious image as a
true business
computer and not
some sophisticated
electronic game
playing machine.
MS-DOS
1981
Microsoft, an
unknown little
company run by Bill
Gates, agreed to
produce the
operating system
for the IBM
Personal Computer
and eventually grew into a company larger than IBM.
Compaq Portable 1982
The Compaq Portable is known for two things. It was the
first portable computer. By today’s standards it was
nothing like a modern laptop. The 28 pound computer
was the size of a small suitcase. The Compaq Portable was also the first computer to be 100% compatible with the IBM PC.
The Macintosh
1984
The "Mac" was the first
commercially successful
computer with the
mouse/windows technology.
The mouse technology was
already developed earlier by
Xerox Corporation.
Windows 1.0
1985
The first Windows operating system was actually an Operating Environment which acted as a front end while MS-DOS ran in the background.
Windows 95
1995
Microsoft introduces Windows 95, which uses an interface similar to the Macintosh computer.
Windows Versions 1985-2015
Version   Home Editions                                  Professional / Power User Editions
1         Windows 1.0 – 3.1                              Windows NT 3.1
2         Windows 95                                     Windows NT 3.51
3         Windows 98                                     Windows NT 4.0
4         Windows Millennium                             Windows 2000
5         Windows XP Home Edition,                       Windows XP Professional Edition,
          Windows Home Server                            Windows Server 2003, Windows Server 2003 R2
6         Windows Vista                                  Windows Server 2008
7         Windows 7, Windows Home Server 2011            Windows Server 2008 R2
8         Windows 8, 8.1, Windows Phone 8, Windows RT    Windows Server 2012, Windows Server 2012 R2
9         Windows 10?
Tianhe-2 Supercomputer 2013
As of June 2013, China’s Tianhe-2 is the fastest computer
in the world. It can perform 33,860,000,000,000,000
floating point operations in 1 second.
That is 88 TRILLION times as fast as the ENIAC!
Program Definition
A program is a sequence of instructions
that makes a computer perform a desired
task.
A programmer is a person who writes a
program for a computer.
Programming in Machine Code
Programming in Machine Language a.k.a. Machine
Code means you are directly manipulating the 1s and
0s of the computer’s binary language.
In some cases, this means
you are manipulating the
wires of the machine.
In other cases, you are flipping switches on and off.
Even if you had the ability to “type” the 1s and 0s,
machine language would still be incredibly tedious.
Assembly Language & the EDSAC 1949
Assembly Language was first
introduced by the British with
the EDSAC (Electronic Delay
Storage Automatic Computer).
EDSAC had an assembler
called Initial Orders which
used single-letter mnemonic
symbols to represent different
series of bits.
While still tedious, entering a
program took less time and
fewer errors were made.
“Amazing Grace”
In the 1940s, Grace Hopper did not like the way we were programming
computers. There had to be a better way. The whole reason computers
were invented in the first place was to do tedious work. It should be possible to
program a computer using English words instead of 1s and 0s.
Grace Hopper wrote the first compiler (a type of translator) in 1952 for the
language A-0. This paved the way for the other languages that followed.
Many of these were also created in part or in whole by Grace Hopper.
Her immeasurable contributions to computer science have earned her the
nickname “Amazing Grace”. The Cray XE6 Hopper supercomputer and the
USS Hopper Navy destroyer are also named after her.
Types of Languages
Low-Level Languages
Languages that function at, or very close to 1s and 0s.
Powerful, but very difficult.
Examples: Machine Language, Assembly Language
High-Level Languages
Languages that use English-like words as instructions.
Easier, but less powerful.
Examples: BASIC, Pascal, COBOL, FORTRAN, PL/I, Java
Very High-Level Languages
Languages that use clickable pictures as instructions.
Example: Lego Mindstorms NXT
Computer Translators
A translator (compiler or interpreter)
translates a high-level language into
low-level machine code.
A compiler translates the entire program
into an executable file before execution.
An interpreter translates one program
statement at a time during execution.
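For example, here is the classic first Java program (the greeting text is arbitrary; the file would be saved as Hello.java):

    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, world!"); // one instruction for the computer
        }
    }

Typing javac Hello.java runs the compiler, which translates the source into a bytecode file named Hello.class; typing java Hello then executes it. Java actually blends both approaches: the compiler produces bytecode, and the Java virtual machine interprets (and further compiles) that bytecode while the program runs.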
FORTRAN
1957
The first successful high-level programming
language.
FORTRAN stands for FORmula TRANslation
Developed by a team of IBM programmers
for mathematicians, scientists and
engineers.
While good for number crunching,
FORTRAN could not handle the record
processing required for the business world.
LISP
1958
LISP (LISt Processing) was designed by John McCarthy at MIT.
It is known for being one of the languages specifically designed
to help develop artificial intelligence.
LISP introduced several important programming concepts
which are used in modern programming languages today.
COBOL
1959
Created (largely by Grace Hopper) for the
business community and the armed forces.
COBOL stands for
COmmon Business Oriented Language.
COBOL became extremely successful when the
Department of Defense adopted COBOL as its
official programming language.
FORTRAN vs. COBOL
1960s
In the early 1960s computer design was not yet
standardized and was strongly influenced by
programmers’ languages of choice.
FORTRAN programmers wanted computers that were
suited for number crunching.
COBOL programmers wanted computers that were
suited for record handling.
Companies like IBM would have different models for
“FORTRAN programmers” and “COBOL Programmers”.
In 1964, the IBM System/360 family of computers
standardized hardware and was suitable for both.
PL/I
1964
PL/I stands for Programming Language 1.
After IBM standardized hardware with System/360, they
set out to standardize software as well by creating PL/I.
This language combined all of the number crunching features of FORTRAN with all of the record handling features of COBOL.
The intention was that this language would be
“everything for everyone”.
The reality was that the FORTRAN programmers did not
like the COBOL features, the COBOL programmers did
not like the FORTRAN features, and new programmers
found the language too complex to learn.
BASIC
1964
Tom Kurtz and John Kemeny created BASIC (Beginner's All-purpose Symbolic Instruction Code) at Dartmouth College.
Their intention was that a simple language would give non-math
and non-science majors the ability to use computers.
The use of BASIC became
widespread when personal
computers hit the market.
The first was the Altair in 1975.
BASIC required little memory, and it was the only language that
could initially be handled by the first personal computers.
The Altair was shipped with Altair BASIC a.k.a. Microsoft BASIC.
Pascal
1969
College professors did not like BASIC because it did not teach
proper programming structure.
Instead, it taught quick-and-dirty programming.
Niklaus Wirth decided to create a language specifically for the
purpose of teaching programming.
He named this new language Pascal after Blaise Pascal.
Unlike PL/I, Pascal is a very lean language.
It has just enough of both the math features of FORTRAN and
the record handling features of COBOL to be functional.
In 1983, the College Board adopted Pascal as the first official
language for the AP® Computer Science Exam.
C
1972
In 1966, BCPL (Basic Combined Programming Language)
was designed at the University of Cambridge by Martin
Richards.
This language was originally intended for writing compilers.
In 1969, Ken Thompson, from AT&T Bell Labs, created a
slimmed down version of BCPL which was simply referred
to as B.
In 1972, an improved version of B was released.
This was called C.
In 1973, C was used to rewrite the kernel for the UNIX
operating system.
C++
1983
As computer programs grew more complex, a new, more powerful, and more reliable type of programming was needed. This led to the development of
Object Oriented Programming (OOP).
Bjarne Stroustrup wanted to create a new language
that uses OOP, but did not want programmers to have
to learn a new language from scratch. He took the
existing, very popular language C and added OOP to it.
This new language became C++.
In 1997, C++ replaced Pascal as the official language
for the AP® Computer Science Exam.
Medium-Level Languages
C and C++ are sometimes considered to
be medium-level languages.
This is because they have the English
commands of a high-level language as
well as the power of a low-level language.
This made C, and later C++, very popular
with professional programmers.
Java
1995
Released in 1995 by Sun Microsystems.
Java is a Platform Independent Language.
Platform Independence means that the language does
not cause problems as programs are transported
between different hardware and software platforms.
Unlike C++, where OOP is optional, Java requires you to use OOP, which caused many universities to adopt it.
For this reason, in 2003, Java replaced C++ as the
official language for the AP® Computer Science Exam.
In 2010, Oracle acquired Sun Microsystems.
Java has continued to improve in the same manner as it did when Sun Microsystems owned it.
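A minimal sketch of what OOP looks like in Java (the class and method names are invented for illustration):

    public class Greeter {
        // Data (state) lives inside the object...
        private final String name;

        public Greeter(String name) {
            this.name = name;
        }

        // ...and so does the behavior that works on that data.
        public String greet() {
            return "Hello, " + name + "!";
        }

        public static void main(String[] args) {
            Greeter g = new Greeter("world");
            System.out.println(g.greet()); // prints Hello, world!
        }
    }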
Lego Mindstorms NXT
2006
A new kind of programming has
come about that is very high-level.
In this style of programming, the
programmers can click on different
blocks. Each block performs a different task.
By creating a sequence of these blocks, you
can program a computer.
In 1998, the Lego Corporation created their first point-and-click language for use with their Lego Mindstorms robots.
In 2006, they released their next language, and decided to call
it NXT. In 2009, NXT 2.1 was released.
What We Use
During this school year, we will be learning
Java. At some schools one or more Lego
NXT labs are sometimes done before a
chapter to introduce a topic.
SneakerNet
Early personal computers were not networked at all.
Every computer was a stand-alone computer.
Some computers were hooked up to printers and many
others were not.
If you needed to print something, and you were not
directly connected to a printer, you saved your work to a
floppy disk, put on your sneakers, and walked to the
printing computer.
Sharing files was done in the same way.
Peer-To-Peer Networks
The first practical networks for personal computers were peer-to-peer
networks.
These are small groups of computers with a common purpose all
connected to each other.
These types of networks were frequently called Local Area Networks
or LANs.
Initially, the networks were true peer-to-peer networks.
This means that every computer on the network was equal.
Client-Server Networks
A server is a special computer that is connected to the
LAN for one or more purposes.
It services the other computers in the network which are
called clients.
Servers can be used for printing, logon authentication, permanent data storage, and communication.
The Department of Defense
The Internet has existed since the 1960s and has its origins in the
"Cold War." During the Cold War there was a major concern about
the country being paralyzed by a direct nuclear hit on the Pentagon.
A means of communication had to be created that was capable of continuing to work regardless of damage anywhere.
This was the birth of the Internet.
The Internet has no central
location where all the control
computers are located.
Any part of the Internet can be
damaged and all information will
then travel around the damaged area.
The Modern Internet
Normally, businesses and schools have a series of LANs that all
connect into a large network called an Intranet.
An Intranet behaves like the Internet on a local business level. This promotes security and speed, and it saves cost.
The moment a school, a business, or your home wants to connect to the outside world and the giant worldwide network known as the Internet, you gain access to millions of lines of telecommunications.
This costs money, and every person, school, or business that wants this access needs to use an Internet Service Provider, or ISP.
You pay a monthly fee to the ISP for the Internet connection.
The amount of money you pay depends on the speed of your
Internet connection.