
CIS212, Spring 2009
Introduction to Computer
Science III
Instructors: Dejing Dou, Michal Young
Office hours (Dou): Fridays 11am-12pm
Week 5: April 27 – May 1, 2009
www.cs.uoregon.edu/classes/08S/cis212
Review for Friday
 The components of a computer system
 Von Neumann architecture
 Memory
 Input/Output, Storage
Figure 5.2
Components of the Von Neumann Architecture
Figure 5.7
Overall RAM Organization
Figure 5.8
Overall Organization of a Typical Disk
Input/Output and Mass Storage
(continued)
 I/O controller
 Intermediary between central processor and I/O devices
 Processor sends request and data, then goes on with its
work
 I/O controller has a small amount of memory called an I/O
buffer
 I/O controller interrupts processor when request is
complete
Figure 5.9
Organization of an I/O Controller
The Arithmetic/Logic Unit (ALU)
 Where the actual computations are performed
 Primitive operation circuits
 Arithmetic (ADD)
 Comparison (CE)
 Logic (AND)
 Data inputs and results stored in registers
 A typical ALU has 16, 32, or 64 registers
 Multiplexor selects desired output
[Diagram: The ALU and Registers — result registers R0 .. R15; Left and Right input registers; ALU output lines L+R, L=R, L-R, L×R, L/R, L<R, L>R, L AND R; a multiplexor selects the desired result]
The Arithmetic/Logic Unit
(continued)
 ALU process
 Values for operations copied into ALU’s input
register locations
 All circuits compute results for those inputs
 Multiplexor selects the one desired result from
all values
 Result value copied to desired result register
Figure 5.12
Using a Multiplexor Circuit to Select the Proper ALU Result
The Control Unit (CU)
 Manages stored program execution
 Von Neumann Architecture: Program stored in
memory
 Task
 Fetch from memory the next instruction to be
executed
 Decode it: Determine what is to be done
 Execute it: Issue appropriate command to ALU,
memory, and I/O controllers
Machine Language Instructions
 Can be decoded and executed by control unit
 Parts of instructions
 Operation code (op code): unique unsigned-integer code
assigned to each machine language operation
 Address fields: memory addresses of the values on which
the operation will work
ADD X, Y
op code: 00001001   address fields: 0000000001100011  0000000001100100
Figure 5.14
Typical Machine Language Instruction Format
Machine Language Instructions
(continued)
 Operations of machine language; different processors
(e.g., Mac G5 vs. Intel Pentium 4) use different
instruction sets.
 Type I: Data transfer operations

Move values to and from memory and registers
LOAD X (load register R with the content of memory cell X)
STORE X (store the content of register R into memory cell X)
MOVE X Y (copy the content of memory cell X into cell Y)
Machine Language Instructions
(continued)
 Operations of machine language
 Type II: Arithmetic/logic operations

Perform ALU operations that produce numeric values
ADD X, Y     (CON(Y) = CON(X) + CON(Y))
ADD X, Y, Z  (CON(Z) = CON(X) + CON(Y))
ADD X        (R = CON(X) + R)
Others: SUBTRACT, MULTIPLY, DIVIDE, AND, OR…
Machine Language Instructions
(continued)
 Operations of machine language
 Type III: Compare operations

Compare two values and set an indicator on the basis
of the results of the compare; set register bits
COMPARE X, Y
If CON(X) > CON(Y): GT = 1, EQ = 0, LT = 0
If CON(X) = CON(Y): GT = 0, EQ = 1, LT = 0
where GT, EQ and LT are special bits (condition codes)
in a status register (condition register)
Machine Language Instructions
(continued)
 Operations of machine language
 Type IV: Branch operations

Jump to a new memory address to continue
processing
JUMP X (take the next instruction unconditionally from memory cell X)
JUMPGT X (if the GT indicator is 1, take the next instruction from
memory cell X; otherwise, take the next instruction from the next
sequential location)
HALT (stop program execution)
Review for Monday
 Machine Language and Putting all the pieces
together—the Von Neumann architecture
 ALU
 Machine Language
[Diagram: The ALU and Registers — result registers R0 .. R15; Left and Right input registers; ALU output lines L+R, L=R, L-R, L×R, L/R, L<R, L>R, L AND R; a multiplexor selects the desired result]
ADD X, Y
op code: 00001001   address fields: 0000000001100011  0000000001100100
Figure 5.14
Typical Machine Language Instruction Format
Machine Language Instructions
 Operations of machine language; different processors
(e.g., Mac G5 vs. Intel Pentium 4) use different
instruction sets.
 Type I: Data transfer: LOAD X, STORE X, MOVE X Y
 Type II: Arithmetic/logic: ADD X, Y; ADD X, Y, Z; ADD X
 Type III: Compare: COMPARE X, Y, e.g.,
CON(X) > CON(Y)
GT = 1, EQ = 0, LT = 0
 Type IV: Branch: JUMP X, JUMPGT X, HALT
Example in memory:

Address   Contents
100       Value of X
101       Value of Y
102       Value of Z

Instructions for X = Y + Z:

Address   Contents
50        LOAD 101
51        ADD 102
52        STORE 100
Control Unit Registers and
Circuits
 Parts of control unit
 Links to other subsystems
 Instruction decoder circuit
 Two special registers
 Program counter (PC): stores the memory address of the
next instruction to be executed
 Instruction register (IR): stores the code for the
current instruction
Figure 5.16
Organization of the Control Unit Registers and Circuits
E.g., executing LOAD X means:
1) address field of IR => MAR; 2) FETCH; 3) MDR => R
Putting All the Pieces
Together—the Von Neumann
Architecture
 Subsystems connected by a bus
 Bus: Wires that permit data transfer among them
 Computer repeats fetch-decode-execute cycle
indefinitely
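The fetch-decode-execute cycle above can be sketched as a toy Java interpreter. The decimal instruction encoding (opcode × 1000 + address) and the opcode numbering are illustrative assumptions, not the textbook's binary format.

```java
// A sketch of the fetch-decode-execute cycle for a toy one-register
// machine. Encoding (opcode*1000 + address) is an illustrative
// assumption, not the textbook's 16-bit format.
public class FetchDecodeExecute {
    static final int LOAD = 0, ADD = 1, STORE = 2, HALT = 3;

    // memory holds both instructions and data (the stored-program idea)
    public static void run(int[] memory) {
        int pc = 0;                        // program counter
        int r = 0;                         // the single register R
        while (true) {
            int ir = memory[pc++];         // fetch next instruction, advance PC
            int op = ir / 1000;            // decode: opcode ...
            int addr = ir % 1000;          // ... and address field
            switch (op) {                  // execute
                case LOAD:  r = memory[addr];     break;
                case ADD:   r = r + memory[addr]; break;
                case STORE: memory[addr] = r;     break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        // X = Y + Z with X, Y, Z in cells 10, 11, 12
        int[] mem = new int[13];
        mem[11] = 4;                       // Y
        mem[12] = 5;                       // Z
        mem[0] = LOAD * 1000 + 11;         // LOAD Y
        mem[1] = ADD * 1000 + 12;          // ADD Z
        mem[2] = STORE * 1000 + 10;        // STORE X
        mem[3] = HALT * 1000;              // HALT
        run(mem);
        System.out.println(mem[10]);       // prints 9
    }
}
```

Running main executes LOAD Y, ADD Z, STORE X, halts, and prints 9.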
Figure 5.18
The Organization
of a Von Neumann
Computer
Figure 5.19 lists the instruction set
Non-Von Neumann
Architectures
 Physical limitations on the speed of Von Neumann computers
(electrical signals cannot travel faster than light, about
30 cm per nanosecond; current processors execute roughly
500-3000 MIPS)
 E.g., to display animated images without flicker, a computer
must generate 30 new frames per second. If an image is
1500×1500 pixels, then 30×1500×1500 ≈ 67.5 million pixel
computations must be completed every second. If each pixel
computation needs about 100 instructions, we need 6.75 billion
instructions per second, while current computers can do
0.5-3 billion instructions per second.
Non-Von Neumann
Architectures
 Non-Von Neumann architectures explored to
bypass these limitations
 Parallel computing architectures can provide
improvements; multiple operations occur at the
same time
Non-Von Neumann
Architectures (continued)
 SIMD architecture
 Single instruction/multiple data
 Multiple processors running in parallel
 All processors execute same operation at one time
 Each processor operates on its own data
 Suitable for vector operations
Figure 5.21
A SIMD Parallel Processing System
Non-Von Neumann
Architectures (continued)
 MIMD architecture (cluster computing)
 Multiple instruction/multiple data
 Multiple processors running in parallel
 Each processor performs its own operations on its own
data
 Processors communicate with each other
 One implementation without a single administrative
organization is called grid computing
Figure 5.22
Model of MIMD Parallel Processing
Chapter 6: System Software
and Virtual Machines
 System software
 Assemblers and assembly language
 Operating systems
Introduction
 Von Neumann computer
 “Naked machine”
 Hardware without any helpful user-oriented
features
 Extremely difficult for a human to work with
Introduction
 An interface between the user and the hardware is
needed to make a Von Neumann computer usable
 Hide details of the underlying hardware from
the user
 Present information in a way that does not
require in-depth knowledge of the internal
structure of the system
Introduction (continued)
 Tasks of the interface (continued)
 Allow easy user access to the available resources
 Prevent accidental or intentional damage to
hardware, programs, and data
System Software: The Virtual
Machine
 System software
 Acts as an intermediary between users and
hardware
 Creates a virtual environment for the user that
hides the actual computer architecture
 Virtual machine (or virtual environment)
 Set of services and resources created by the
system software and seen by the user
Figure 6.1
The Role of System Software
Types of System Software
 System software is a collection of many different
programs
 Operating system (most important system
software)
 Controls the overall operation of the computer
 Communicates with the user
 Determines what the user wants
 Activates system programs, applications packages, or
user programs to carry out user requests
Figure 6.2
Types of System Software
Types of System Software
(continued)
 User interface
 Graphical user interface (GUI) provides
graphical control of the capabilities and services
of the computer
 Language services
 Assemblers, compilers, and interpreters
 Allow you to write programs in a high-level,
user-oriented language, and then execute them
Types of System Software
(continued)
 Memory managers
 Allocate and retrieve memory space
 Information managers
 Handle the organization, storage, and retrieval
of information on mass storage devices
 I/O systems
 Allow the use of different types of input and
output devices
Types of System Software
(continued)
 Scheduler
 Keeps a list of programs ready to run and selects
the one that will execute next
 Utilities
 Collections of library routines that provide
services either to user or other system routines,
e.g., text editors, drawing programs, control
panel
Shortcomings of Machine
Language
 Machine language
 Uses binary
 Allows only numeric memory addresses
 Difficult to change
 Difficult to create data
Review for Wednesday’s Lecture
 System Software and Virtual Machine
 Von Neumann architecture is not user oriented
 Machine language and Assembly Languages (to be
continued)
ADD X, Y
op code: 00001001   address fields: 0000000001100011  0000000001100100
Figure 5.14
Typical Machine Language Instruction Format
Shortcomings of Machine
Language
 Machine language
 Uses binary
 Allows only numeric memory addresses
 Difficult to change
 Difficult to create data
Assembly Language
 Assembly languages
 Designed to overcome shortcomings of machine
languages
 Create a more productive, user-oriented environment
 Earlier termed second-generation languages
 Now viewed as low-level programming languages
Examples of Assembly
Language Code
    :
    LOAD      B
    ADD       C
    SUBTRACT  SEVEN
    STORE     A
    :
(This computes A = B + C - 7.)
Figure 6.3
The Continuum of Programming Languages
Assembly Language (continued)
 Source program
 An assembly language program
 Object program
 A machine language program
 Assembler
 Translates a source program into a corresponding object
program
Assembly Language (continued)
 Advantages of writing in assembly language rather
than machine language
 Use of symbolic operation codes rather than numeric
(binary) ones
 Use of symbolic memory addresses rather than numeric
(binary) ones
 Pseudo-operations that provide useful user-oriented
services such as data generation
Figure 6.6
Structure of a Typical Assembly Language Program
Figure 6.4
The Translation/Loading/Execution Process
Advantages of Assembly
Language
 Use of symbolic operation codes rather than numeric
(binary) ones
 Use of symbolic memory addresses rather than
numeric (binary) ones
 Pseudo-operations that provide useful user-oriented
services such as data generation
Figure 6.6
Structure of a Typical Assembly Language Program
Examples of Assembly
Language Code
 Algorithmic operations
Set the value of I to 1.
    :
Add 1 to the value of I.
Examples of Assembly
Language Code (continued)
 Assembly language translation
        LOAD      ONE    --Put a 1 into register R.
        STORE     I      --Store the constant 1 into I.
           :
        INCREMENT I      --Add 1 to memory location I.
           :
I:      .DATA     0      --The index value. Initially it is 0.
ONE:    .DATA     1      --The constant 1.
Examples of Assembly
Language Code (continued)
 Arithmetic expression
A = B + C - 7
(Assume that B and C have already been assigned values)
Examples of Assembly
Language Code (continued)
 Assembly language translation
        LOAD      B       --Put the value B into register R.
        ADD       C       --R now holds the sum (B + C).
        SUBTRACT  SEVEN   --R now holds the expression (B + C - 7).
        STORE     A       --Store the result into A.
           :
        HALT
A:      .DATA     0       --These data should be placed after the HALT.
B:      .DATA     0
C:      .DATA     0
SEVEN:  .DATA     7       --The constant 7.
Translation and Loading
 Before a source program can be run, an assembler
and a loader must be invoked
 Assembler
 Translates a symbolic assembly language program into
machine language
 Loader
 Reads instructions from the object file (e.g., .exe files)
and stores them into memory for execution
Translation and Loading
(continued)
 Assembler tasks (detail later)
 Convert symbolic op codes to binary
 Convert symbolic addresses to binary
 Perform assembler services requested by the pseudo-ops
 Put translated instructions into a file for future use
Duck Machine
 A simple single register machine
 16 bit words
 4 bits instruction, 12 bits address
 16 bits of integer data, two’s-complement form
 One register R
 Program counter PC and instruction register IR
 Condition code CC (values GT, EQ, LT)
Duck Machine Instructions
 Each instruction (except halt) has one
memory address as its operand
Opcode  Instruction                      Opcode  Instruction
0000    Load from memory                 1000    Jump to address
0001    Store to memory                  1001    Jump if GT
0010    Clear (zero) memory              1010    Jump if EQ
0011    Add memory to register           1011    Jump if LT
0100    Increment memory                 1100    Jump if not EQ
0101    Subtract memory from register    1101    Input to address
0110    Decrement memory                 1110    Output from address
0111    Compare memory to register       1111    Halt
Duck Machine Assembly Language
 Mnemonics for binary opcodes, symbolic
addresses
 Any instruction may be labeled by symbolic
address
ASM      Instruction                      ASM      Instruction
LOAD     Load from memory                 JUMP     Jump to address
STORE    Store to memory                  JUMPGT   Jump if GT
CLEAR    Clear memory                     JUMPEQ   Jump if EQ
ADD      Add memory to register           JUMPLT   Jump if LT
INC      Increment memory                 JUMPNEQ  Jump if not EQ
SUB      Subtract memory from register    IN       Input to address
DEC      Decrement memory                 OUT      Output from address
COMPARE  Compare memory to register       HALT     Halt
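The two tables above are enough to sketch a Duck Machine interpreter in Java. The exact COMPARE convention (memory value compared against register R) and the array-based stand-ins for input/output are my assumptions; the slides leave them open.

```java
// Sketch of a Duck Machine interpreter from the opcode table above.
// Assumptions: COMPARE sets CC from CON(addr) vs. R; IN/OUT use an
// input array and a returned output string instead of real devices.
public class DuckMachine {
    public static String run(int[] mem, int[] input) {
        StringBuilder out = new StringBuilder();
        int pc = 0, r = 0, in = 0;
        int cc = 0;                          // condition code: -1=LT, 0=EQ, 1=GT
        while (true) {
            int word = mem[pc++] & 0xFFFF;   // 16-bit word
            int op = word >>> 12;            // top 4 bits: opcode
            int addr = word & 0x0FFF;        // low 12 bits: address
            switch (op) {
                case 0b0000: r = mem[addr]; break;                      // LOAD
                case 0b0001: mem[addr] = r; break;                      // STORE
                case 0b0010: mem[addr] = 0; break;                      // CLEAR
                case 0b0011: r += mem[addr]; break;                     // ADD
                case 0b0100: mem[addr]++; break;                        // INC
                case 0b0101: r -= mem[addr]; break;                     // SUB
                case 0b0110: mem[addr]--; break;                        // DEC
                case 0b0111: cc = Integer.compare(mem[addr], r); break; // COMPARE
                case 0b1000: pc = addr; break;                          // JUMP
                case 0b1001: if (cc > 0)  pc = addr; break;             // JUMPGT
                case 0b1010: if (cc == 0) pc = addr; break;             // JUMPEQ
                case 0b1011: if (cc < 0)  pc = addr; break;             // JUMPLT
                case 0b1100: if (cc != 0) pc = addr; break;             // JUMPNEQ
                case 0b1101: mem[addr] = input[in++]; break;            // IN
                case 0b1110: out.append(mem[addr]).append('\n'); break; // OUT
                default:     return out.toString();                     // HALT
            }
        }
    }
}
```

Loading the nine-word max-of-two object program into mem and calling run(mem, new int[]{3, 9}) returns the larger input followed by a newline.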
DM Assembly Example
Print maximum of two input numbers
.BEGIN
        IN       X
        IN       Y
        LOAD     X
        COMPARE  Y      -- set CC (Y vs. X)
        JUMPGT   YGT    -- test CC
XGT:    OUT      X
        JUMP     LAST
YGT:    OUT      Y
LAST:   HALT
X:      .DATA    0
Y:      .DATA    0
.END
-- test
Example DM Object Programs

Address   Machine language     Instruction meaning
0000      1101000000010000     IN X
0001      1101000000010001     IN Y
0010      0000000000010001     LOAD Y
0011      0111000000010000     COMPARE X
0100      1001000000000111     JUMPGT XGT
0101      1110000000010001     OUT Y
0110      1000000000001000     JUMP (to 1000)
0111      1110000000010000     XGT: OUT X
1000      1111000000000000     HALT
Another Example
Compute the sum from 1 to N by iteration

.BEGIN
        IN       N
LOOP:   LOAD     I
        COMPARE  N
        JUMPEQ   LAST
        INC      I
        LOAD     RES
        ADD      I
        STORE    RES
        JUMP     LOOP
LAST:   OUT      RES
        HALT
N:      .DATA    0
I:      .DATA    1
RES:    .DATA    1
.END
General Patterns
 Expressions
 Load values
 Perform arithmetic operations
 Store temporary results if necessary
 Assignment
 Evaluate expression – value is in register
 Store to variable
General Patterns
 Selection
 Load and Compare values to set condition code
 Jump on condition to then
        <else part>
        Jump to next
then:   <then part>
next:
General Patterns
 Repetition
loop:   compare values to set condition codes
        Jump on end condition to next
        <body of loop>
        Jump to loop
next:
 Basic statement forms of a structured programming
language (like Java) can be represented with these
patterns in assembly
Assemblers
 Assemblers translate source (assembly
language) into object (binary) code
 must translate symbolic opcodes to binary
op-codes
 must translate symbolic addresses to binary
addresses
 Two Pass Assembler
 Pass 1 - Associate symbolic labels with
address offsets (symbol table)
 Pass 2 - Associate opcodes and build output
Assembler
 Pass 1 - Associate symbolic labels with
address offsets
 Count from the beginning of the file, starting at address 0
 Add (label, binary address) pairs to symbol
table
 Report duplicate labels as error
 Pass 2 - Translate assembly code to binary
 Symbolic opcodes to binary
 Symbolic addresses (labels) to binary
 Report unknown addresses and opcodes as
errors
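The two passes above can be sketched as a minimal Java assembler for the Duck Machine syntax used in these slides. The tokenization details and the reduced error handling are my assumptions; a real assembler would do more.

```java
import java.util.*;

// Minimal two-pass assembler sketch for Duck Machine assembly.
// Pass 1 builds the symbol table; pass 2 emits 16-bit object words.
public class TwoPassAssembler {
    static final Map<String, Integer> OPS = Map.ofEntries(
        Map.entry("LOAD", 0),   Map.entry("STORE", 1),  Map.entry("CLEAR", 2),
        Map.entry("ADD", 3),    Map.entry("INC", 4),    Map.entry("SUB", 5),
        Map.entry("DEC", 6),    Map.entry("COMPARE", 7),Map.entry("JUMP", 8),
        Map.entry("JUMPGT", 9), Map.entry("JUMPEQ", 10),Map.entry("JUMPLT", 11),
        Map.entry("JUMPNEQ", 12),Map.entry("IN", 13),   Map.entry("OUT", 14),
        Map.entry("HALT", 15));

    public static List<Integer> assemble(List<String> source) {
        Map<String, Integer> symbols = new HashMap<>(); // label -> address
        List<String[]> body = new ArrayList<>();
        // Pass 1: strip labels, record (label, address) pairs.
        int addr = 0;
        for (String line : source) {
            String s = line.trim();
            if (s.isEmpty() || s.equals(".BEGIN") || s.equals(".END")) continue;
            if (s.contains(":")) {
                String label = s.substring(0, s.indexOf(':')).trim();
                if (symbols.put(label, addr) != null)
                    throw new IllegalStateException("duplicate label " + label);
                s = s.substring(s.indexOf(':') + 1).trim();
            }
            body.add(s.split("\\s+"));
            addr++;                     // each instruction or .DATA is one word
        }
        // Pass 2: translate opcodes and symbolic addresses to binary.
        List<Integer> object = new ArrayList<>();
        for (String[] tok : body) {
            if (tok[0].equals(".DATA")) {          // pseudo-op: data generation
                object.add(Integer.parseInt(tok[1]));
                continue;
            }
            Integer op = OPS.get(tok[0]);
            if (op == null) throw new IllegalStateException("unknown opcode " + tok[0]);
            int operand = 0;
            if (tok.length > 1) {
                Integer target = symbols.get(tok[1]);
                if (target == null) throw new IllegalStateException("unknown address " + tok[1]);
                operand = target;
            }
            object.add((op << 12) | operand);      // 4-bit opcode, 12-bit address
        }
        return object;
    }
}
```

Fed the assembler-example source from the next slide, it reproduces the object words shown there (e.g. IN X at address 7 assembles to 1101000000000111).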
Assembler Example

Source:
.BEGIN
        IN      X
        IN      Y
        LOAD    X
AG:     ADD     Y
        STORE   RE
        OUT     RE
        HALT
X:      .DATA   0
Y:      .DATA   0
RE:     .DATA   0
.END

Symbol Table:
(X,  000...0111)
(Y,  000...1000)
(RE, 000...1001)
(AG, 000...0011)

Object:
1101000000000111
1101000000001000
0000000000000111
0011000000001000
0001000000001001
1110000000001001
1111000000000000
0000000000000000
0000000000000000
0000000000000000
Symbol Tables
 Core element of all language translators
 maintain information about variables,
names, labels
 Two symbol tables in an assembler
 Symbolic address to binary address table
 Symbolic opcode to binary opcode table
Symbol Tables
 Format of table
 Elements are pairs: symbol and binary value
 Retrieved by symbol as key
 Implementation affects performance
 List is O(n) time
 Sorted array is O(log n) time
 Hash table is O(1) time
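A minimal sketch of the hash-table choice, using Java's HashMap (expected O(1) per operation):

```java
import java.util.HashMap;

// Sketch: a symbol table of (symbol, binary value) pairs, retrieved
// by symbol as key. HashMap gives expected O(1) define and lookup.
public class SymbolTable {
    private final HashMap<String, Integer> table = new HashMap<>();

    // Record a pair; returns false if the symbol is a duplicate.
    public boolean define(String symbol, int value) {
        return table.putIfAbsent(symbol, value) == null;
    }

    // Retrieve the binary value for a symbol; null if undefined.
    public Integer lookup(String symbol) {
        return table.get(symbol);
    }
}
```

An assembler would report a false from define as a duplicate-label error, and a null from lookup as an unknown address.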
Midterm Review
 Correctness and Complexity of Algorithms
 Binary Numbers, Boolean Logic, Von Neumann
Architecture
 Virtual Machine, Assembly Language and Assembler
Example Questions:
http://www.cs.uoregon.edu/classes/09S/cis212/midtermhelp.html
Computer Science and Algorithms
 I. What is Computer Science?
 Gibbs and Tucker: The study of algorithms,
including their formal and mathematical properties,
hardware realizations, linguistic realizations and
applications
 II. What are Algorithms and Problems?
 Algorithm: a well ordered collection of unambiguous,
computable steps that produce a result when executed
and halt in a finite amount of time
 Drawbacks of Natural Language and High level
programming languages
 Better solution: Pseudo code
Characteristics of an algorithm
 Correctness (e.g., Loop Invariant)
 Ease of understanding (clarity)
 Elegance: How clever or sophisticated is the
algorithm?
 Efficiency (Complexity)
Algorithm Correctness: Loop
Invariants
 A loop invariant is a property (statement)
that is true
 Before the loop (after initialization)
 Before every iteration of the loop
 After the loop completes
Correctness with Loop Invariants
 To prove an iterative algorithm is correct:
 Show invariant true before loop starts
 Show loop execution preserves invariant
 Show the loop terminates
 Show loop termination condition plus the
invariant implies post condition
Insert
void insert(int [] a, int i) {
    // Precondition: a[0 .. i-1] is sorted
    int j = i;
    while ((j > 0) && (a[j] < a[j-1])) {
        // Invariant: a[0..j-1] and a[j..i] are sorted
        int temp = a[j];        // exchange a[j] and a[j-1]
        a[j] = a[j-1];
        a[j-1] = temp;
        j--;
        // Invariant: a[0..j-1] and a[j..i] are sorted
    }
    // Halting condition: j == 0 or a[j-1] <= a[j]
}
// Postcondition: a[0..i] is sorted
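A hypothetical driver for the routine above: calling insert for i = 1 .. n-1 gives insertion sort, since each call's precondition (a[0..i-1] sorted) is exactly the previous call's postcondition.

```java
// Sketch: insertion sort built from repeated calls to insert().
public class InsertDemo {
    static void insert(int[] a, int i) {
        // Precondition: a[0..i-1] is sorted
        int j = i;
        while (j > 0 && a[j] < a[j - 1]) {
            int temp = a[j];            // exchange a[j] and a[j-1]
            a[j] = a[j - 1];
            a[j - 1] = temp;
            j--;
        }
        // Postcondition: a[0..i] is sorted
    }

    public static int[] sort(int[] a) {
        for (int i = 1; i < a.length; i++)
            insert(a, i);               // a[0..i] sorted after each call
        return a;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(sort(new int[]{5, 2, 8, 1})));
        // prints [1, 2, 5, 8]
    }
}
```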
Complexity of Algorithms
 Time
 Number of steps executed
Assignments, tests, comparison,
arithmetic operations, etc.
 Space
 Amount of memory needed
 Variables, arrays, lists, dynamic objects
 How do the measures change with a problem's size?
Binary Search Algorithm
 Time Complexity
 Best case: 1 comparison, O(1)
 Worst case: lg n comparisons (m comparisons, where 2^m = n, so m = lg n), O(lg n)
 lg n: the number of times n can be divided by two before reaching 1
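The lg n worst case can be seen by instrumenting binary search to count its halving steps; counting one three-way comparison per iteration is a modeling assumption.

```java
// Sketch: binary search with a comparison counter. Each loop
// iteration halves the remaining range and is counted as one
// (three-way) comparison.
public class BinarySearchCount {
    static int comparisons;

    public static int search(int[] sorted, int target) {
        comparisons = 0;
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            comparisons++;                       // one comparison per halving
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                               // not found
    }
}
```

Searching a sorted array of 1024 elements for an absent smaller key costs exactly lg 1024 = 10 such comparisons; finding the middle element costs 1 (the best case).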
Complexity Classes
 O(1) -- constant
 O(log n) -- logarithmic
 O(n) -- linear
 O(n log n)
 O(n^2) -- quadratic
 O(n^k) -- polynomial
 O(k^n) -- exponential
Big-O Complexity
 An algorithm of complexity f(n) is said to be of
order g(n), denoted O(g(n)) if there is a constant c
such that f(n) ≤ cg(n) for all n > n0
 Prove: n^2 + n is O(n^2) (by definition)
 Let c = 2 and n0 = 1; then c·n0^2 = 2 ≥ n0^2 + n0 = 2
 For n > 1, n^2 + n ≤ n^2 + n^2 = 2n^2
 Prove: 2n^3 + 10n^2 + 1000 is O(n^3)
 Let c = 1012 and n0 = 1; for n ≥ 1,
2n^3 + 10n^2 + 1000 ≤ 2n^3 + 10n^3 + 1000n^3 = 1012n^3
Big-O Analysis
 Relation of common functions:
 O(1) < O(log2 n) < O(n) < O(n·log2 n) < O(n^2) <
O(n^2·log2 n) < O(n^3) < O(2^n) < O(n!) (factorial)
 Properties of Big-O
 ignore all constants
 O(f(n)) + O(g(n)) = O(f(n) + g(n)) = O(max[f,g])
 O(f(n)) * O(g(n)) = O(f(n) * g(n))
Iterative Complexity
 Assume do-x, do-y, and do-z have complexities O(1), O(n), and
O(n·log n).

void do-it(int [] a) {                 // a has size n
    for (int i=0; i<n; ++i) {          // n iterations
        if ((i%2)==0) {
            for (int j=0; j<n; ++j) {  // n iterations
                do-x(a);               // O(1)
                do-y(a);               // O(n)
            }                          // loop: O(n^2)
            do-z(a);                   // O(n·log n)
        }                              // then-branch: O(n^2)
        else {
            for (int k=0; k<n; ++k) {  // n iterations
                do-z(a);               // O(n·log n)
                do-y(a);               // O(n)
            }                          // loop: O(n^2·log n)
        }                              // else-branch: O(n^2·log n)
    }
}                                      // total: O(n^3·log n)
Selection in a Loop
 Consider how often each branch is taken
 Assume f(n) is O(n^2) and g(n) is O(n·log n)

for (int i=0; i<n; ++i) {
    if ((i%2) == 0)
        x += f(n);    // taken n/2 times: O(n^3) in total
    else
        x += g(n);    // taken n/2 times: O(n^2·log n) in total
}                     // total: O(n^3)

for (int i=0; i<n; ++i) {
    if (i < 10)
        x += f(n);    // taken at most 10 times: O(n^2) in total
    else
        x += g(n);    // taken about n times: O(n^2·log n) in total
}                     // total: O(n^2·log n)
Hardware
 The binary algebra
 Boolean logic and gates
 Building computer circuits
 Control circuits
Positive, Negative, Negative Zero
 Simple way (sign-magnitude): leftmost bit 0 => positive, 1 => negative
1 1 1 0 0 0 1 (-49)    0 0 0 0 0 1 1 (+3)
Question: how does the computer know whether (1 1 1 0 0 0 1) is -49 or
+113? It depends on context (signed vs. unsigned).
 Is (1 0 0 0 0), a negative zero (-0), equal to (0 0 0 0 0)? =>
Two's complement representation:
1 0 0 0 0 => -(complement of the unsigned part (0 0 0 0) + 1)
= -((1 1 1 1) + 1) = -(1 0 0 0 0) = -16
1 0 0 1 1 => -((1 1 0 0) + 1) = -(1 1 0 1) = -13
1 0 0 1 1 + 0 1 1 0 1 = 0 0 0 0 0 (omit the last carry)
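The 5-bit two's-complement examples above can be checked with a small Java helper; the width parameter is there only so the sketch works for any bit count.

```java
// Sketch: interpret the low `bits` bits of a word as a
// two's-complement value.
public class TwosComplement {
    public static int value(int word, int bits) {
        int sign = 1 << (bits - 1);        // weight of the sign bit
        int mag = word & (sign - 1);       // remaining (unsigned) bits
        // If the sign bit is set, the value is mag minus 2^(bits-1).
        return (word & sign) != 0 ? mag - sign : mag;
    }

    public static void main(String[] args) {
        System.out.println(value(0b10000, 5));  // prints -16
        System.out.println(value(0b10011, 5));  // prints -13
        // -13 + 13 wraps to 0 once the carry out of bit 5 is dropped:
        System.out.println(value((0b10011 + 0b01101) & 0b11111, 5));  // prints 0
    }
}
```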
Binary Representation of Fractional Numbers
Example: -.111 × 2^-1

Sign of mantissa: 1      Mantissa (9 bits): 111000000
Sign of exponent: 1      Exponent (5 bits): 00001

Stored as: 1 111000000 1 00001 = 1111000000100001
Figure 4.15
The Three Basic Gates, Their Symbols and Truth Table
Figure 4.24
The 1-ADD Circuit and Truth Table
An Addition Circuit (continued)
si = ((NOT(ai)) AND (NOT(bi)) AND ci)
OR ((NOT(ai)) AND bi AND (NOT(ci)))
OR (ai AND (NOT(bi)) AND (NOT(ci)))
OR (ai AND bi AND ci)
ci+1 = ((NOT(ai)) AND bi AND ci)
OR (ai AND (NOT(bi)) AND ci)
OR (ai AND bi AND (NOT(ci)))
OR (ai AND bi AND ci)
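The sum and carry expressions above can be transcribed into Java boolean logic and checked against ordinary addition of the three input bits:

```java
// Sketch: the 1-ADD (full adder) sum and carry expressions as
// boolean logic, one term per product in the formulas above.
public class OneBitAdder {
    // si: sum bit of ai + bi + ci
    public static boolean sum(boolean a, boolean b, boolean c) {
        return (!a && !b && c) || (!a && b && !c)
            || (a && !b && !c) || (a && b && c);
    }

    // ci+1: carry out of ai + bi + ci
    public static boolean carry(boolean a, boolean b, boolean c) {
        return (!a && b && c) || (a && !b && c)
            || (a && b && !c) || (a && b && c);
    }
}
```

For every one of the eight input rows of the truth table, sum matches the low bit of a + b + c, and carry is set exactly when two or more inputs are 1.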
Figure 4.29
A 2-to-4 Decoder Circuit
Computer System
 The components of a computer system
 ALU, CU, Memory, I/O
 Machine Language and Putting all the pieces
together—the Von Neumann architecture
 Non-Von Neumann architectures
Figure 5.18
The Organization
of a Von Neumann
Computer
ADD X, Y
op code: 00001001   address fields: 0000000001100011  0000000001100100
Figure 5.14
Typical Machine Language Instruction Format
Non-Von Neumann
Architectures
 Non-Von Neumann architectures explored to
bypass the physical limitations
 Parallel computing architectures can provide
improvements; multiple operations occur at the
same time
 SIMD: single instruction stream/multiple data stream
 MIMD: multiple instruction stream/multiple data stream
 Cluster computing and Grid computing
Virtual Machine, Assembler
 System Software and Virtual Machine
 Von Neumann architecture is not user oriented
 Assembly Languages
 Duck Machine and Assembler
 Operating System
Figure 6.1
The Role of System Software
Figure 6.2
Types of System Software
Figure 6.3
The Continuum of Programming Languages
Duck Machine
 A simple single register machine
 16 bit words
 4 bits instruction, 12 bits address
 16 bits of integer data, two’s-complement form
 One register R
 Program counter PC and instruction register IR
 Condition code CC (values GT, EQ, LT)
Duck Machine Instructions
 Each instruction (except halt) has one
memory address as its operand
Opcode  Instruction                      Opcode  Instruction
0000    Load from memory                 1000    Jump to address
0001    Store to memory                  1001    Jump if GT
0010    Clear (zero) memory              1010    Jump if EQ
0011    Add memory to register           1011    Jump if LT
0100    Increment memory                 1100    Jump if not EQ
0101    Subtract memory from register    1101    Input to address
0110    Decrement memory                 1110    Output from address
0111    Compare memory to register       1111    Halt
Duck Machine Assembly Language
 Mnemonics for binary opcodes, symbolic
addresses
 Any instruction may be labeled by symbolic
address
ASM      Instruction                      ASM      Instruction
LOAD     Load from memory                 JUMP     Jump to address
STORE    Store to memory                  JUMPGT   Jump if GT
CLEAR    Clear memory                     JUMPEQ   Jump if EQ
ADD      Add memory to register           JUMPLT   Jump if LT
INC      Increment memory                 JUMPNEQ  Jump if not EQ
SUB      Subtract memory from register    IN       Input to address
DEC      Decrement memory                 OUT      Output from address
COMPARE  Compare memory to register       HALT     Halt
Assembler Example

Source:
.BEGIN
        IN      X
        IN      Y
        LOAD    X
AG:     ADD     Y
        STORE   RE
        OUT     RE
        HALT
X:      .DATA   0
Y:      .DATA   0
RE:     .DATA   0
.END

Symbol Table:
(X,  000...0111)
(Y,  000...1000)
(RE, 000...1001)
(AG, 000...0011)

Object:
1101000000000111
1101000000001000
0000000000000111
0011000000001000
0001000000001001
1110000000001001
1111000000000000
0000000000000000
0000000000000000
0000000000000000