
National Center for Research in Advanced Information and Digital Technologies
Henry Kelly
Federation of American Scientists
NITRD Briefing
September 16, 2008
The National Center for Research in Advanced Information and Digital Technologies is part of the reauthorization of the Higher Education Act (section 802), approved by Congress on July 31, 2008, and signed into law by President Bush on August 14, 2008.
Purposes
• Research, development, and demonstration of learning technologies that could include simulations, games, virtual worlds, intelligent tutors, performance-based assessments, and innovative approaches to pedagogy that these tools can implement.
• Design and testing of components needed to build prototype systems.
• Research to determine how these new systems can best be used to build interest and expertise in learners of different ages and backgrounds.
Management:
• Independent, nonprofit organization with its own Board of Directors.
• Can receive funding from any federal agency and from private organizations.
• The bill authorizes expenditure of funding from the Department of Education; $50 million is being requested for Fiscal Year 2009.
• Center staff will develop a research plan and ask for competitive proposals. The research will be selected by a peer-review process.
• All material resulting from the research will quickly be made freely and nonexclusively available to the public (waivers that “would result in significant public benefits” are possible but require unanimous Board approval).
Instructional Design
• Create authentic challenges; problem-centered learning
• Continuous assessment of expertise (what can the learner do?)
  – Varied and contrasting examples
  – Demonstration
  – Practice opportunities
• Provide relevant information where and when it’s needed (automated & human)
• Reflection
• Feedback
• Assessment
• Skills refreshment
Sources: Bransford; Jonassen, D. H.; Hannafin, M. J., Land, S., & Oliver, K.
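These design elements form a loop: pose an authentic challenge, observe what the learner can do, and feed the result back into practice and refreshment. A minimal sketch of that loop as data structures, assuming hypothetical names (Challenge, LearnerRecord) that are illustrations only, not anything from the Center's plans:

```python
# Sketch of the instructional-design loop above: authentic challenges,
# continuous assessment of what the learner can do, and skills refreshment.
# All class and field names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Challenge:
    goal: str                      # the authentic, problem-centered task
    examples: list[str]            # varied and contrasting examples
    practice_tasks: list[str]      # practice opportunities

@dataclass
class LearnerRecord:
    skills: dict[str, float] = field(default_factory=dict)  # skill -> mastery (0..1)

    def assess(self, skill: str, performance: float) -> None:
        """Continuous assessment: blend each observed performance into mastery."""
        prior = self.skills.get(skill, 0.0)
        self.skills[skill] = 0.8 * prior + 0.2 * performance

    def needs_refresh(self, skill: str, threshold: float = 0.6) -> bool:
        """Skills refreshment: flag mastery that sits below a working threshold."""
        return self.skills.get(skill, 0.0) < threshold
```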
Game Features Attractive for Learning
• Authentic, motivating challenges increase time on task
• Personalization
• Continuous assessment (and the right to fail)
• Contextual bridging closes the gap between what is learned and its use
• Scaffolding provides cues and hints to keep the learner progressing (see the sketch below)
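One way to read the scaffolding bullet above is as an escalating hint schedule that preserves the right to fail. A minimal sketch under that assumption; the class and method names are invented for illustration:

```python
# Sketch of scaffolding: cues and hints are released only as the learner
# stalls, so failure remains possible. Hypothetical names throughout.
class Scaffold:
    def __init__(self, hints: list[str]):
        self.hints = hints          # ordered from gentle cue to strong hint
        self.level = 0

    def on_failed_attempt(self) -> str | None:
        """Release the next hint after a failed attempt, if any remain."""
        if self.level < len(self.hints):
            hint = self.hints[self.level]
            self.level += 1
            return hint
        return None                 # out of hints: hand off to a human tutor

    def on_success(self) -> None:
        """Reset once the learner progresses on their own."""
        self.level = 0
```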
Inquiry Management
• Stimulate deep questions (failing to achieve a compelling goal can do this)
• A good answer depends on:
  – Technical accuracy
  – Knowledge about the person asking
  – Knowledge of the context of the question
  – An instructional strategy (answer with another question?)
• Response includes knowledge of:
  – Content
  – Individual learner
  – Context
  – Pedagogical strategy
• Multimedia questions and responses (e.g., “what’s that?” [points at a cell])
• Mixture of artificial and human intelligence
Sources: Graesser and Person; Beck, I. L., McKeown, M. G., Hamilton, R. L., & Kucan, L.; Miyake, N., & Norman, D. A.
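The slide enumerates the inputs a good response needs: content, a model of the individual learner, the question's context, and a pedagogical strategy, with human tutors backing up the AI. A minimal sketch of composing a response from those four inputs; every name is a hypothetical illustration:

```python
# Sketch of inquiry management: a response drawn from content knowledge,
# a learner model, question context, and a pedagogical strategy, with a
# human tutor as fallback. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    context: str                   # e.g. "learner is pointing at a cell"

@dataclass
class Learner:
    name: str
    mastery: float                 # rough 0..1 estimate of content mastery

def respond(q: Question, learner: Learner, content: dict[str, str]) -> str:
    answer = content.get(q.text)
    if answer is None:
        return "ESCALATE_TO_HUMAN_TUTOR"   # mixture of artificial and human intelligence
    if learner.mastery > 0.7:
        # Pedagogical strategy: answer a strong learner with another question.
        return f"What do you think, given that {q.context}?"
    return answer                          # direct, technically accurate answer
```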
Assessment
• Measures of expertise that can form the basis of competitive approaches
• Measures authentic to learners, employers, and instructors
• Continuous, multi-dimensional assessments of content mastery (how would an expert behave?)
• Measures competence using a challenge that makes sense to the learner, instructor, and employers
• Performance-based
• Reproducible
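Read together, these bullets describe a record that is performance-based, multi-dimensional, and reproducible: the same challenge and evidence log should always yield the same scores. A minimal sketch with hypothetical field names:

```python
# Sketch of a performance-based assessment record: multi-dimensional
# scores tied to a challenge, with an evidence log so scoring is
# reproducible. Field names are hypothetical.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class PerformanceAssessment:
    challenge_id: str                  # the authentic challenge attempted
    dimensions: dict[str, float]       # e.g. {"diagnosis": 0.9, "procedure": 0.6}
    evidence_log: tuple[str, ...]      # observed actions backing each score

    def report(self) -> str:
        """Serialize so learners, instructors, and employers see one record."""
        return json.dumps(asdict(self), sort_keys=True)
```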
“Two years ago everybody would show up on Monday and they graduate from school two months later. Not anymore.”
“We are moving to performance-based testing as quickly as we can.”
VADM Kevin Moran, Commander, Naval Education and Training Command, 2006
Time to Train Results
[Chart: students (0–300) vs. time to train as a percentage of the legacy timeline (25%–100%). Series: Legacy (27 May 2005), Active Learners, and Passive Learners (18 Jun 2004 / 30 Sep 2004).
Annotation near 25%: very independent, previous work / higher education, tend to higher ASVAB (note: some low ASVAB scorers in this range).
Annotation near 100%: desire continuous direction, weak work experience, tend to lower ASVAB (note: some high ASVAB scorers in this range).
Population (n) = 11,836 (ATT: 10,554; IC: 283; GM: 334; TM: 140; FC: 271; ET: 254)]
Evaluation
• Explore gains in deep expertise
• Know where and when to use the new tools (what groups, what concepts)
• Group AND individual evaluation
• Diverse demographics
• Continuous feedback to research teams
Use Cases
[Diagram: two linked workflows, “Build a World” and “Using a World.”
Build a World: 3D object producers; experts, museums, archives, …; static objects & associated metadata; AI characters & associated metadata, scripts, behaviors, etc.; converters; reviewers; production schedules; data on team.
Using a World: instructor/team leader; tutors; role players; learners; players; experts; visitors; faculty avatars; learning modules; Learning Management Systems with performance tests, user ratings, and student records; converters.]
http://vworld.fas.org/wiki/Main_Page
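The diagram suggests concrete data handoffs: assets and AI characters carry metadata into learning modules, and module runs produce performance tests, user ratings, and student records for a Learning Management System. A minimal sketch of those records, with field names that are assumptions rather than anything from the wiki:

```python
# Sketch of the Use Cases handoffs: world assets with metadata feed
# learning modules, whose runs are exported as LMS records. All field
# names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class WorldAsset:
    kind: str                      # "static_object" or "ai_character"
    metadata: dict[str, str]       # descriptive metadata; scripts/behaviors for AI

@dataclass
class LearningModule:
    title: str
    assets: list[WorldAsset] = field(default_factory=list)

def export_to_lms(module: LearningModule, student_id: str, score: float) -> dict:
    """Package one module run as the records an LMS would store."""
    return {
        "module": module.title,
        "student": student_id,
        "performance_test": score,
        "asset_count": len(module.assets),
    }
```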