Table 11-1: Features of On-line Communities


On-Line Communities



Webster's New World Dictionary of the
American Language defines "community" as
"people living in the same district, city, etc.,
under the same laws."
In cyberspace, community can be described
as “synchronous on-line settings” (White,
2002).
LambdaMOO Community – A “rape” in
cyberspace.
Table 11-1: Features of On-line Communities

Positive feature: Empower individuals by giving them choices regarding community membership.
Negative feature: Can easily discourage face-to-face interaction between individuals.

Positive feature: Enable people living in geographically remote locations to interact regularly as members of the same community.
Negative feature: Can facilitate anonymity, making it easier to perform morally objectionable acts that are not tolerated in physical communities.

Positive feature: Tend to provide individuals with greater freedom.
Negative feature: Tend to increase social and political fragmentation.
Democracy and the Internet


Does the Internet facilitate democracy
and democratic ideals?
Should the Internet be used as a tool to
promote democracy?
Sunstein’s Argument for why the Internet does not
promote “Deliberative Democracy”









Because individuals use filtering schemes that provide them with
information that
(a) reinforces ideas that they already hold and
(b) screens out novel information and different points of view, and
Because an increasing number of people get their information only
from the Internet,
The Internet will likely:
(c) insulate more and more people from exposure to new ideas as well
as to ideas that may question or conflict with their own, and
(d) lead to greater isolation and polarization among groups, and
(e) encourage extremism and radicalism rather than fostering
compromise and moderation, and
(f) reduce the need for the traditional give-and-take process in
resolving differences in a public forum.
Therefore, behavior facilitated by the Internet tends to undermine
deliberative democracy and corresponding democratic ideals.
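To make the filtering mechanism in this argument concrete, here is a minimal illustrative sketch (in Python) of a personalization filter. The profile, news items, and matching rule are invented for illustration and are not drawn from Sunstein; the sketch only shows how a scheme that passes along items agreeing with a user's existing views screens out conflicting viewpoints, as in premises (a) and (b).

```python
# Illustrative sketch (hypothetical data): a personalized filter that only
# passes items matching views the user already holds.

user_profile = {"tax_policy": "oppose", "climate_action": "support"}

news_items = [
    {"headline": "Op-ed: the case against the new tax",    "stance": {"tax_policy": "oppose"}},
    {"headline": "Op-ed: why the new tax is needed",        "stance": {"tax_policy": "support"}},
    {"headline": "Report: benefits of climate regulation",  "stance": {"climate_action": "support"}},
    {"headline": "Report: costs of climate regulation",     "stance": {"climate_action": "oppose"}},
]

def matches_profile(item, profile):
    """Keep an item only if every stance it takes agrees with the user's profile."""
    return all(profile.get(topic) == view for topic, view in item["stance"].items())

personalized_feed = [i["headline"] for i in news_items if matches_profile(i, user_profile)]
print(personalized_feed)
# Only the two agreeable items survive; the conflicting viewpoints never reach the reader.
```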

Graham’s critique




The Internet might, perhaps unwittingly,
strengthen the "worst aspects" of democracy,
because Internet technology facilitates:
(i) political and social fragmentation;
(ii) irrationality (i.e., irrational prejudice in
"direct democracies");
(iii) powerlessness (in "representative
democracies").
Table 11-2: Considerations for Using the Internet to Promote Democracy

Advantage: Empowers individuals by giving them choices regarding on-line communities.
Disadvantage: Increases social fragmentation and discourages rational debate.

Advantage: Promotes individual freedom and decision-making.
Disadvantage: Increases levels of irrationality and prejudice (in direct democracies).

Advantage: Gives individuals a voice in governance issues in cyberspace.
Disadvantage: Increases levels of powerlessness for individuals (in representative democracies).
Virtual Reality




Three different senses of “virtual.”
Sometimes "virtual" is contrasted with "real,"
as in cases where virtual objects are
distinguished from "real" objects.
Other times, "virtual" is contrasted with the
term "actual." For example, a person might
say that she is "virtually finished" her project.
A third use of "virtual" ca express a feeling
that one has "as if" he or she were physically
present in a situation.
Virtual Reality Technologies





Brey (1999) defines virtual reality (VR)
technology as “a three-dimensional
interactive computer-generated environment
that incorporates a first-person perspective.”
Three important features in Brey's definition
of VR technology are:
(1) interactivity;
(2) the use of three-dimensional graphics;
(3) a first-person perspective.
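As a purely illustrative aside, Brey's three defining features can be captured in a small data structure. The class, field names, and example applications below are hypothetical and are not part of Brey's account; the sketch simply checks whether all three features are present.

```python
# Minimal sketch: a record capturing Brey's three features of VR technology.
# The class and example values are hypothetical illustrations, not Brey's own.
from dataclasses import dataclass

@dataclass
class VRApplication:
    name: str
    interactive: bool            # (1) the user can act on the environment
    three_dimensional: bool      # (2) rendered with three-dimensional graphics
    first_person: bool           # (3) experienced from a first-person perspective

    def is_vr(self) -> bool:
        """On Brey's definition, all three features must be present."""
        return self.interactive and self.three_dimensional and self.first_person

print(VRApplication("flight simulator", True, True, True).is_vr())   # True
print(VRApplication("text-based MOO", True, False, False).is_vr())   # False (compare Figure 11-1)
```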
Figure 11-1: Virtual Environments

Virtual environments fall into two categories:
On-line communities: electronic forums, MOOs, MUDs, etc. (can be two-dimensional, text-based representations).
VR technologies: VR games, VR applications/models, etc. (must be three-dimensional graphical interfaces).
Figure 11-2: Summary of Brey's Scheme for Analyzing Ethical Issues in VR

Ethical aspects of VR fall into two categories:
Behavioral issues in VR environments (interactivity). Example: the LambdaMOO case.
Representational issues concerning the non-virtual entities depicted in VR applications, which divide into:
Misrepresentation: virtual entities fail to correspond accurately to the non-virtual entities represented (distortion in representation).
Biased representation: virtual entities are accurate in terms of the characteristics represented, but are presented in a way that reflects a bias.
Personal Identity and
Cybertechnology



Van Gelder (1996) “The strange case of
the electronic lover.”
Turkle (1984) – the computer as a
“medium of self discovery.”
Turkle (1995) – “MUDs, MUD-Selves, and Distributed Personal Identities.”
Self-Expression and Self-Discovery

Turkle (1984) notes that standalone computers enabled people to try out:




new ways of expressing themselves;
new cognitive styles;
different methods of problem solving.
Turkle (1995) argues that computers
have since moved from being mere
“calculators” to “simulators.”
“MUD Selves” and Distributed Personal Identities

MUDs (Multi-User Dimensions).


LambdaMOO is a variation of a MUD.
In MUDs, people can express “multiple
identities” – a person can be:



one’s actual self;
male, female; young, old, etc.;
even a non-human such as a “furry rabbit.”
MUD Selves (Continued)




Turkle notes that the “self” can be the “sum of one’s distributed presence.”
In Victor/Victoria (in the physical world), one moved in and out of gender roles by “stepping in and out of character.”
In MUDs, people have parallel lives; “Real Life” (RL) is just one window.
Our Sense of “Self” in the Cyber
Era

Three great eras or epochs:
1. The Agricultural Age;
2. The Industrial Age;
3. The Information Age.

What are the impacts for the Cyber era?



Self in the Cyber era (continued)

Williams (1997) considers the impacts of three
important discoveries and describes their significance
in the following way:
The first such milestone, a great (and greatly humbling) challenge to
our sense of human beings as uniquely important, came when the
Copernican revolution established that Earth, the human home, was
not at the center of the universe. The second milestone was Charles
Darwin's conclusion that emergence of Homo sapiens was...the result
of evolution from lower species by the process of natural selection. The
third milestone resulted from the work of Karl Marx and Sigmund
Freud, which showed intellectual, social, and individual creativity to be
the result of non-rational (unconscious) libidinal or economic forces –
not as has been believed, the products of the almost god-like powers
of the human mind.
Cyber-technology as a
"Defining Technology"





Bolter (in Turing’s Man, 1984) describes Western culture in terms of three
periods:
(1) Plato’s Man;
(2) Descartes’s Man;
(3) Turing’s Man.
Each is the result of what Bolter
describes as a “defining technology.”
Artificial Intelligence (AI)



The view that only humans are rational is currently
challenged on two separate fronts:
1. recent research in animal intelligence suggests that many
primates, dolphins, and whales are capable of demonstrating
skills we typically count as rational (while many humans are not, or are no longer, able to demonstrate those skills);
2. recent work in artificial intelligence (AI) and cognitive science
has shown that certain forms of "rational activity" can also be
attributed to computers.

In fact, questions that have surfaced in AI research have already
caused some philosophers and scientists to reconsider our
definitions of notions such as rationality, intelligence, knowledge,
and learning.
Can Machines Think, and Are They Intelligent?



In 1950, Alan Turing posed a question that gave rise to what has come to be known as the Turing Test (a minimal sketch of the test’s setup appears below).
HAL (2001: A Space Odyssey) seemed
to exhibit some intelligence.
Deep Blue defeated Garry Kasparov.
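The structure of the test can be sketched informally. The respondent functions and the crude guessing rule below are hypothetical placeholders, intended only to show the imitation-game setup in which an interrogator converses with two unseen parties and tries to identify the machine.

```python
# Illustrative sketch of the imitation-game setup behind the Turing Test.
# The respondent functions and guessing rule are hypothetical placeholders;
# only the structure matters: an interrogator converses with two unseen
# parties and must decide which one is the machine.
import random

def human_respondent(question: str) -> str:
    return "I'd have to think about that for a moment."  # stand-in for a person

def machine_respondent(question: str) -> str:
    return "I'd have to think about that for a moment."  # stand-in for a program

def interrogate(respondent, questions):
    return [respondent(q) for q in questions]

questions = ["What is your favourite memory?", "Explain a joke you like."]
labels = {"A": human_respondent, "B": machine_respondent}
transcripts = {label: interrogate(fn, questions) for label, fn in labels.items()}

# The interrogator sees only the transcripts.  If they give no basis for
# telling A from B, the guess is no better than chance, and the machine passes.
print(transcripts["A"] == transcripts["B"])            # True here: the answers are indistinguishable
print("Interrogator guesses the machine is", random.choice(["A", "B"]))
```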
Expanding the Sphere of Moral
Obligation because of AI


Do we need to expand the sphere of
moral obligation to include “softbots”
and “information entities”?
Can computers be morally responsible
agents?
Should We Continue Research in AI?

John Weckert asks:
Can we, or do we want to, live with artificial
intelligences? We can happily live with fish that swim
better than we do, hawks that see and fly better, and
so on, but do we want things that can reason better
to be in a different and altogether more worrying
category….What would such [developments mean
for] our view of what it is to be human?
Nanotechnology





Nanotechnology, a term coined by K. Eric Drexler in the 1980s, is a branch of engineering dedicated to the development of extremely small electronic circuits and mechanical devices built at the molecular level of matter.
Current microelectromechanical systems (MEMS), tiny devices such as the sensors embedded in semiconductor chips used in airbag systems to detect collisions, are one step away from the molecular machines envisioned in nanotechnology (a toy sketch of such a collision-detection rule appears after this list).
A primary goal of this technology is to provide us with tools to work at
the molecular and atomic levels that are analogous to what we have at
the macroworld level.
Drexler (1991) believes that developments in this field will result in
computers at the nano-scale, no bigger in size than bacteria, called
nanocomputers.
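As noted above, the airbag example can be made concrete with a toy sketch of the kind of threshold rule a MEMS collision sensor might apply. The readings, threshold, and debounce values are invented for illustration; real airbag controllers use far more elaborate criteria.

```python
# Toy sketch of the idea behind a MEMS collision sensor in an airbag system:
# deploy when measured deceleration exceeds a threshold for long enough.
# The readings, threshold, and duration here are hypothetical illustrations.

DECELERATION_THRESHOLD_G = 20.0   # assumed trigger level, in g
MIN_CONSECUTIVE_SAMPLES = 3       # assumed debounce: require a sustained spike

def should_deploy(samples_g):
    """Return True if the deceleration stays above the threshold long enough."""
    run = 0
    for g in samples_g:
        run = run + 1 if g >= DECELERATION_THRESHOLD_G else 0
        if run >= MIN_CONSECUTIVE_SAMPLES:
            return True
    return False

print(should_deploy([1.0, 1.1, 0.9, 1.2]))                  # normal driving -> False
print(should_deploy([1.0, 25.0, 30.0, 28.0, 25.0, 5.0]))    # sharp sustained spike -> True
```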
Nanotechnology (continued)





To appreciate the scale of future nanocomputers, imagine a mechanical or electronic device whose dimensions are measured in nanometers (billionths of a meter, or units of 10⁻⁹ meter).
Nanocomputers could have "mass storage devices that can store
more than 100 billion bytes in a volume the size of a sugar
cube.”
Merkle (2001) predicts that these nano-scale computers will be able to “deliver a billion billion instructions per second – a billion times faster than today’s desktop computers.” (A short worked check of these figures appears after this list.)
Although they are still in an early stage of research and development, some primitive nano-devices have already been tested.
In 1989, physicists at the IBM Almaden Laboratory
demonstrated the feasibility of development in nanotechnology
by manipulating atoms to produce the IBM logo.
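Here is the short worked check referred to above; it simply restates the figures quoted on this slide (a nanometer as 10⁻⁹ meter, 100 billion bytes per sugar-cube-sized volume, and 10¹⁸ versus roughly 10⁹ instructions per second).

```python
# Worked check of the scale figures quoted above (no new data, just the slide's numbers).
nanometer_in_meters = 1e-9          # "billionths of a meter, or units of 10^-9 meter"
print(f"1 nanometer = {nanometer_in_meters} m")

sugar_cube_bytes = 100e9            # "more than 100 billion bytes" in a sugar-cube-sized volume
print(f"Sugar-cube storage: {sugar_cube_bytes / 1e9:.0f} GB")

nano_ips = 1e18                     # "a billion billion instructions per second" (Merkle 2001)
desktop_ips = 1e9                   # baseline implied by "a billion times faster"
print(f"Speedup over that desktop baseline: {nano_ips / desktop_ips:.0e}x")
```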
Pros of Nanotechnology




Nano-particles inserted into bodies could diagnose diseases and
directly treat diseased cells.
Doctors could use nanomachines (or nanites) to make microscopic repairs on areas of the body that are difficult to operate on with conventional surgical tools. (With nanotechnology tools, the life signs of a patient could also be better monitored.)
With respect to the environment, nanites could be used to clean
up toxic spills, as well as to eliminate other kinds of
environmental hazards.
Nanites could also dismantle or "disassemble" garbage at the
molecular level and recycle it again at the molecular level via
"nanite assemblers."
Worries about Nanotechnology






Since all matter (objects and organisms) could theoretically be disassembled and
reassembled by nanite assemblers and disassemblers, what would happen if
strict "limiting mechanisms" were not built into those nanites?
If nanomachines were created to be self-replicating and if there was a problem
with their limiting mechanisms, they could multiply endlessly like viruses.
Nanite assemblers and disassemblers could be used to create weapons, or nanites themselves could be used as weapons. As Chen (2002) points out, guns,
explosives, and electronic components of weapons could all be miniaturized.
Privacy and freedom could be further eroded because governments, businesses,
and ordinary people could use molecular sized microphones, cameras, and
homing beacons to track and monitor people.
People with microscopic implants would be able to be tracked using Global
Positioning Systems (GPS), just as cars can be now.
On the one hand, children could never be lost again; on the other hand, we
would likely have very little privacy given that our movements could be tracked
so easily by others.
Ethical Aspects of
Nanotechnology




Already there are controversies about bionic chip
implants made possible by nanotechnology.
Weckert points out that while "conventional" implants
in the form of devices designed to "correct"
deficiencies have been around and used for some
time, their purpose has been viewed as one of
assisting patients in their goal of achieving "normal"
states of vision, hearing, heartbeat, etc.
These are described as “therapeutic implants.”
Future chip implants, in the form of "enhancement implants," could be designed to make a normal person super-human.
Implants Involving
Nanotechnology




Some frame the controversy about implants in terms
of an “enhancement vs. therapy” debate.
Moor (2003) points out that this distinction might
suggest the basis for a policy that would limit
unnecessary implants.
He also notes that because the human body has
“natural functions,” some will argue that implanting
chips in a body is acceptable as long as these
implants “maintain and restore the body’s natural
functions.”
Although Moor does not argue for a policy along the lines of a therapeutic-enhancement distinction, he believes that such a policy would appeal to many.
Implants (Continued)





According to Moor (2004):
Pacemakers, defibrillators, and bionic eyes that
maintain and restore natural bodily functions are
acceptable.
But giving patients added arms or infrared vision
would be prohibited.
It would endorse the use of a chip that reduced
dyslexia but would forbid the implanting of a Deep Blue chip for superior chess play.
It would permit a chip implant to assist the memory of Alzheimer's patients but would not license the implanting of a miniature digital camera that would record and play back what a person had just seen.
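To make the shape of such a policy explicit, here is a hedged sketch of the decision rule suggested by the examples above. The data model and flag name are invented for illustration; the code is not attributed to Moor, who discusses the distinction only in prose.

```python
# Sketch of the therapy-vs-enhancement rule suggested by the examples above.
# The data model is hypothetical; Moor discusses the distinction, not this code.

def permitted(implant: dict) -> bool:
    """Permit an implant only if it maintains or restores a natural bodily function."""
    return implant["restores_natural_function"]

implants = [
    {"name": "pacemaker",                           "restores_natural_function": True},
    {"name": "bionic eye",                          "restores_natural_function": True},
    {"name": "dyslexia-reducing chip",              "restores_natural_function": True},
    {"name": "memory aid for Alzheimer's patients", "restores_natural_function": True},
    {"name": "infrared vision implant",             "restores_natural_function": False},
    {"name": "chess-playing chip",                  "restores_natural_function": False},
    {"name": "added arms",                          "restores_natural_function": False},
]

for implant in implants:
    verdict = "permitted" if permitted(implant) else "prohibited"
    print(f"{implant['name']}: {verdict}")
```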
Implants (Continued)




Clear policies and laws will need to be framed as more and more bionic parts become available.
Some now worry that with bionic parts, humans and
machines could soon begin to merge into cyborgs.
Kurzweil (1999) has suggested that in the near
future, the distinction between machines and humans
may no longer be useful.
Moor (2004) believes the question we must
continually reevaluate is “not whether we should
become cyborgs, but rather what sort of cyborgs
should we become.”
Implants (Continued)


We need to assess some of the
advantages and disadvantages of bionic
implants of the future.
Weckert (2002) invites us to consider
the following question:
“Do we want to be ‘superhuman’ relative to our
current abilities with implants that enhance our
senses, our memories, and our reasoning ability?
What would such implants do to our view of what it
is to be human?”
Should Research in
Nanotechnology Continue?




Weizenbaum (1984) has argued that there are certain kinds of
computer science research that should not be undertaken –
specifically, research that can easily be seen to have
"irreversible and not entirely unforeseeable side effects.“
Joy (2000) has suggested that because developments in
nanotechnology are threatening to make us an "endangered
species," the only realistic alternative is to limit the development
of that technology.
Merkle (2001) disagrees with Joy, arguing that if research in
nanotechnology is prohibited, or even restricted, it will be done
underground.
If that happens, Merkle worries that nanotechnology research
would not be regulated by governments and social policies.
Should Research Continue in
Nanotechnology?



Weckert (2001) argues that, all things being equal,
potential disadvantages that can result from research
in a particular field are not in themselves sufficient
grounds for halting research altogether.
He suggests that there should be a presumption in
favor of freedom in research.
Weckert also argues, however, that it should be
permissible to restrict or even forbid research where
it can be clearly shown that harm is more likely than
not to result from that research.
Should Research Continue in
Nanotechnology?

Weckert offers us the following strategy:
If a prima facie case can be made that some research will likely
cause harm...then the burden of proof should be on those who
want the research carried out to show that it is safe.

He goes on to say, however, that there should be:
...a presumption in favour of freedom until such time as a prima
facie case is made that the research is dangerous. The burden
of proof then shifts from those opposing the research to those
supporting it. At that stage the research should not begin or be
continued until a good case can be made that it is safe.
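As an illustrative restatement (Weckert states the strategy in prose, not as code), the burden-shifting procedure quoted above can be written as a small decision function; the parameter names are invented for illustration.

```python
# Sketch of the burden-of-proof strategy quoted above (illustrative only;
# the parameter names are hypothetical, not Weckert's own formalism).

def may_proceed(prima_facie_case_of_harm: bool, good_case_for_safety: bool) -> bool:
    """Presume freedom of research; once a prima facie case of harm is made,
    the burden shifts and the research may proceed only if shown to be safe."""
    if not prima_facie_case_of_harm:
        return True                    # presumption in favour of freedom
    return good_case_for_safety        # burden now on the research's supporters

print(may_proceed(False, False))  # True: no harm case made, research presumed free
print(may_proceed(True, False))   # False: harm case made, safety not yet shown
print(may_proceed(True, True))    # True: burden met by those supporting the research
```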
Future Considerations Involving
Nanotechnology





A model similar to the one used in the Human Genome Project
might be appropriate here.
Before work was authorized to proceed on that project, certain
ethical, legal, and social implications (ELSI) had to be addressed
and formal ELSI guidelines established.
Genomic research on that project was able to continue only
after the ELSI requirements were in place.
A similar set of ethical guidelines could help direct research in
nanocomputing and could guide computer professionals
currently engaged in research in that field.
All of us – as members of the human race – would benefit from
clear guidelines that address moral issues involving future
developments in nanocomputing.