INDUSTRY: Kahaner Report: Tele-existence work at RCAST/U-Tokyo
92.03.09
========
From: rick@cs.arizona.edu (Rick Schlichting)
Subject: INDUSTRY: Kahaner Report: Tele-existence work at RCAST/U-Tokyo
Date: Mon, 9 Mar 92 22:19:59 MST
Crossposted from comp.research.japan (thanks, Rick):
[Dr. David Kahaner is a numerical analyst on sabbatical to the
Office of Naval Research-Asia (ONR Asia) in Tokyo from NIST. The
following is the professional opinion of David Kahaner and in no
way has the blessing of the US Government or any agency of it. All
information is dated and of limited life time. This disclaimer should
be noted on ANY attribution.]
[Copies of previous reports written by Kahaner can be obtained using
anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports.]
From: David K. Kahaner, ONR Asia [kahaner@cs.titech.ac.jp]
Re: Tele-existence work at RCAST/U-Tokyo (S. Tachi)
9 March 1992 This file is named "tachi.lab"
ABSTRACT. Tele-existence work at S. Tachi's University of Tokyo lab is
described.
INTRODUCTION.
In a recent report (19 Feb 1992, 3d-2-92.1) I described a paper by
Professor S. Tachi on tele-existence. His work seemed very interesting,
so I decided to visit and see it in person.
Professor Susumu Tachi
Research Center for Advanced Science and Technology
University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo 153 Japan
Tel: +81-3-3481-4467, Fax: +81-3-3481-4469, -4580
Email: TACHI@TANSEI.CC.U-TOKYO.AC.JP
Prof Tachi moved to U-Tokyo several years ago from the Mechanical
Engineering Laboratory in Tsukuba. MEL is run by AIST (Agency of
Industrial Science and Technology), which is part of MITI. About ten
years ago Tachi also spent a year at MIT.
Tachi began our conversation by showing me a chart describing the
different threads of research that are now converging to be part of what
is currently called Artificial (or Virtual) Reality, AR (VR). Below I
have extracted the text and reorganized it to better fit in the format
of this report. Looking backward it is fairly obvious that these
projects shared many common elements, although it is doubtful if the
researchers themselves thought in these terms.
Virtual console
(mouse, 3D mouse, virtual window,...)
Real time interactive 3D computer graphics
(computer graphics, 3D CG,...)
Virtual products
(CAD, 3D CAD, interactive 3D CAD,...)
Cybernetic interface
(man-machine human interface,...)
Responsive environment
(3D video/holography and art, interactive video and art,...)
Real time interactive computer simulation
(computer simulation, real time computer simulation,...)
Communication with a sensation of presence
(telephone, tele-conference,...)
Tele-existence/telepresence
(teleoperation, telerobotics,...)
Research into some of these topics began as early as the 1960s, for
example with Ivan Sutherland's computer graphics projects. The early
1980s saw rapid growth due to work by Furness, Krueger, Sheridan, and
others. Currently in the US, centers of excellence are at the
University of Washington, MIT, and UNC in the academic world, NASA and
NOSC within the US government, and at several small companies that are
marketing products. (This list is meant to be suggestive, not
exhaustive.)
Tachi's work was also mentioned in my report on virtual reality, vr.991,
9 Oct 1991. That report references a conference held July 1991, on AR
and Tele-existence. The proceedings of this conference have just
appeared and I have one copy. It is entirely in English and I recommend
it highly to anyone interested in getting a quick survey of the current
research activities in Japan. The proceedings also contain an
interesting panel discussion titled, "Will VR Become a Reality?" Titles,
and abstracts from this conference are appended to this report.
Finally, note that a newsgroup, "sci.virtual-worlds" is operating. Tachi
showed me the list of recent postings, and it is clear that this is a
very active communication vehicle.
Last year, I wrote a report summarizing a survey from the Japanese
Economic Planning Agency on technology through the year 2010 (see,
2010.epa, 27 Sept 1991). VR was one of the topics discussed there. In
the complete Japanese report more explanatory detail was presented.
The EPA estimates that practical realization will be sometime around
2020 (this seems further downstream than I would estimate). EPA
continues "Regarding the comparison of various countries' R&D efforts at
the present juncture in the VR field, both Japan and the US are actively
working on research and development of the system. At this particular
point, however, the US is more actively pursuing the system research. In
the US, for instance, the aerospace industry's R&D projects and
national-level projects in support of the industry, are conducting
truly large-scale simulations and are constructing excellent databases.
"Key technologies requiring breakthrough will be those used in
development of 3D simulation models, supercomputer application
technology, and self-growth-database technology.
"One form of social constraint standing in the way of practical
utilization of VR will be the effects of its possible application to
transportation systems as a part of the social infrastructure [sic].
Moreover, it will be influenced by the general public's perception and
value judgments regarding the system. Economic constraints will affect
business's attempts to establish a market and to drive costs down.
Moreover, the need for securing technical specialists in the software
development field and the resultant shortage in R&D funds, as well as
other difficulties involved in recruiting R&D personnel must be dealt
with.
"VR's market scale is estimated as reaching approximately the 1 trillion
yen level. Accompanying this market, there will be an increase in the
number of related industrial firms' research labs to as many as 100.
Software applications fields will be extensive, and a large number of
R&D divisions will be pursuing product research and development targeted
for various social strata.
"Positive impacts created by VR on the industrial economy will include
the emergence of a new simulation industry (which probably will maintain
supercomputers), and revitalization of computer software-houses,
computer industries, entertainment industry, and the aerospace industry.
The secondary effects will be experienced by the information industry,
the general communication-related industry, the publishing industry,
newspaper and magazine businesses, and by TV and radio broadcasting.
"Negative impacts will be felt by industries which have tended to hold
on to hardware-oriented products."
The point of this note is not to summarize VR work generally, but only
to describe the specific directions being taken at one lab. Tachi uses
the term "Tele-existence" to denote the technology which enables a human
to have a real sensation of being at another place, and to interact
with the remote environment. The latter can be real or
artificial. This is clearly an extension of the sensation we have when we
use a joystick to move around the figures in a video-game. In the US,
the term is "Telepresence".
Related to this is work to amplify human muscle power and sensing
capability by using machines, while keeping human dexterity and the
sensation of direct operation. This work stems at least from the 60s
with developments of exoskeletons that could be worn like a garment, yet
provide a safe but strong environment. Those projects were not
successful; damage to the garment would endanger the wearer. Further,
the technology at that time did not allow enough room for both the human
and the equipment needed to control the skeleton.
Another idea was that of teleoperation or supervisory control. In this,
a human master moves and a "slave" (robot) is synchronized.
Tele-existence extends this notion, in that the human really should have
the sensations that he/she is performing the functions of the slave.
This occurs if the operator is provided with a very rich sensation of
presence which the slave has acquired.
In Tachi's lab, the tele-existence master-slave system consists of a
master system with visual and auditory presence sensation, a computer
system for control, and an anthropomorphic slave robot mechanism with an
arm having 7 degrees of freedom and a locomotion mechanism (caterpillar
track). The operator's head, right arm, right hand, and other motions,
such as those of the feet, are measured in real time by a system
attached to the master. This information is sent to four MS-DOS
computers (33-MHz 386 machines with coprocessors), and each computer
generates commands to the corresponding
position of the slave. A servo controller governs the motion of the
slave. A six-axis force sensor on the wrist joint of the slave
measures the force and torque exerted on contact with an object, and
this measured signal is fed back to the computer in charge of arm
control via A to D converters. Force exerted at the hand when grasping
an object is also measured by a force sensor installed on the link
mechanism on the hand, and fed back to the appropriate computer via
another A to D converter. A stereo visual and auditory input system is
mounted on the neck of the slave, and this is sent back to the master
and displayed on a stereo display system in the helmet of the operator.
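The feedback structure just described can be sketched in a few lines of Python. Everything here is illustrative: the class and method names are invented, and the real system distributes this loop across four MS-DOS machines plus a dedicated servo controller, with forces returned through A/D converters.

```python
class Master:
    """Stands in for the operator-side measurement and feedback hardware."""
    def __init__(self, pose):
        self.pose = pose          # measured joint angles of the operator
        self.felt_force = None
    def read_joint_angles(self):
        return self.pose
    def apply_force_feedback(self, wrench):
        self.felt_force = wrench  # would drive a force display on the master

class Slave:
    """Stands in for the 7-degree-of-freedom robot arm."""
    def __init__(self, pose):
        self.pose = pose
    def read_joint_angles(self):
        return self.pose
    def send_joint_command(self, command):
        self.pose = command       # assume the servo tracks the command
    def read_wrist_force(self):
        return [0.0] * 6          # six-axis force/torque sensor reading

def control_cycle(master, slave, gain=0.5):
    """One tick: move the slave toward the master's pose, feed force back."""
    target = master.read_joint_angles()
    current = slave.read_joint_angles()
    # Simple proportional command toward the master's measured pose.
    command = [c + gain * (t - c) for t, c in zip(target, current)]
    slave.send_joint_command(command)
    # Measured contact forces return to the operator side each cycle.
    master.apply_force_feedback(slave.read_wrist_force())
    return command
```

Run repeatedly, the slave pose converges on the master pose while the force path stays closed, which is the essential shape of the loop described above.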
Many of the characteristics of the robot are similar to those of a
human, for example the dimensions and arrangement of the degrees of
freedom, the motion range of each degree of freedom, and the speed of
movement, and the motors are designed so that the appearance of the
robot arm resembles a human's.
Measured master movements are also sent to a Silicon Graphics Iris
workstation, which generates two shaded graphic images that are applied
to the 3D display via superimposers. The measured master movements are
used to change, in real time, the viewing angle, the distance to the
object, and the relation between the object and the hand.
The operator sees the 3D virtual environment in his/her view which
changes with movement. Interaction can be either with the real
environment which the robot observes or with the virtual environment
which the computer generates.
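As a rough illustration of how measured head motion can drive the stereo viewpoint, the sketch below computes left- and right-eye positions from a head position and yaw angle. This is generic stereo-rendering geometry, not Tachi's actual implementation; the function name and the 65 mm interpupillary distance are assumptions.

```python
import math

def eye_positions(head_pos, yaw, ipd=0.065):
    """Left/right eye positions for a stereo image pair.

    head_pos: (x, y, z) of the head; yaw: rotation about the vertical
    axis in radians (0 = looking down -z); ipd: interpupillary distance
    in meters (0.065 is a typical assumed value, not a measured one).
    """
    # Unit "right" direction in the horizontal plane for the given yaw.
    rx, rz = math.cos(yaw), -math.sin(yaw)
    x, y, z = head_pos
    half = ipd / 2.0
    left = (x - rx * half, y, z - rz * half)
    right = (x + rx * half, y, z + rz * half)
    return left, right
```

Rendering one image from each eye position, every time the tracker reports a new head pose, is what makes the virtual view change with the operator's movement.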
There are many details of the system that are carefully explained in
Tachi's papers dating back to the mid 1980s and need not be repeated here.
I wondered about the choice of computer systems. Tachi commented that he
preferred DOS to Unix for the control computers because DOS made it
easier to process real-time interrupts. On the other hand, if
workstation performance is high enough, interrupts can be handled in
"virtually" real time. The SG is fast enough to do the needed graphics
as long as the operator does not move his/her head too rapidly.
Clearly, workstation performance is an important consideration in
real-time computer graphics.
Tachi demonstrated the system by sitting in a special "barber chair",
putting on the large helmet, and inserting his right arm into a movable
sling containing a grasping device that approximates the robot's arm.
His left hand holds a joystick that controls the robot's locomotion:
forward, back, right, left, and rotation. Once so configured, he proceeded to
make the robot stack a set of three small cubes.
I tried it next. The helmet is large and bulky, but is equipped with a
small fan so that there is good air circulation. It is also heavy, but
is carried on a link mechanism that cancels all gravitational forces
(though not inertia), somewhat like wearing the helmet underwater. The color
display (one for each eye) is composed of two six-inch LCDs (720x240
pixels). Resolution is good, better than I expected, but not crystal
clear. Tachi explained that humans obtain higher apparent resolution by
moving their heads when looking at objects, and that the same effect
works in the helmet. He also explained that the 3D view has the same
spatial relation as by direct observation (this is one place where
workstation performance is needed), and tuning of the system in this
area is one of the things that Tachi and his colleagues have been
working on for almost ten years. Tachi claims that operators can use the
system for several hours without tiring or nausea; I am not sure if I
could last that long. Nevertheless, on my first try I was able to move
the robot to within grasping distance of the table, lift the three small
blocks, and stack them without dropping any. Training time: zero.
This is one of the most advanced experiments of its kind in Japan.
Tachi's lab is very close to downtown Tokyo and would be easy to reach.
Please refer to the reports referenced above, as well as the references
below for further descriptions of this project.
"Tele-existence (I): Design and Evaluation of a Visual Display with
Sensation of Presence", S. Tachi, et al, in Proceedings of RoManSy'84,
Udine, Italy, June 26-29 1984.
"Tele-existence in Real World and Virtual World", S. Tachi, et al, in
ICAR'91 (Fifth International Conference on Advanced Robotics), 19-22
June 1991, Pisa, Italy.
-----------------------------------------------------------------
International Conference on Artificial Reality and Tele-Existence
9-10 July 1991, Tokyo Japan
Titles, authors, and abstracts of papers
Telepresence and Artificial Reality: Some Hard Questions
Thomas B. Sheridan (Massachusetts Institute of Technology, Cambridge, MA
02139, USA)
This paper proposes three measurable physical variables which
determine telepresence and virtual presence. It discusses
several aspects of human performance which might be
(differentially) affected by these forms of presence. The
paper suggests that distortions or filters in the afferent and
efferent communication channels should be further tested
experimentally for their effects on both presence and
performance. Finally it suggests models by which to
characterize both kinematic and dynamic properties of the
human-machine interface and how they affect both sense of
presence and performance.
A Virtual Environment for the Exploration of Three Dimensional Steady Flows
Steve Bryson (Sterling Software, Inc.)
Creon Levit (NASA Ames Research Center)
We describe a recently completed implementation of a virtual
environment for the analysis of three dimensional steady
flowfields. The hardware consists of a boom-mounted, six
degree-of-freedom head position sensitive stereo CRT system for
display, a VPL dataglove (tm) for placement of tracer particles
within the flow, and a Silicon Graphics 320 VGX workstation for
computation and rendering. The flowfields that we visualize
using the virtual environment are velocity vector fields
defined on curvilinear meshes, and are the steady-state
solutions to problems in computational fluid dynamics. The
system is applicable to the visualization of other vector
fields as well.
Visual Thinking in Organizational Analysis
Charles E. Grantham (College of Professional Studies, University of San
Francisco, San Francisco, CA, USA)
The ability to visualize the relationships among elements of
large complex databases is a trend which is yielding new
insights into several fields. I will demonstrate the use of
'visual thinking' as an analytical tool to the analysis of
formal, complex organizations. Recent developments in
organizational design and office automation are making the
visual analysis of workflows possible. An analytical mental
model of organizational functioning can be built upon a
depiction of information flows among work group members.
The dynamics of organizational functioning can be described in
terms of six essential processes. Furthermore, each of these
sub-systems develops within a staged cycle referred to as an
enneagram model. Together these mental models present a visual
metaphor of healthy function in large formal organizations;
both in static and dynamic terms. These models can be used to
depict the 'state' of an organization at points in time by
linking each process to quantitative data taken from monitoring
the flow of information in computer networks.
Virtual Reality for Home Game Machines
Allen Becker (President, Reflection Technology, Waltham, Massachusetts,
USA)
No abstract
What Should You Wear to an Artificial Reality?
Myron W Krueger (Artificial Reality Corporation, Box 786, Vernon, CT
06066)
No abstract
Bringing Virtual Worlds to the Real World: Toward A Global Initiative
Robert Jacobson (Human Interface Technology Laboratory, Washington
Technology Center, FJ-15, c/o University of Washington, Seattle, WA 98195
USA)
Virtual worlds technology promises to greatly expand both the
numbers of persons who use computers and the ways in which they
use them, and the capability of the technology to satisfy these needs.
Independent development of virtual worlds technology has been
the norm for at least three decades, with researchers and
developers working privately to build unique virtual-worlds
systems. The result has been redundancy and a slow pace of
improvement in the basic technology and its applications. This
paper proposes a "global initiative" to coordinate and to some
extent unify R&D activities around the world, the quicker to
satisfy an eager market (that may not, however, stay eager for
long) and meet genuine human needs.
Virtuality (tm) - The World's first production Virtual Reality Workstation
Jonathan D. Waldern (W. Industries Ltd., ITEC House, 26-28 Chancery
St., Leicester, LE1 5WD UK.)
The history of the development of W. Industries Virtuality (tm)
technology is covered. Virtuality (tm) technology comprises
three key low cost components that enable W. Industries to
offer Turnkey Virtual Reality products to its customers. The
key elements of the technology are discussed including the
Expality (tm) computer, Animette (tm) simulation software and
Visette (tm) visor. Virtuality products are housed in two
separate enclosures: the Stand-Up (SU) and the Sit-Down (SD). A
variety of tools, such as the Sensor and Feedback gloves, enable
user interaction with the virtual environments. Virtuality (tm)
products are used in commercial, scientific and leisure
applications.
Tele-Existence - Toward Virtual Existence in Real and/or Virtual Worlds
Susumu Tachi (RCAST, The University of Tokyo, 4-6-1 Komaba, Meguro-ku,
Tokyo 153, Japan)
Tele-existence is a concept named for the technology which
enables a human being to have a real time sensation of being at
a place other than the place where he or she actually exists,
and is able to interact with the remote and/or virtual
environment. He or she can tele-exist in a real world where
the robot exists or in a virtual world which a computer has
generated. It is possible to tele-exist in a combined
environment of real and virtual. Artificial reality or virtual
reality is a technology which presents a human being with a
sensation of being involved in a realistic virtual environment,
other than the environment where he or she really exists, with
which he or she can interact. Thus tele-existence
and artificial reality are essentially the same technology
expressed in different manners. In this paper, the concept of
tele-existence is considered, and an experimental tele-
existence system is introduced, which enables a human operator
to have the sensation of being in a remote real environment
where a surrogate robot exists and/or virtual environment
synthesized by a computer.
Visualization Tool Applications of Artificial Reality
Michitaka Hirose (Department of Mechano-Informatics, the University of
Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113, Japan)
In the early stages of the history of computer development,
information processing capabilities of computers were not very
large, and only very limited capabilities such as text handling
were assigned to peripheral I/O devices. Since then, efforts
to make computers more powerful and intelligent have progressed
continually. As a result, a new and serious problem has arisen
as to how to interact with the advanced computational results.
Namely, I/O or peripheral devices have become the "bottlenecks"
of information processing. Thus, recently, the need for
communication channels with wider bandwidths between the human
operator and the computer has arisen. For example, in the near
future continuous media, such as live video or audio, will
become dominant in information processing, rather than discrete
media such as text consisting of ASCII codes.
Artificial reality research is aiming in the same direction: to
give computers a greater capability of displaying information
vividly and to provide communication channels with very wide
bandwidths between the human operator and the computer.
Consequently, visualization is one of the most important
application fields of artificial reality.
Several artificial reality research projects focused on
visualization are being conducted in the Systems Engineering
Laboratory at the University of Tokyo. In this paper, the major
projects are introduced. First, research to develop a novel
I/O device for interacting with computer-generated virtual 3D
space is introduced. Second, as one application, software
visualization research is introduced. The key point of this
research is how to give shape in virtual space to software,
which has logical characteristics but originally no spatial
shape. Third, the capability of experiencing non-existent
environments through artificial reality is discussed. For
example, a world with a very low speed of light, wherein the
Einstein effect is visible, can be demonstrated. Finally, the
importance of artificial reality in the field of visualization
is re-emphasized.
Virtual Work Space for 3-Dimensional Modeling
Makoto Sato (Precision and Intelligence Laboratory, Tokyo Institute of
Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 227, Japan)
To develop a human interface for three-dimensional modeling it
is necessary to construct a virtual work space where we can
manipulate object models directly, just as in real space. In
this paper, we propose a new interface device SPIDAR (SPace
Interface Device for Artificial Reality) for this virtual work
space. SPIDAR can measure the position of a finger and give
force feedback to a finger when it touches the virtual object.
We construct a virtual work space using SPIDAR, and we conduct
experiments to show the effect of force feedback on the
accuracy of work in the virtual work space.
Force Display for Virtual Worlds
Hiroo Iwata (Institute of Engineering Mechanics, University of Tsukuba,
Tsukuba 305, Japan)
As virtual reality becomes more widespread, the need for
force feedback is increasingly recognized. The major objective
of our research is the development of force-feedback devices
for virtual worlds. The following three categories of force
display have been developed: (1) Desktop Force Display: a
compact 9-degree-of-freedom manipulator has been developed as a
tactile input device. The manipulator applies reaction forces
to the fingers and palm of the operator. The generated forces
are calculated from a solid model of the virtual space. The
performance of the system is exemplified in the manipulation of
virtual solid objects such as a mockup for industrial design.
(2) Texture Display: active touch of an index finger is
simulated by a small 2-DOF master manipulator. The uneven
surface of a virtual object is presented by controlling the
direction of the reaction force vector. (3) Force Display for
Walkthrough: this system creates an illusion of walking through
large-scale virtual spaces such as buildings or urban spaces.
The walker wears omnidirectional sliding devices on the feet,
which generate the feel of walking while the position of the
walker is fixed in the physical world. The system provides the
feeling of the uneven surface of a virtual space, such as a
staircase. While the walker goes up or down stairs, the feet
are pulled by strings in order to apply reaction force from the
surface.
Psychophysical Analysis of the "Sensation of Reality" Induced by a Visual
Wide-Field Display
Toyohiko Hatada, Haruo Sakata, Hideo Kusaka (Auditory and Visual
Information Processing Research Group, Broadcasting Science Research
Labs at the Japan Broadcasting Corp (NHK), 1-10-11, Kinuta,
Setagaya-ku, Tokyo 157, Japan)
This report summarizes research on the psychological effects
induced by certain parameters of visual wide-field display.
These effects were studied by means of three different series
of experiments. The first experiment series evaluated the
subjectively induced tilt angle of the observer's coordinate
axes when presented with a tilted stimulus display pattern.
The second experiments were to determine the observer's eye and
head movements for information acquisition when presented with
a visual wide-field display. The third series of experiments
was to determine, by means of a subjective 7-step evaluation
scale, the viewing conditions under which the display induces a
sensation of reality in the observer. From the results of
these experiments it was concluded that a visual display with
horizontal viewing angles ranging from 30 degrees to 100
degrees and vertical angles from 20 degrees to 80 degrees
produces psychological effects that give a sensation of
reality.
Undulation Detection of Virtual Shape by Fingertip using Force-Feedback Input
Device
Yukio Fukui and Makoto Shimojo (Industrial Products Research Institute,
1-1-4, Higashi, Tsukuba, Ibaraki 305 Japan)
To trace along the contours of virtual shapes, a force-feedback
input device was developed. Two kinds of experiments using the
device were carried out, and the visual and tactile
sensitivities to undulation on a curved contour were obtained.
The visual sensitivity was superior to the tactile one.
However, tactile sensing by the force-feedback mechanism, in
addition to visual sensing, brought about a more reliable result.
--------------------------------END OF REPORT---------------------------
From: thinman@netcom.com (Lance Norskog)
Subject: TECH: Object collision & coprocessors
Date: Mon, 9 Mar 92 20:29:38 PST
The collision detection problem is one of the few "cleavage planes"
where VR software can be broken apart into co-processors.
Given a separate copy of the world model, the simulator processor
can run the simulation a small number of time ticks ahead,
processing the known changes (objects flying around in straight lines)
and saving the state for the unfinished computation of each time tick.
The deferred computation for time T would then be finished at time T
given the latest mouse or glove input. There is no great communication
bandwidth needed between the simulator and the main processor;
communication bandwidth is the Achilles' heel of coprocessor-based
systems.
The simulator can even apply predictive (Kalman) filtering to the
mouse and glove motions and tentatively do the collision prediction
anyway. If the prediction for time T is within N% of the actual input
for time T, you just go ahead and use the predicted version instead.
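That accept-or-recompute rule can be sketched briefly in Python. The linear extrapolation below stands in for the Kalman filter mentioned above, and all names and the 5% default tolerance are illustrative:

```python
def predict(history):
    """Linear extrapolation from the last two input samples."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def step(history, actual, tolerance=0.05):
    """Return the input value to simulate with at time T.

    Use the speculative prediction if it falls within `tolerance`
    (as a fraction) of the actual input, so the precomputed state
    can be kept; otherwise fall back to the actual input, forcing
    the deferred computation for time T to be redone."""
    guess = predict(history)
    scale = max(abs(actual), 1e-9)   # avoid division by zero
    used = guess if abs(guess - actual) / scale <= tolerance else actual
    history.append(actual)
    return used
```

When the glove moves smoothly the prediction is accepted nearly every tick and the coprocessor's work is never wasted; only abrupt motions force a recompute.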
Another major cleavage plane is sound generation. Note that this
can also be generated and partially mixed down in advance using
the same techniques as the simulation processor.
Lance Norskog
From: douglas@atc.boeing.com
Subject: CONF: DIAC-92 Program, Berkeley, CA, May 1992
Date: Mon, 9 Mar 92 12:24:19 PST
Enclosed is the *final* DIAC-92 program. I hope that most of you will
join me for two days in *sunny* Berkeley this coming May. Also any
distribution of this program is very much appreciated! Thanks.
-- Doug
------------------------------------------------------------
Are computers part of the problem or ... ?
DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING
DIAC-92 Symposium Berkeley, California U.S.A
May 2 - 3, 1992 8:30 AM - 5:30 PM
The DIAC Symposia are biannual explorations of the social implications of
computing. In previous symposia such topics as virtual reality, high tech
weaponry, computers and education, affectionate technology, computing and
the disabled, and many others have been highlighted. Our fourth DIAC
Symposium, DIAC-92, offers insights on computer networks, computers in
the workplace, national R&D priorities and many other topics.
DIAC-92 will be an invigorating experience for anyone
with an interest in computers and society.
May 2, 1992
Morris E. Cox Auditorium
100 Genetics and Plant Biology Building (NW Corner of Campus)
University of California at Berkeley
8:30 - 9:00 Registration and continental breakfast
9:00 - 9:15 Welcome to DIAC-92
9:15 - 10:15 Opening Address
Building Communities with Online Tools
- John Coate, Director of Interactive Services, 101 Online
When people log into online communication systems, they use new tools to
engage in an ancient activity - talking to each other. Systems become a
kind of virtual village. At the personal level they help people find
their kindred spirits. At the social level, they serve as an important
conduit of information, and become an essential element in a democratic
society.
10:15 - 10:45 Break
10:45 - 11:15 Presentation
Computer Networks in the Workplace: Factors Affecting the Use of
Electronic Communications
- Linda Parry and Robert Wharton, University of Minnesota
11:15 - 11:45 Presentation
Computer Workstations: The Occupational Hazard of the 21st Century
- Hal Sackman, California State University at Los Angeles
11:45 - 12:15 Presentation
MUDDING: Social Phenomena in Text-Based Virtual Realities
- Pavel Curtis, Xerox PARC
12:15 - 1:30 Lunch in Berkeley
1:30 - 2:00 Presentation
Community Memory: a Case Study in Community Communication
- Carl Farrington and Evelyn Pine, Community Memory
2:00 - 3:15 Panel Discussion
Funding Computer Science R&D
What is the current state of computer science funding in the U.S.? What
policy issues relate to funding? Should there be a civilian DARPA? How
does funding policy affect the universities? industry? Moderated by
Barbara Simons, Research Staff Member, IBM Almaden Research Center, San
Jose.
Panelists include
Mike Harrison, Computer Science Division, U.C. Berkeley
Gary Chapman, 21st Century Project Director, CPSR, Cambridge
Joel Yudken, Project on Regional and Industrial Economics, Rutgers University
3:15 - 3:45 Break
3:45 - 5:00 Panel Discussion
Virtual Society and Virtual Community
This panel looks at the phenomenon of virtual sociality. What are the
implications for society at large, and for network and interactive system
design in general? Moderated by Michael Travers, MIT Media Lab.
Panelists include:
Pavel Curtis, Xerox PARC
John Perry Barlow, Electronic Frontier Foundation (tentative)
Allucquere Rosanne Stone, University of California at San Diego
5:00 - 5:15 Day 1 Closing Remarks
May 3, 1992 Tolman Hall and Genetics and Plant Biology Building (NW
Corner of Campus) University of California at Berkeley
8:30 - 9:00 Registration and Continental Breakfast
Workshops* in Tolman Hall
The second day will consist of a wide variety of interactive workshops.
Many of the workshops will be working sessions in which output of some
kind will be produced.
9:00 - 10:40 Parallel Workshops I
Toward a Truly Global Network
- Larry Press, California State University, Dominguez Hills
Integration of an Ethics MetaFramework into the New CS Curriculum
- Dianne Martin, George Washington University
A Computer & Information Technologies Platform
- The Peace and Justice Working Group, CPSR/Berkeley
Hacking in the 90's
- Judi Clark
10:40 - 11:00 Break
11:00 - 12:40 Parallel Workshops II
Designing Computer Systems for Human (and Humane) Use
- Batya Friedman, Colby College
Examining Different Approaches to Community Access to Telecommunications
Systems
- Evelyn Pine
Third World Computing: Appropriate Technology for the Developed World?
- Philip Machanick, University of the Witwatersrand, South Africa
12:40 - 1:40 Lunch in Berkeley
1:40 - 3:20 Parallel Workshops III
Defining the Community Computing Movement: Some projects in and around
Boston
- Peter Miller, Somerville Community Computing Center
Future Directions in Developing Social Virtual Realities
- Pavel Curtis, Xerox PARC
Work, Power, and Computers
- Viborg Andersen, University of California at Irvine
Designing Local Civic Networks: Principles and Policies
- Richard Civille, CPSR, Washington Office
3:20 - 3:40 Break
3:40 - 5:00 Plenary Panel Discussion
Work in the Computer Industry
This panel discussion is free to the public.
Morris E. Cox Auditorium 100 Genetics and Plant Biology Building (NW
Corner of UCB Campus)
Is work in the computer industry different from work in other industries?
What is the nature of the work we do? In what ways is our situation
similar to other workers in relation to job security, layoffs, and
unions? Moderated by Denise Caruso.
Tentative panelists include:
Dennis Hayes
John Markoff, New York Times
Lenny Siegal
5:00 - 5:15 Closing remarks
There will also be demonstrations of a variety of community networking and
Mudding systems during the symposium.
* Presentation times and topics may be revised slightly.
Sponsored by Computer Professionals for Social Responsibility
P.O. Box 717
Palo Alto, CA 94301
DIAC-92 is co-sponsored by the American Association for Artificial
Intelligence, and the Boston Computer Society Social Impact Group, in
cooperation with ACM SIGCHI and ACM SIGCAS. DIAC-92 is partially
supported by the National Science Foundation under Grant No. DIR-9112572,
Ethics and Values Studies Office.
CPSR is a non-profit, national organization of computer professionals
concerned about the social implications of computing technologies in the
modern world. Since its founding in 1983, CPSR has achieved a strong
international reputation. CPSR has over 2500 members nationwide with
chapters in over 20 cities.
If you need additional information please contact Doug Schuler,
206-865-3832 (work) or 206-632-1659 (home), or Internet
dschuler@cs.washington.edu.
--- DIAC-92 Registration ---
Proceedings will be distributed at the symposium and are also available by
mail.
Send completed form with check or money order to:
DIAC-92 Registration
P.O. Box 2648
Sausalito, CA, 94966
USA
Name _______________________________________________________________
Address: ____________________________________________________________
City: ____________________ State: _______ Zip: _____________________
Electronic mail: ____________________________________________________
Symposium registration:
CPSR Member $40 __ Student $25 __
(or AAAI, BCS, ACM SIGCAS, ACM SIGCHI)
Non-member $50 __ Proceedings Only $20 __
New CPSR Membership $80 __ Proceedings Only (foreign) $25 __
(includes DIAC-92 Registration)
Additional Donation __
One day registration:
CPSR Member $25 __ Student $15 __
(or AAAI, BCS, ACM SIGCAS, ACM SIGCHI)
Non-member $30 __
Total enclosed $ _______
From: winfptr@duticai.tudelft.nl (P.A.J. Brijs afst Charles v.d. Mast)
Subject: WHO: VR Research at the Delft U of Technology, Netherlands
Date: Mon, 9 Mar 92 11:53:47 MET
About a year ago the faculty of Industrial Design Engineering of
the Delft University of Technology in the Netherlands purchased
the Virtuality system of W-industries ltd. to do research on and
with Virtual Reality technologies and principles.
Recently a project was started in cooperation with the faculty
of Technical Mathematics and Informatics. The goal of this project
is to develop an operational prototype of a software package
that enables the user to interactively create and evaluate
three-dimensional computer models of product concepts in a Virtual
Environment. Research will concentrate on whether VE techniques
can be used for Computer Aided Design in the conceptualizing phase
of the design process. The most important features of this toolkit
will be direct manipulation, real-time interactive modelling and
the use of intuitive design tools. The ecological approach to
perception will also be studied for its implementability, in order
to create a more functional and convincing virtual design environment.
We would like to receive information about similar activities at
other universities and research institutes, especially those
concerning multi-dimensional interfaces, interactive modelling and
perception theories with respect to virtual environments.
Are there any other institutes working with the Virtuality system
of W-industries?
Exchange of information might be interesting for both parties;
there is no point in reinventing the wheel.
Hoping to hear from you.
Peter Brijs
Tjeerd Hoek
---
Peter Brijs Delft University of Technology
Tjeerd Hoek faculties of
Industrial Design Engineering &
e-mail: hoek@dutiws.tudelft.nl Applied Mathematics and Informatics
---------------------------------------------------------------------------
\\ // ____ _____ _____
\\ // /____\ | | | ____\
\\ // // \\ | | || \\
\\// \\____// | | ||____// Virtual Environment for
\/ \____/ _|_|_ |_____/ Interactive Design
From: rwzobel@eos.ncsu.edu
Subject: WHO: Dan Vickers?
Date: Mon, 09 Mar 92 12:56:25 EST
Do you happen to have an address for, or know of Daniel Vickers? He worked
with Ivan Sutherland on the first fully functional HMD setup in 1970 at the
U. of Utah. (Rheingold's "Virtual Reality", p.106) My reason for asking
about him is that he developed a "wand" tool for interacting with a
virtual environment for his Ph.D (p. 110-111, and Ch.5 References on p.394).
Supposedly he has been at Lawrence Livermore National Labs since graduating
from Utah.
I would be very interested in getting a copy of his dissertation, but it
is apparently unpublished. Do you know of any similar research having to do
with tool creation and/or testing for use in virtual worlds?
Thanks-
Rick Zobel
NCSU
From: hsr4@vax.oxford.ac.uk
Subject: Re: TECH: VR and Fractals
Date: 9 Mar 92 18:43:09 GMT
Organization: Oxford University VAXcluster
In article <1992Feb28.163216.10474@u.washington.edu>, harry@harlqn.co.uk
(Harry Fearnhamm) writes:
> On 26 Feb 92 03:07:29 GMT, ariadne@cats.ucsc.edu (the Thread Princess) said:
>
>>I am not sure of the mathematical feasiblity of this, but has anyone
>>attempted the production of music via fractal patterns ? I'm thinking of the
>>legendary "soundscape" concept extolled by the makers of _musique concrete_.
>>Seems to me at least partially possible... use of Mandelbrots et alia to de-
>>fine qualities of "voice" and "melody"... synchronization to a graphic image.
>>Not sure if the result would be aesthetically pleasing, but it would be diff-
>>erent, and hey, people took to fractal-based visual art to the degree that I
>>see it on T-shirts.
>
> How about having an equivalence of luminousity, only using sound?
> That way, bits of a fractal image would be giving off `whispers' of
> sound, only in a way that reflects the underlying mathematical
> structure. So if you were to explore a 3D fractal, out in the `dark'
> areas you would hear a general whisper of sound (with some pretty
> complicated reverberation - or would the `spikiness' of the fractal
> surface tend to absorb sound like an anechoic chamber?), but as you
> approached a fractal boundary, you would be able to hear sounds whose
> tone/pitch/modulation/rhythm/whatever-parameter-you-wanted-to-vary
> would vary with the 3D structure around you. Such a setup would be
> computationally intensive, but with a sensible choice of variables,
> might just be achievable.
I won't get to attend, but on March 13th, as part of the Postgraduate Lecture
series, Dr R. Sherlaw-Johnson from the Department of Music will be talking on
'Composing with Fractals'. I'm not sure what use this information has, but it
does at least show that composition (if not production) is on the can-do list.
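Harry's luminosity-to-sound mapping could be sketched, purely as an
illustration, along these lines. The escape-time-to-pitch rule (semitone
offsets above a base frequency) is my own assumption, not anything proposed
in the thread:

```python
def mandelbrot_escape(c, max_iter=64):
    """Iterations before z = z*z + c escapes |z| > 2 (max_iter if it never does)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def path_to_pitches(points, base_hz=110.0):
    """Map escape times along a path in the complex plane to frequencies.

    Each point's escape time picks a semitone offset above base_hz, so
    'dark' regions far from the boundary give low, steady tones while
    points near the fractal boundary produce rapidly varying pitches.
    """
    return [base_hz * 2.0 ** ((mandelbrot_escape(c) % 25) / 12.0)
            for c in points]
```

Feeding the pitch list to any synthesis backend would then give the
"whispers" Harry describes, louder or denser near the boundary.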
+--------------------------------------+-------------------------------------+
| Peter G. Q. Brooks HSR4@UK.AC.OX.VAX | Kokoo koko kokko kokoon! |
| Health Services Research Unit | Koko kokkoko ? |
| Dept of Public Health & Primary Care | Koko kokko. |
| University of Oxford | Marjukka Grover, The Guardian |
+--------------------------------------+-------------------------------------+
From: schmidtg@iccgcc.decnet.ab.com
Subject: SCI: SIRDS
Date: 9 Mar 92 12:52:24 EST
Hello,
I've recently been introduced to SIRDS (single image random dot stereograms)
and my initial reaction was WOW! These are computer generated dot patterns
which have a stereoscopic effect when properly focused on. Several of them
appear in the April issue of GAMES magazine. They are intriguing since they
are contained in a single image and do not require any additional devices
(3D glasses, etc.) to view. It takes a bit of practice to develop the proper
focusing technique, but once attained it becomes easy (much like riding a
bicycle). Apparently, these images are produced by first generating a back-
ground of random black and white dots. Dimensional images are then created
by shifting the dots by an amount proportional to the desired depth illusion.
I was able to create a simple SIRDS by duplicating the random dot pattern
within a square against a random background.
Now a question, does anyone have any information on this technique or on
sirds in general? I would be interested in knowing the algorithm used to
generate these as I would like to create some more complex ones.
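The dot-shifting idea Greg describes can be sketched as follows. This is a
minimal illustration of the technique, not the algorithm used for the GAMES
images; the function and parameter names are mine:

```python
import random

def make_sirds(depth, eye_sep=60):
    """Generate a single-image random-dot stereogram from a depth map.

    depth: 2D list of small non-negative ints (0 = background plane;
    larger values appear closer). Returns a 2D list of 0/1 pixels.
    """
    rows = []
    for drow in depth:
        row = []
        for x in range(len(drow)):
            # The repeat period between matching dots shrinks where the
            # surface is closer; the eyes fuse this as raised depth.
            sep = eye_sep - 2 * drow[x]
            if x < sep:
                row.append(random.randint(0, 1))  # seed with random dots
            else:
                row.append(row[x - sep])          # repeat the earlier dot
        rows.append(row)
    return rows
```

Printing the 0/1 grid as spaces and asterisks, and then letting the eyes
diverge until adjacent repeats fuse, should make the shifted square float
above the background.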
-- Greg
+===========================================================================+
| Greg Schmidt ---> schmidtg@iccgcc.decnet.ab.com |
+---------------------------------------------------------------------------+
|"People with nothing to hide have nothing to fear from O.B.I.T" |
| -- Peter Lomax |
+---------------------------------------------------------------------------+
| Disclaimer: None of my statements can absolutely be proven to be correct. |
+===========================================================================+
From: ncf@kepler.unh.edu (Nicholas C Fitanides)
Subject: Re: PHIL: Are we in a virtual world now?
Date: Mon, 9 Mar 1992 17:30:47 GMT
Organization: University of New Hampshire - Durham, NH
What worlds are "real" and what are "virtual" are basically so because of
how we define the terms "real" and "virtual". The Universe is defined by
our perceptions--some would even argue to the point that nothing exists
that we do not perceive.
A "real" world (like the one we live in), looks real enough to call it "real".
A "virtual" world (like the one we might try to live in), might look good
enough to some (with bad eyesight) to call it "real", but to most other
people, isn't "real", so we call it "virtual".
Now there might be a possibility that we are now living in a "virtual" world,
but we call it "real", so that obscures the definition of the term, effectively
rendering any distinction useless since the different terms may describe the
same thing.
We define these terms within our "real" world, so it's difficult for us to
tell if it's "real" without consulting an outside observer. If you can find
an unbiased outside observer (God, maybe?), then you might be able to tell if
we're in a "real" world or a "virtual" world.
If we are in a virtual world, then what are our "virtual" worlds? Hyper-
virtual? The terminology blurs distinction.
So, we call this world "real", because it _appears_ real.
And we call other things "virtual", because they _appear_ so.
We're on the inside, so we're stuck with our definitions. They work, so far,
so there's no need to alter the definitions.
If this is a "virtual" world, then perhaps we shall see the "real" one when
we leave this world (or universe), like when we "die".
...merrily, merrily, merrily, merrily,
life is but a dream. :)
-Nick
--
+---------------------------------------------+-----------------------------+
// Amiga is the best. Nick Fitanides .Internet: ncf@kepler.unh.edu .
\X/ You keep the rest. CS dude . .
+----------------------+----------------------+-----------------------------+
From: pawel@cs.ualberta.ca (Pawel Gburzynski)
Subject: CONF: Technologies for High-Bandwidth Applications, Boston, Sep 92
Date: 9 Mar 92 01:00:31 GMT
Organization: University of Alberta, Edmonton, Canada
Crossposted from news.announce.conferences.
CALL FOR PAPERS
===============
Technologies for High-Bandwidth Applications
============================================
This is a part of the SPIE Symposium on
Enabling Technologies for Multi-Media, Multi-Service Networks
Boston, MA, 8 - 11 September, 1992
==================================
The convergence of high-bandwidth-communication, fast-computing, and
ubiquitous-network technologies offers new opportunities that will
impact our lives and work. The aim of this new conference is to bring
the community together and provide one cohesive and high-quality forum
to discuss ways of expanding our technical abilities in these areas. We
expect that, through multidisciplinary interaction, recent advances
will be highlighted and interdependence among technologies that are
part of high-bandwidth applications will be identified.
The topics of interest include design, development, and utilization of
high-bandwidth technologies for new multimedia applications, together
with all aspects of management and control of such applications.
High-bandwidth communication, massive memory systems, and new
approaches to the development of input-output devices that match
bandwidth and resolution requirements demanded by users will be
discussed.
We believe that this program will appeal to researchers, application
developers, current and future users of multimedia systems,
consultants, vendors, and students venturing into this new and
challenging multidisciplinary field. The conference program will try
to address the basic challenges associated with multimedia, such as:
- technologies for the user interface
- schemes for data storage and management
- multimedia application formats
- management and control schemes for the multimedia environment
- emerging standards for high-bandwidth system components
Papers describing the ongoing activities are solicited on the
following and related topics:
* High-Bandwidth User Environments
- new concepts and technologies
- nontraditional interface technologies
- experiments in nonverbal conceptualization
- special applications, for example, user interfaces for the
handicapped
* Massive Memory Systems
- disk arrays
- optical disks
- experimental technologies
* New Directions in Media Programming
- computer music
- image synthesis
- voice synthesis
- graphic and image languages
- data compression technologies
- directing and synchronizing of multiple data streams
* Markets for High-Bandwidth Applications
- medical applications
- engineering applications
- education applications
- new commercial and research directions
* Large-Scale Data Repositories
- architectures and access protocols
- emerging standards
- applications (images, sound, and mixed media)
* Standardization Issues
- multivendor cooperation and integration
- status of the standardization activities
- legal aspects (copyright, software protection)
Conference chair:
=================
Jacek Maitan, Lockheed Palo Alto Research Lab.
Cochairs:
=========
V. Ralph Algazi, University of California at Davis
Samuel S. Coleman, Lawrence Livermore National Lab.
Program Committee:
==================
Glendon R. Diener, Stanford University
Robert Frankle, Failure Analysis Associates
Pawel Gburzynski, University of Alberta (Canada)
Brian M. Hendrickson, Rome Lab.
James D. Hollan,
Louis M. Gomez, Bell Communications Research
Stephen R. Redfield, MCC
George R. Thoma, National Library of Medicine
IMPORTANT DATES:
================
Abstract Due Date: 9 March 1992
Camera-Ready Abstract Due Date: 20 July 1992
Manuscript Due Date: 10 August 1992
The abstract should consist of ca. 200 words and should contain enough detail
to clearly convey the approach and the results of the research. A short
biography of the author(s) should be included with the abstract.
Copyright to the manuscript is expected to be released for publication in the
conference Proceedings.
Abstracts can be sent to one of the following addresses:
Jacek Maitan
Lockheed Missiles and Space Company, Inc.
O/9150 B/251
3251 Hanover Street, Palo Alto, CA 94304 USA
E-mail: jmaitan@isi.edu
Fax: (415) 424-3115
Pawel Gburzynski
University of Alberta
Department of Computing Science
615 GSB
Edmonton, Alberta, CANADA T6G 2H1
E-mail: pawel@cs.ualberta.ca
Fax: (403) 492-1071
OE/FIBERS '92
SPIE, P.O.B. 10, Bellingham, WA 98227-0010 USA
Fax: (206) 647-1445
E-mail: spie@nessie.wwu.edu
CompuServe 71630,2177
========
From: rick@cs.arizona.edu (Rick Schlichting)
Subject: INDUSTRY: Kahaner Report: Tele-existence work at RCAST/U-Tokyo
Date: Mon, 9 Mar 92 22:19:59 MST
Crossposted from comp.research.japan (thanks, Rick):
[Dr. David Kahaner is a numerical analyst on sabbatical to the
Office of Naval Research-Asia (ONR Asia) in Tokyo from NIST. The
following is the professional opinion of David Kahaner and in no
way has the blessing of the US Government or any agency of it. All
information is dated and of limited life time. This disclaimer should
be noted on ANY attribution.]
[Copies of previous reports written by Kahaner can be obtained using
anonymous FTP from host cs.arizona.edu, directory japan/kahaner.reports.]
From: David K. Kahaner, ONR Asia [kahaner@cs.titech.ac.jp]
Re: Tele-existence work at RCAST/U-Tokyo (S. Tachi)
9 March 1992 This file is named "tachi.lab"
ABSTRACT. Tele-existence work at S. Tachi's University of Tokyo lab is
described.
INTRODUCTION.
In a recent report (19 Feb 1992, 3d-2-92.1) I described a paper by
Professor S. Tachi on tele-existence. His work seemed very interesting
and I decided to visit and look in person.
Professor Susumu Tachi
Research Center for Advanced Science and Technology
University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo 153 Japan
Tel: +81-3-3481-4467, Fax: +81-3-3481-4469, -4580
Email: TACHI@TANSEI.CC.U-TOKYO.AC.JP
Prof Tachi moved to U-Tokyo several years ago from the Mechanical
Engineering Laboratory in Tsukuba. MEL is run by AIST (Agency of
Industrial Science and Technology), which is part of MITI. About ten
years ago Tachi also spent a year at MIT.
Tachi began our conversation by showing me a chart describing the
different threads of research that are now converging to be part of what
is currently called Artificial (or Virtual) Reality, AR (VR). Below I
have extracted the text and reorganized it to better fit in the format
of this report. Looking backward it is fairly obvious that these
projects shared many common elements, although it is doubtful if the
researchers themselves thought in these terms.
Virtual console
(mouse, 3D mouse, virtual window,...)
Real time interactive 3D computer graphics
(computer graphics, 3D CG,...)
Virtual products
(CAD, 3D CAD, interactive 3D CAD,...)
Cybernetic interface
(man-machine human interface,...)
Responsive environment
(3D video/holography and art, interactive video and art,...)
Real time interactive computer simulation
(computer simulation, real time computer simulation,...)
Communication with a sensation of presence
(telephone, tele-conference,...)
Tele-existence/telepresence
(teleoperation, telerobotics,...)
Research into some of these topics began as early as the 1960s, for
example with Ivan Sutherland's computer graphics projects. The early
1980s saw rapid growth due to work by Furness, Kruger, Sheridan, and
others. Currently in the US, centers of excellence are at the
University of Washington, MIT, and UNC in the academic world, NASA and
NOSC within the US government, and at several small companies that are
marketing products. (This list is meant to be suggestive, not
exhaustive.)
Tachi's work was also mentioned in my report on virtual reality, vr.991,
9 Oct 1991. That report references a conference held July 1991, on AR
and Tele-existence. The proceedings of this conference have just
appeared and I have one copy. It is entirely in English and I recommend
it highly to anyone interested in getting a quick survey of the current
research activities in Japan. The proceedings also contain an
interesting panel discussion titled, "Will VR Become a Reality?" Titles,
and abstracts from this conference are appended to this report.
Finally, note that a newsgroup, "sci.virtual-worlds" is operating. Tachi
showed me the list of recent postings, and it is clear that this is a
very active communication vehicle.
Last year, I wrote a report summarizing a survey from the Japanese
Economic Planning Agency on technology through the year 2010 (see,
2010.epa, 27 Sept 1991). VR was one of the topics discussed there. In
the complete Japanese report more explanatory detail was presented.
The EPA estimates that practical realization will be sometime around
2020 (this seems further downstream than I would estimate). EPA
continues "Regarding the comparison of various countries' R&D efforts at
the present juncture in the VR field, both Japan and the US are actively
working on research and development of the system. At this particular
point, however, the US is more actively pursuing the system research. In
the US, for instance, the aerospace industry's R&D projects and
national-level projects in support of the industry, are conducting
truly large-scale simulations and are constructing excellent databases.
"Key technologies requiring breakthrough will be those used in
development of 3D simulation models, supercomputer application
technology, and self-growth-database technology.
"One form of social constraint standing in the way of practical
utilization of VR will be the effects of its possible application to
transportation systems as a part of the social infrastructure [sic].
Moreover, it will be influenced by the general public's perception and
value judgments regarding the system. Economic constraints will affect
business's attempts to establish a market and to drive costs down.
Moreover, the need for securing technical specialists in the software
development field and the resultant shortage in R&D funds, as well as
other difficulties involved in recruiting R&D personnel must be dealt
with.
"VR's market scale is estimated as reaching approximately the 1 trillion
yen level. Accompanying this market, there will be an increase in the
number of related industrial firms' research labs to as many as 100.
Software applications fields will be extensive, and a large number of
R&D divisions will be pursuing product research and development targeted
for various social strata.
"Positive impacts created by VR on the industrial economy will include
the emergence of a new simulation industry (which probably will maintain
supercomputers), and revitalization of computer software-houses,
computer industries, entertainment industry, and the aerospace industry.
The secondary effects will be experienced by the information industry,
the general communication-related industry, the publishing industry,
newspaper and magazine businesses, and by TV and radio broadcasting.
"Negative impacts will be felt by industries which have tended to hold
on to hardware-oriented products."
The point of this note is not to summarize VR work generally, but only
to describe the specific directions being taken at one lab. Tachi uses
the term "tele-existence" for technology that gives a human a real
sensation of being at another place and enables him/her to interact with
the remote environment, which can be real or artificial. This is clearly
an extension of the sensation we have when we use a joystick to move
figures around in a video game. In the US, the usual term is
"telepresence".
Related to this is work to amplify human muscle power and sensing
capability by using machines, while keeping human dexterity and the
sensation of direct operation. This work dates back at least to the
1960s, with the development of exoskeletons that could be worn like a
garment yet provide a safe but strong environment. Those projects were
not successful; damage to the garment would endanger the wearer.
Further, the technology of the time did not allow enough room for both
the human and the equipment needed to control the skeleton.
Another idea was that of teleoperation, or supervisory control, in which
a human master moves and a "slave" robot is synchronized to follow.
Tele-existence extends this notion: the human really should have the
sensation that he/she is performing the functions of the slave. This
occurs if the operator is provided with the very rich sensation of
presence which the slave has acquired.
In Tachi's lab, the tele-existence master-slave system consists of a
master system with visual and auditory presence sensation, a computer
system for control, and an anthropomorphic slave robot mechanism with an
arm having 7 degrees of freedom and a locomotion mechanism (caterpillar
track). The operator's head, right arm, right hand and other motion
such as feet, are measured by a system attached to the master in real
time. This information is sent to four MS DOS computers (386x33MHz with
coprocessors), and each computer generates commands to the corresponding
position of the slave. A servo controller governs the motion of the
slave. A six-axis force sensor on the wrist joint of the slave
measures the force and torque exerted on contact with an object, and
this measured signal is fed back to the computer in charge of arm
control via A to D converters. Force exerted at the hand when grasping
an object is also measured by a force sensor installed on the link
mechanism on the hand, and fed back to the appropriate computer via
another A to D converter. A stereo visual and auditory input system is
mounted on the neck of the slave, and this is sent back to the master
and displayed on a stereo display system in the helmet of the operator.
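The data flow of one cycle of such a bilateral master-slave loop can be
sketched as below. This is a deliberately simplified illustration of the
loop just described, not Tachi's implementation: the real system divides
the work across four PCs and a servo controller, and all the callables
here are hypothetical stand-ins for the actual sensors and actuators:

```python
def control_cycle(master_pose, read_wrist_force, command_slave, display_force):
    """One cycle: master motion out to the slave, contact force back.

    master_pose      -- measured pose of the operator's arm/hand
    read_wrist_force -- reads the 6-axis wrist sensor (via A/D in the lab)
    command_slave    -- sends a position command to the slave servos
    display_force    -- reflects measured force back at the master
    """
    command_slave(master_pose)                    # position command to slave
    fx, fy, fz, tx, ty, tz = read_wrist_force()   # force + torque on contact
    display_force((fx, fy, fz))                   # force feedback to operator
    return (fx, fy, fz)
```

Running this at servo rate closes the loop that gives the operator the
sensation of directly touching what the robot touches.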
Many of the characteristics of the robot are similar to that of a
human, for example the dimensions and arrangement of the degrees of
freedom, the motion range of each degree of freedom, and the speed of
movement, and motors are designed so that the appearance of the robot
arm resembles a human's.
Measured master movements are also sent to a Silicon Graphics Iris
workstation, which generates two shaded graphic images that are applied
to the 3D display via superimposers. The measured information on the
master's movement is used to change the viewing angle, the distance to
the object, and the relation between the object and the hand in real time.
The operator sees the 3D virtual environment in his/her view which
changes with movement. Interaction can be either with the real
environment which the robot observes or with the virtual environment
which the computer generates.
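The head-tracked stereo idea can be sketched as follows: each frame, the
tracked head position and heading place two virtual cameras half the
interocular distance to either side, so the rendered view keeps the same
spatial relation as direct observation. A minimal sketch with assumed
parameter names, not the lab's actual rendering code:

```python
import math

def eye_positions(head_pos, yaw, interocular=0.065):
    """Left/right virtual camera positions from a tracked head pose.

    head_pos    -- (x, y, z) of the head in world coordinates
    yaw         -- heading angle in radians about the vertical axis
    interocular -- eye separation in metres (0.065 is a typical value)
    """
    # Unit vector pointing to the viewer's right for this heading.
    rx, ry = math.cos(yaw), math.sin(yaw)
    half = interocular / 2.0
    x, y, z = head_pos
    left = (x - rx * half, y - ry * half, z)
    right = (x + rx * half, y + ry * half, z)
    return left, right
```

Rendering the scene once from each position and routing the images to the
corresponding LCDs yields the stereo pair; doing this fast enough as the
head moves is exactly where the workstation performance matters.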
There are many details of the system that are carefully explained in
Tachi's papers dating back to the mid 1980s and need not be repeated here.
I wondered about the choice of computer systems. Tachi commented that he
preferred DOS to Unix for the control computers because DOS made it
easier to process real time interrupts. On the other hand if workstation
performance is high enough interrupts can be handled in "virtually" real
time. The SG is fast enough to do the needed graphics as long as the
operator does not move his/her head too rapidly. Clearly, workstation
performance is an important consideration in real time computer
graphics.
Tachi demonstrated the system by sitting in a special "barber chair",
putting on the large helmet, and inserting his right arm into a movable
sling containing a grasping device that approximates the robot's arm.
His left hand holds a joystick that controls the robot's locomotion:
forward, back, right, left, and rotation. Once so configured, he
proceeded to make the robot stack a set of three small cubes.
I tried it next. The helmet is large and bulky, but is equipped with a
small fan so that there is good air circulation. It is also heavy, but
it is carried on a link mechanism that cancels the gravitational forces,
though not the inertia, somewhat like wearing the helmet underwater. The color
display (one for each eye) is composed of two six-inch LCDs (720x240
pixels). Resolution is good, better than I expected, but not crystal
clear. Tachi explained that humans obtain higher apparent resolution by
moving their heads when looking at objects, and that the same effect
works in the helmet. He also explained that the 3D view has the same
spatial relation as by direct observation (this is one place where
workstation performance is needed), and tuning of the system in this
area is one of the things that Tachi and his colleagues have been
working on for almost ten years. Tachi claims that operators can use the
system for several hours without tiring or nausea; I am not sure I could
last that long. Nevertheless, on my first try I was able to move the
robot to within grasping distance of the table, lift the three small
blocks, and stack them without dropping any. Training time: zero.
This is one of the most advanced experiments of its kind in Japan.
Tachi's lab is very close to downtown Tokyo and would be easy to reach.
Please refer to the reports referenced above, as well as the references
below for further descriptions of this project.
"Tele-existence (I): Design and Evaluation of a Visual Display with
Sensation of Presence", S. Tachi, et al, in Proceedings of RoManSy'84,
Udine, Italy, June 26-29 1984.
"Tele-existence in Real World and Virtual World", S. Tachi, et al, in
ICAR'91, (Fifth International Conference on Advanced Robotics), 19-22
June, Pisa, Italy.
-----------------------------------------------------------------
International Conference on Artificial Reality and Tele-Existence
9-10 July 1991, Tokyo Japan
Titles, authors, and abstracts of papers
Telepresence and Artificial Reality: Some Hard Questions
Thomas B. Sheridan (Massachusetts Institute of Technology, Cambridge, MA
02139, USA)
This paper proposes three measurable physical variables which
determine telepresence and virtual presence. It discusses
several aspects of human performance which might be
(differentially) affected by these forms of presence. The
paper suggests that distortions or filters in the afferent and
efferent communication channels should be further tested
experimentally for their effects on both presence and
performance. Finally it suggests models by which to
characterize both kinematic and dynamic properties of the
human-machine interface and how they affect both sense of
presence and performance.
A Virtual Environment for the Exploration of Three Dimensional Steady Flows
Steve Bryson (Sterling Software, Inc.)
Creon Levit (NASA Ames Research Center)
We describe a recently completed implementation of a virtual
environment for the analysis of three dimensional steady
flowfields. The hardware consists of a boom-mounted, six
degree-of-freedom head position sensitive stereo CRT system for
display, a VPL dataglove (tm) for placement of tracer particles
within the flow, and a Silicon Graphics 320 VGX workstation for
computation and rendering. The flowfields that we visualize
using the virtual environment are velocity vector fields
defined on curvilinear meshes, and are the steady-state
solutions to problems in computational fluid dynamics. The
system is applicable to the visualization of other vector
fields as well.
Visual Thinking in Organizational Analysis
Charles E. Grantham (College of Professional Studies, University of San
Francisco, San Francisco, CA, USA)
The ability to visualize the relationships among elements of
large complex databases is a trend which is yielding new
insights into several fields. I will demonstrate the use of
'visual thinking' as an analytical tool to the analysis of
formal, complex organizations. Recent developments in
organizational design and office automation are making the
visual analysis of workflows possible. An analytical mental
model of organizational functioning can be built upon a
depiction of information flows among work group members.
The dynamics of organizational functioning can be described in
terms of six essential processes. Furthermore, each of these
sub-systems develop within a staged cycle referred to as an
enneagram model. Together these mental models present a visual
metaphor of healthy function in large formal organizations,
both in static and dynamic terms. These models can be used to
depict the 'state' of an organization at points in time by
linking each process to quantitative data taken from
monitoring the flow of information in computer networks.
Virtual Reality for Home Game Machines
Allen Becker (President, Reflection Technology, Waltham, Massachusetts,
USA)
No abstract
What Should You Wear to an Artificial Reality?
Myron W Krueger (Artificial Reality Corporation, Box 786, Vernon, CT
06066)
No abstract
Bringing Virtual Worlds to the Real World: Toward A Global Initiative
Robert Jacobson (Human Interface Technology Laboratory, Washington
Technology Center, FJ-15, c/o University of Washington, Seattle, WA 98195
USA)
Virtual worlds technology promises to greatly expand both the
number of persons who use computers and the ways in which they
use them, applying the capability of the technology to satisfy
genuine human needs.
Independent development of virtual worlds technology has been
the norm for at least three decades, with researchers and
developers working privately to build unique virtual-worlds
systems. The result has been redundancy and a slow pace of
improvement in the basic technology and its applications. This
paper proposes a "global initiative" to coordinate and to some
extent unify R&D activities around the world, the quicker to
satisfy an eager market (that may not, however, stay eager for
long) and meet genuine human needs.
Virtuality (tm) - The World's First Production Virtual Reality Workstation
Jonathan D. Waldern (W. Industries Ltd., ITEC House, 26-28 Chancery
St., Leicester, LE1 5WD UK.)
The history of the development of W. Industries Virtuality (tm)
technology is covered. Virtuality (tm) technology comprises
three key low cost components that enable W. Industries to
offer Turnkey Virtual Reality products to its customers. The
key elements of the technology are discussed including the
Expality (tm) computer, Animette (tm) simulation software and
Visette (tm) visor. Virtuality products are housed in two
separate enclosures: the Stand-Up (SU) and the Sit-Down (SD). A
variety of tools such as the Sensor and Feedback gloves enable
user interaction with the virtual environments. Virtuality (tm)
products are used in commercial, scientific and leisure
applications.
Tele-Existence - Toward Virtual Existence in Real and/or Virtual Worlds
Susumu Tachi (RCAST, The University of Tokyo, 4-6-1 Komaba, Meguro-ku,
Tokyo 153, Japan)
Tele-existence is a concept named for the technology which
enables a human being to have a real-time sensation of being at
a place other than the place where he or she actually exists,
and to interact with the remote and/or virtual
environment. He or she can tele-exist in a real world where
the robot exists or in a virtual world which a computer has
generated. It is also possible to tele-exist in a combined
environment of real and virtual. Artificial reality or virtual
reality is a technology which presents to a human being a
sensation of being involved in a realistic virtual environment
other than the environment where he or she really exists, with
which he or she can interact. Thus tele-existence
and artificial reality are essentially the same technology
expressed in different manners. In this paper, the concept of
tele-existence is considered, and an experimental tele-
existence system is introduced, which enables a human operator
to have the sensation of being in a remote real environment
where a surrogate robot exists and/or virtual environment
synthesized by a computer.
Visualization Tool Applications of Artificial Reality
Michitaka Hirose (Department of Mechano-Informatics, the University of
Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113, Japan)
In the early stages of the history of computer development,
information processing capabilities of computers were not very
large, and only very limited capabilities such as text handling
were assigned for peripheral I/O devices. Since then, efforts
to make computers more powerful and intelligent have been
continually progressing. As a result, a new and serious
problem has arisen as to how to interact with the advanced
computational results. Namely, I/O or peripheral devices have
become the "bottlenecks" of information processing. Thus,
recently, the need for communication channels with wider
bandwidths between the human operator and the computer has arisen.
For example, in the near future, continuous media such as live
video or audio will become dominant in information processing,
rather than discrete media such as text consisting of ASCII codes.
Artificial reality research is aiming in the same direction, to
give computers a greater capability of displaying information
vividly and provide communication channels with very wide
bandwidths between the human operator and the computer.
Consequently, visualization is one of the most important
application fields of artificial reality.
Several artificial reality research projects focused on
visualization are being conducted in the Systems Engineering
Laboratory at University of Tokyo. In this paper, the major
projects are introduced. First, research to develop novel
I/O devices for interacting with virtual 3D space generated by
computer is introduced. Second, as one application, software
visualization research is introduced. The key point of this
research is how to give shape in virtual space to software
which has logical characteristics but originally no spatial
shape. Third, the capability of experiencing non-existent
environments through artificial reality is discussed. For
example, a world with a very low speed of light, wherein the
Einstein effect is visible, can be demonstrated. Finally, the importance
of artificial reality in the field of visualization is re-
emphasized.
Virtual Work Space for 3-Dimensional Modeling
Makoto Sato (Precision and Intelligence Laboratory, Tokyo Institute of
Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 227, Japan)
To develop a human interface for three-dimensional modeling it
is necessary to construct a virtual work space where we can
manipulate object models directly, just as in real space. In
this paper, we propose a new interface device SPIDAR (SPace
Interface Device for Artificial Reality) for this virtual work
space. SPIDAR can measure the position of a finger and give
force feedback to a finger when it touches the virtual object.
We construct a virtual work space using SPIDAR, and perform
experiments to show the effect of force feedback on the
accuracy of work in the virtual work space.
Force Display for Virtual Worlds
Hiroo Iwata (Institute of Engineering Mechanics, University of Tsukuba,
Tsukuba 305, Japan)
As virtual reality becomes more widespread, the need for
force feedback is increasingly recognized. The major objective of our
research is the development of force-feedback devices for virtual
worlds. The following three categories of force displays have been
developed:
(1) Desktop Force Display. A compact 9 degree-of-freedom
manipulator has been developed as a tactile input
device. The manipulator applies reaction forces to the fingers
and palm of the operator. The generated forces are calculated
from a solid model of the virtual space. The performance of
the system is exemplified in the manipulation of virtual solid
objects such as a mockup for industrial design.
(2) Texture Display. Active touch of an index finger is
simulated by a 2 DOF small master manipulator. The uneven
surface of a virtual object is presented by controlling the
direction of the reaction force vector.
(3) Force Display for Walkthrough. This system creates
an illusion of walking through large-scale virtual spaces such
as buildings or urban space. The walker wears omnidirectional
sliding devices on the feet, which generate the feel of walking
while the position of the walker is fixed in the physical world.
The system provides the feeling of an uneven surface of a virtual
space, such as a staircase. While the walker goes up or down
stairs, the feet are pulled by strings in order to apply
reaction force from the surface.
Psychophysical Analysis of the "Sensation of Reality" Induced by a Visual
Wide-Field Display
Toyohiko Hatada, Haruo Sakata, Hideo Kusaka (Auditory and Visual
Information Processing Research Group, Broadcasting Science Research
Labs at the Japan Broadcasting Corp (NHK), 1-10-11, Kinuta,
Setagaya-ku, Tokyo 157, Japan)
This report summarizes research on the psychological effects
induced by certain parameters of visual wide-field display.
These effects were studied by means of three different series
of experiments. The first experiment series evaluated the
subjectively induced tilt angle of the observer's coordinate
axes when presented with a tilted stimulus display pattern.
The second experiments were to determine the observer's eye and
head movements for information acquisition when presented with
a visual wide-field display. The third series of experiments
was to determine, by means of a subjective 7-step evaluation
scale, the viewing conditions under which the display induces a
sensation of reality in the observer. From the results of
these experiments it was concluded that a visual display with
horizontal viewing angles ranging from 30 degrees to 100
degrees and vertical angles from 20 degrees to 80 degrees
produces psychological effects that give a sensation of
reality.
Undulation Detection of Virtual Shape by Fingertip using Force-Feedback Input
Device
Yukio Fukui and Makoto Shimojo (Industrial Products Research Institute,
1-1-4, Higashi, Tsukuba, Ibaraki 305 Japan)
To trace along the contours of virtual shapes, a force feedback
input device was developed. Two kinds of experiment using the
device were carried out. The visual and tactile sensitivities
of undulation on a curved contour were obtained. The visual
sensitivity was superior to the tactile one. However, tactile
sensing by the force feedback mechanism in addition to the
visual sensing brought about a more reliable result.
--------------------------------END OF REPORT---------------------------
From: thinman@netcom.com (Lance Norskog)
Subject: TECH: Object collision & coprocessors
Date: Mon, 9 Mar 92 20:29:38 PST
The collision detection problem is one of the few "cleavage planes"
where VR software can be broken apart into co-processors.
Given a separate copy of the world model, the simulator processor
can run the simulation a small number of time ticks ahead,
processing the known changes (objects flying around in straight lines)
and saving the state for the unfinished computation of each time tick.
The deferred computation for time T would then be finished at time T
given the latest mouse or glove input. There is no great communication
bandwidth needed between the simulator and the main processor;
this last is the Achilles' heel of coprocessor-based systems.
The simulator can even apply predictive (Kalman) filtering to the
mouse and glove motions and tentatively do the collision prediction
anyway. If the prediction for time T is within N% of the actual input
for time T, you just go ahead and use the predicted version instead.
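The lookahead-plus-prediction scheme described above can be sketched roughly as follows. This is a minimal illustration, not Lance's implementation: the function names, the one-dimensional "objects flying in straight lines" state, and the 5% default tolerance are all my own assumptions.

```python
# Hedged sketch of speculative simulation with an acceptance test on
# predicted input. All names and the tolerance value are illustrative.

def within_tolerance(predicted, actual, pct):
    """True if every predicted input component is within pct% of the actual."""
    return all(abs(p - a) <= abs(a) * pct / 100.0 or abs(p - a) < 1e-9
               for p, a in zip(predicted, actual))

def step(state, user_input, dt):
    """Advance an object flying in a straight line, folding in user input.

    state is (position, velocity); user_input is a tuple whose first
    component nudges the position (a stand-in for mouse/glove motion).
    """
    x, v = state
    return (x + v * dt + user_input[0] * dt, v)

def simulate_tick(state, predicted_input, actual_input, dt, pct=5.0):
    """Run the tick ahead of time with the predicted input; at time T,
    keep the speculative result only if the prediction was close enough,
    otherwise finish the deferred computation with the real input."""
    speculative = step(state, predicted_input, dt)
    if within_tolerance(predicted_input, actual_input, pct):
        return speculative                    # prediction accepted
    return step(state, actual_input, dt)      # miss: redo with actual input
```

In a real system the Kalman filter would supply `predicted_input`, and the saved per-tick state would let the coprocessor hand back only small deltas, which is the low-bandwidth property the post relies on.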
Another major cleavage plane is sound generation. Note that this
can also be generated and partially mixed down in advance using
the same techniques as the simulation processor.
Lance Norskog
From: douglas@atc.boeing.com
Subject: CONF: DIAC-92 Program, Berkeley, CA, May 1992
Date: Mon, 9 Mar 92 12:24:19 PST
Enclosed is the *final* DIAC-92 program. I hope that most of you will
join me for two days in *sunny* Berkeley this coming May. Also any
distribution of this program is very much appreciated! Thanks.
-- Doug
------------------------------------------------------------
Are computers part of the problem or ... ?
DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING
DIAC-92 Symposium Berkeley, California U.S.A
May 2 - 3, 1992 8:30 AM - 5:30 PM
The DIAC Symposia are biannual explorations of the social implications of
computing. In previous symposia such topics as virtual reality, high tech
weaponry, computers and education, affectionate technology, computing and
the disabled, and many others have been highlighted. Our fourth DIAC
Symposium, DIAC-92, offers insights on computer networks, computers in
the workplace, national R&D priorities and many other topics.
DIAC-92 will be an invigorating experience for anyone
with an interest in computers and society.
May 2, 1992
Morris E. Cox Auditorium
100 Genetics and Plant Biology Building (NW Corner of Campus)
University of California at Berkeley
8:30 - 9:00 Registration and continental breakfast
9:00 - 9:15 Welcome to DIAC-92
9:15 - 10:15 Opening Address
Building Communities with Online Tools
- John Coate, Director of Interactive Services, 101 Online
When people log into online communication systems, they use new tools to
engage in an ancient activity - talking to each other. Systems become a
kind of virtual village. At the personal level they help people find
their kindred spirits. At the social level, they serve as an important
conduit of information, and become an essential element in a democratic
society.
10:15 - 10:45 Break
10:45 - 11:15 Presentation
Computer Networks in the Workplace: Factors Affecting the Use of
Electronic Communications
- Linda Parry and Robert Wharton, University of Minnesota
11:15 - 11:45 Presentation
Computer Workstations: The Occupational Hazard of the 21st Century
- Hal Sackman, California State University at Los Angeles
11:45 - 12:15 Presentation
MUDDING: Social Phenomena in Text-Based Virtual Realities
- Pavel Curtis, Xerox PARC
12:15 - 1:30 Lunch in Berkeley
1:30 - 2:00 Presentation
Community Memory: a Case Study in Community Communication
- Carl Farrington and Evelyn Pine, Community Memory
2:00 - 3:15 Panel Discussion
Funding Computer Science R&D
What is the current state of computer science funding in the U.S.? What
policy issues relate to funding? Should there be a civilian DARPA? How
does funding policy affect the universities? industry? Moderated by
Barbara Simons, Research Staff Member, IBM Almaden Research Center, San
Jose.
Panelists include
Mike Harrison, Computer Science Division, U.C. Berkeley
Gary Chapman, 21st Century Project Director, CPSR, Cambridge
Joel Yudken, Project on Regional and Industrial Economics, Rutgers University
3:15 - 3:45 Break
3:45 - 5:00 Panel Discussion
Virtual Society and Virtual Community
This panel looks at the phenomenon of virtual sociality. What are the
implications for society at large, and for network and interactive system
design in general? Moderated by Michael Travers, MIT Media Lab.
Panelists include:
Pavel Curtis, Xerox PARC
John Perry Barlow, Electronic Frontier Foundation (tentative)
Allucquere Rosanne Stone, University of California at San Diego
5:00 - 5:15 Day 1 Closing Remarks
May 3, 1992 Tolman Hall and Genetics and Plant Biology Building (NW
Corner of Campus) University of California at Berkeley
8:30 - 9:00 Registration and Continental Breakfast
Workshops* in Tolman Hall
The second day will consist of a wide variety of interactive workshops.
Many of the workshops will be working sessions in which output of some
kind will be produced.
9:00 - 10:40 Parallel Workshops I
Toward a Truly Global Network
- Larry Press, California State University, Dominguez Hills
Integration of an Ethics MetaFramework into the New CS Curriculum
- Dianne Martin, George Washington University
A Computer & Information Technologies Platform
- The Peace and Justice Working Group, CPSR/Berkeley
Hacking in the 90's
- Judi Clark
10:40 - 11:00 Break
11:00 - 12:40 Parallel Workshops II
Designing Computer Systems for Human (and Humane) Use
- Batya Friedman, Colby College
Examining Different Approaches to Community Access to Telecommunications
Systems
- Evelyn Pine
Third World Computing: Appropriate Technology for the Developed World?
- Philip Machanick, University of the Witwatersrand, South Africa
12:40 - 1:40 Lunch in Berkeley
1:40 - 3:20 Parallel Workshops III
Defining the Community Computing Movement: Some projects in and around
Boston
- Peter Miller, Somerville Community Computing Center
Future Directions in Developing Social Virtual Realities
- Pavel Curtis, Xerox PARC
Work, Power, and Computers
- Viborg Andersen, University of California at Irvine
Designing Local Civic Networks: Principles and Policies
- Richard Civille, CPSR, Washington Office
3:20 - 3:40 Break
3:40 - 5:00 Plenary Panel Discussion
Work in the Computer Industry
This panel discussion is free to the public.
Morris E. Cox Auditorium 100 Genetics and Plant Biology Building (NW
Corner of UCB Campus)
Is work in the computer industry different from work in other industries?
What is the nature of the work we do? In what ways is our situation
similar to other workers in relation to job security, layoffs, and
unions? Moderated by Denise Caruso.
Tentative Panelists include
Dennis Hayes
John Markoff, New York Times
Lenny Siegal
5:00 - 5:15 Closing remarks
There will also be demonstrations of a variety of community networking and
Mudding systems during the symposium.
* Presentation times and topics may be revised slightly.
Sponsored by Computer Professionals for Social Responsibility
P.O. Box 717
Palo Alto, CA 94301
DIAC-92 is co-sponsored by the American Association for Artificial
Intelligence, and the Boston Computer Society Social Impact Group, in
cooperation with ACM SIGCHI and ACM SIGCAS. DIAC-92 is partially
supported by the National Science Foundation under Grant No. DIR-9112572,
Ethics and Values Studies Office.
CPSR is a non-profit, national organization of computer professionals
concerned about the social implications of computing technologies in the
modern world. Since its founding in 1983, CPSR has achieved a strong
international reputation. CPSR has over 2500 members nationwide with
chapters in over 20 cities.
If you need additional information please contact Doug Schuler,
206-865-3832 (work) or 206-632-1659 (home), or Internet
dschuler@cs.washington.edu.
--- DIAC-92 Registration ---
Proceedings will be distributed at the symposium and are also available by
mail.
Send completed form with check or money order to:
DIAC-92 Registration
P.O. Box 2648
Sausalito, CA, 94966
USA
Name _______________________________________________________________
Address: ____________________________________________________________
City: ____________________ State: _______ Zip: _____________________
Electronic mail: ____________________________________________________
Symposium registration:
CPSR Member $40 __ Student $25 __
(or AAAI, BCS, ACM SIGCAS, ACM SIGCHI)
Non-member $50 __ Proceedings Only $20 __
New CPSR Membership $80 __
(includes DIAC-92 Registration)
Proceedings Only (foreign) $25 __
Additional Donation __
One day registration:
CPSR Member $25 __ Student $15 __
(or AAAI, BCS, ACM SIGCAS, ACM SIGCHI)
Non-member $30 __
Total enclosed $ _______
From: winfptr@duticai.tudelft.nl (P.A.J. Brijs afst Charles v.d. Mast)
Subject: WHO: VR Research at the Delft U of Technology, Netherlands
Date: Mon, 9 Mar 92 11:53:47 MET
About a year ago the faculty of Industrial Design Engineering of
the Delft University of Technology in the Netherlands purchased
the Virtuality system of W-industries ltd. to do research on and
with Virtual Reality technologies and principles.
Recently a project was started in cooperation with the faculty
of Technical Mathematics and Informatics. The goal of this project
is to develop an operational prototype of a software package
which enables the user to interactively create and evaluate three-
dimensional computer models of product concepts in a Virtual
Environment. Research will concentrate on the question of whether
VE techniques can be used for Computer Aided Design in the
conceptualizing phase of the design process. The most important
features of this toolkit will be direct manipulation, real-time
interactive modelling and the use of intuitive design tools.
The ecological approach to perception theories will be studied
for its implementability in order to create a more functional
and convincing virtual design environment.
We would like to receive information about similar activities at
other universities and research institutes, especially those that
concern multi-dimensional interfaces, interactive modelling and
perception theories with respect to virtual environments.
Are there any other institutes working with the Virtuality system
of W-industries?
Exchange of information might be interesting for both parties. It's
no use inventing the wheel twice.
Hoping to hear from you.
Peter Brijs
Tjeerd Hoek
---
Peter Brijs Delft University of Technology
Tjeerd Hoek faculties of
Industrial Design Engineering &
e-mail: hoek@dutiws.tudelft.nl Applied Mathematics and Informatics
---------------------------------------------------------------------------
\\ // ____ _____ _____
\\ // /____\ | | | ____\
\\ // // \\ | | || \\
\\// \\____// | | ||____// Virtual Environment for
\/ \____/ _|_|_ |_____/ Interactive Design
From: rwzobel@eos.ncsu.edu
Subject: WHO: Dan Vickers?
Date: Mon, 09 Mar 92 12:56:25 EST
Do you happen to have an address for, or know of Daniel Vickers? He worked
with Ivan Sutherland on the first fully functional HMD setup in 1970 at the
U. of Utah. (Rheingold's "Virtual Reality", p.106) My reason for asking
about him is that he developed a "wand" tool for interacting with a
virtual environment for his Ph.D (p. 110-111, and Ch.5 References on p.394).
Supposedly he has been at Lawrence Livermore National Labs since graduating
from Utah.
I would be very interested in getting a copy of his dissertation, but it
is apparently unpublished. Do you know of any similar research having to do
with tool creation and/or testing for use in virtual worlds?
Thanks-
Rick Zobel
NCSU
From: hsr4@vax.oxford.ac.uk
Subject: Re: TECH: VR and Fractals
Date: 9 Mar 92 18:43:09 GMT
Organization: Oxford University VAXcluster
In article <1992Feb28.163216.10474@u.washington.edu>, harry@harlqn.co.uk
(Harry Fearnhamm) writes:
> On 26 Feb 92 03:07:29 GMT, ariadne@cats.ucsc.edu (the Thread Princess) said:
>
>>I am not sure of the mathematical feasiblity of this, but has anyone
>>attempted the production of music via fractal patterns ? I'm thinking of the
>>legendary "soundscape" concept extolled by the makers of _musique concrete_.
>>Seems to me at least partially possible... use of Mandelbrots et alia to de-
>>fine qualities of "voice" and "melody"... synchronization to a graphic image.
>>Not sure if the result would be aesthetically pleasing, but it would be diff-
>>erent, and hey, people took to fractal-based visual art to the degree that I
>>see it on T-shirts.
>
> How about having an equivalence of luminousity, only using sound?
> That way, bits of a fractal image would be giving off `whispers' of
> sound, only in a way that reflects the underlying mathematical
> structure. So if you were to explore a 3D fractal, out in the `dark'
> areas you would hear a general whisper of sound (with some pretty
> complicated reverberation - or would the `spikiness' of the fractal
> surface tend to absorb sound like an anechoic chamber?), but as you
> approached a fractal boundary, you would be able to hear sounds whose
> tone/pitch/modulation/rhythm/whatever-parameter-you-wanted-to-vary
> would vary with the 3D structure around you. Such a setup would be
> computationally intensive, but with a sensible choice of variables,
> might just be achievable.
I won't get to attend, but on March 13th., as part of the Postgraduate Lecture
series, Dr R. Sherlaw-Johnson from the Department of Music will be talking on
'Composing with Fractals'. I'm not sure what use this information has, but it
does at least show that composition (if not production) is on the can-do list.
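The luminosity-to-sound mapping Harry describes could be prototyped along these lines. This is only a sketch under my own assumptions: the escape-time iteration cap and the 110-880 Hz pitch range are arbitrary choices, not anything proposed in the thread.

```python
# Hedged sketch: map Mandelbrot escape time to a pitch, so points near
# the fractal boundary (slow escape) "whisper" at a different frequency
# than points far outside it. All constants here are illustrative.

def escape_time(c, max_iter=64):
    """Iterations of z -> z*z + c until |z| exceeds 2 (Mandelbrot test)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # treated as "inside" the set

def point_to_pitch(c, lo_hz=110.0, hi_hz=880.0, max_iter=64):
    """Linearly map escape time onto a pitch range: fast-escaping exterior
    points sound low, interior/boundary points sound high."""
    n = escape_time(c, max_iter)
    return lo_hz + (hi_hz - lo_hz) * n / max_iter
```

Sweeping `c` along a path through the plane and synthesizing each pitch for a few milliseconds would give the kind of position-dependent soundscape discussed above; the reverberation and 3D aspects would of course need real audio-rendering machinery on top.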
+--------------------------------------+-------------------------------------+
| Peter G. Q. Brooks HSR4@UK.AC.OX.VAX | Kokoo koko kokko kokoon! |
| Health Services Research Unit | Koko kokkoko ? |
| Dept of Public Health & Primary Care | Koko kokko. |
| University of Oxford | Marjukka Grover, The Guardian |
+--------------------------------------+-------------------------------------+
From: schmidtg@iccgcc.decnet.ab.com
Subject: SCI: SIRDS
Date: 9 Mar 92 12:52:24 EST
Hello,
I've recently been introduced to SIRDS (single image random dot stereograms)
and my initial reaction was WOW! These are computer-generated dot patterns
which have a stereoscopic effect when properly focused on. Several of them
appear in the April issue of GAMES magazine. They are intriguing since they
are contained in a single image and do not require any additional devices
(3D glasses etc.) to view. It takes a bit of practice to develop the proper
focusing technique, however once attained it becomes easy (much like riding a
bicycle). Apparently, these images are produced by first generating a
background of random black and white dots. Dimensional images are then created
by shifting the dots by an amount proportional to the desired depth illusion
to be created. I was able to create a simple SIRDS by duplicating the random
dot patterns within a square against a random background.
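The dot-shifting idea described above can be sketched as a tiny generator. This is a simplified illustration under my own assumptions (the eye-separation constant and the depth convention are arbitrary), not the published SIRDS algorithm.

```python
# Hedged sketch of a single-image random-dot stereogram: within each scan
# line, pixel x is constrained to equal pixel x - s, where the separation
# s shrinks as the depth value grows (nearer surfaces shift dots more).
import random

def sirds_row(depth_row, width, eye_sep=60):
    """Generate one stereogram scan line from a row of depth values.

    depth values must be small non-negative integers (less than eye_sep);
    depth 0 is the background plane.
    """
    row = [0] * width
    for x in range(width):
        s = eye_sep - depth_row[x]        # link distance for this pixel
        if x < s:
            row[x] = random.randint(0, 1)  # unconstrained: free random dot
        else:
            row[x] = row[x - s]            # constrained: copy the linked dot
    return row

def sirds(depth_map, width):
    """Full image: one constrained random-dot row per depth-map row."""
    return [sirds_row(r, width) for r in depth_map]
```

Viewed with the usual diverged focus, regions of constant depth fuse at a plane whose apparent distance depends on the repeat separation; real generators add per-pixel constraint propagation and hidden-surface handling, which this sketch omits.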
Now a question, does anyone have any information on this technique or on
sirds in general? I would be interested in knowing the algorithm used to
generate these as I would like to create some more complex ones.
-- Greg
+===========================================================================+
| Greg Schmidt ---> schmidtg@iccgcc.decnet.ab.com |
+---------------------------------------------------------------------------+
|"People with nothing to hide have nothing to fear from O.B.I.T" |
| -- Peter Lomax |
+---------------------------------------------------------------------------+
| Disclaimer: None of my statements can absolutely be proven to be correct. |
+===========================================================================+
From: ncf@kepler.unh.edu (Nicholas C Fitanides)
Subject: Re: PHIL: Are we in a virtual world now?
Date: Mon, 9 Mar 1992 17:30:47 GMT
Organization: University of New Hampshire - Durham, NH
What worlds are "real" and what are "virtual" are basically so because of
how we define the terms "real" and "virtual". The Universe is defined by
our perceptions--some would even argue to the point that nothing exists
that we do not perceive.
A "real" world (like the one we live in), looks real enough to call it "real".
A "virtual" world (like the one we might try to live in), might look good
enough to some (with bad eyesight) to call it "real", but to most other
people, isn't "real", so we call it "virtual".
Now there might be a possibility that we are living in a "virtual" world,
but we call it "real", so that obscures the definition of the term, effectively
rendering any distinction useless since the different terms may describe the
same thing.
We define these terms within our "real" world, so it's difficult for us to
tell if it's "real" without consulting an outside observer. If you can find
an unbiased outside observer (God, maybe?), then you might be able to tell if
we're in a "real" world or a "virtual" world.
If we are in a virtual world, then what are our "virtual" worlds? Hyper-
virtual? The terminology blurs distinction.
So, we call this world "real", because it _appears_ real.
And we call other things "virtual", because they _appear_ so.
We're on the inside, so we're stuck with our definitions. They work, so far,
so there's no need to alter the definitions.
If this is a "virtual" world, then perhaps we shall see the "real" one when
we leave this world (or universe), like when we "die".
...merrily, merrily, merrily, merrily,
life is but a dream. :)
-Nick
--
+---------------------------------------------+-----------------------------+
// Amiga is the best. Nick Fitanides .Internet: ncf@kepler.unh.edu .
\X/ You keep the rest. CS dude . .
+----------------------+----------------------+-----------------------------+
From: pawel@cs.ualberta.ca (Pawel Gburzynski)
Subject: CONF: Technologies for High-Bandwidth Applications, Boston, Sep 92
Date: 9 Mar 92 01:00:31 GMT
Organization: University of Alberta, Edmonton, Canada
Crossposted from news.announce.conferences.
CALL FOR PAPERS
===============
Technologies for High-Bandwidth Applications
============================================
This is a part of the SPIE Symposium on
Enabling Technologies for Multi-Media, Multi-Service Networks
Boston, MA, 8 - 11 September, 1992
==================================
The convergence of high-bandwidth-communication, fast-computing, and
ubiquitous-network technologies offers new opportunities that will
impact our lives and work. The aim of this new conference is to bring
the community together and provide one cohesive and high-quality forum
to discuss ways of expanding our technical abilities in these areas. We
expect that, through multidisciplinary interaction, recent advances
will be highlighted and interdependence among technologies that are
part of high-bandwidth applications will be identified.
The topics of interest include design, development, and utilization of
high-bandwidth technologies for new multimedia applications, together
with all aspects of management and control of such applications.
High-bandwidth communication, massive memory systems, and new
approaches to the development of input-output devices that match
bandwidth and resolution requirements demanded by users will be
discussed.
We believe that this program will appeal to researchers, application
developers, current and future users of multimedia systems,
consultants, vendors, and students venturing into this new and
challenging multidisciplinary field. The conference program will try
to address the basic challenges associated with multimedia, such as:
- technologies for the user interface
- schemes for data storage and management
- multimedia application formats
- management and control schemes for the multimedia environment
- emerging standards for high-bandwidth system components
Papers describing the ongoing activities are solicited on the
following and related topics:
* High-Bandwidth User Environments
- new concepts and technologies
- nontraditional interface technologies
- experiments in nonverbal conceptualization
- special applications, for example, user interfaces for the
handicapped
* Massive Memory Systems
- disk arrays
- optical disks
- experimental technologies
* New Directions in Media Programming
- computer music
- image synthesis
- voice synthesis
- graphic and image languages
- data compression technologies
- directing and synchronizing of multiple data streams
* Markets for High-Bandwidth Applications
- medical applications
- engineering applications
- education applications
- new commercial and research directions
* Large-Scale Data Repositories
- architectures and access protocols
- emerging standards
- applications (images, sound, and mixed media)
* Standardization Issues
- multivendor cooperation and integration
- status of the standardization activities
- legal aspects (copyright, software protection)
Conference chair:
=================
Jacek Maitan, Lockheed Palo Alto Research Lab.
Cochairs:
=========
V. Ralph Algazi, University of California at Davis
Samuel S. Coleman, Lawrence Livermore National Lab.
Program Committee:
==================
Glendon R. Diener, Stanford University
Robert Frankle, Failure Analysis Associates
Pawel Gburzynski, University of Alberta (Canada)
Brian M. Hendrickson, Rome Lab.
James D. Hollan,
Louis M. Gomez, Bell Communications Research
Stephen R. Redfield, MCC
George R. Thoma, National Library of Medicine
IMPORTANT DATES:
================
Abstract Due Date: 9 March 1992
Camera-Ready Abstract Due Date: 20 July 1992
Manuscript Due Date: 10 August 1992
The abstract should consist of ca. 200 words and should contain enough detail
to clearly convey the approach and the results of the research. A short
biography of the author(s) should be included with the abstract.
Copyright to the manuscript is expected to be released for publication in the
conference Proceedings.
Abstracts can be sent to one of the following addresses:
Jacek Maitan
Lockheed Missiles and Space Company, Inc.
O/9150 B/251
3251 Hanover Street, Palo Alto, CA 94304 USA
E-mail: jmaitan@isi.edu
Fax: (415) 424-3115
Pawel Gburzynski
University of Alberta
Department of Computing Science
615 GSB
Edmonton, Alberta, CANADA T6G 2H1
E-mail: pawel@cs.ualberta.ca
Fax: (403) 492-1071
OE/FIBERS '92
SPIE, P.O.B. 10, Bellingham, WA 98227-0010 USA
Fax: (206) 647-1445
E-mail: spie@nessie.wwu.edu
CompuServe 71630,2177