Stanford Interactive Workspaces Project
July 18, 2001 - The
Event Heap, part of the iRoom infrastructure, is now available for download
on our software page.
The Interactive Workspaces Project at Stanford is exploring new possibilities
for people to work together in technology-rich spaces with computing and interaction
devices on many different scales. It is made up of faculty and students from
the areas of Graphics, Human-Computer Interaction, Networking,
and Databases. We have built experimental
hardware and software testbeds that include large, high-resolution wall-mounted
and tabletop displays (which we call Interactive Murals and Interactive
Tables), as well as small, personal mobile computing devices such as laptops
and PDAs connected through a wireless LAN. Specialized input and output devices
such as LCD-tablets, laser pointer trackers, microphone arrays and pan-and-tilt
cameras are also present in the environment. The research builds on previous
work in graphics architectures, scientific visualization, ubiquitous computing,
multimodal interaction, computer-supported cooperative work, and distributed
system architectures. The environment is being developed through building several
applications projects, in collaboration with
faculty from a number of other departments, including Civil Engineering, Medical
Informatics, and the Stanford Learning Lab. In this research, we focus both
on the new software and hardware technologies and the realities of human interaction
and work in a variety of application domains.
Motivation
Most of today's computing environments are designed to support
the interaction between one person and one computer. The user
sits at a workstation or laptop, or holds a PDA, focusing on a single
device at a time (even if there are several around and they are
linked and synchronized). Collaboration is accomplished over the
network, using email, shared files, or in some cases explicitly
designed "groupware". In non-computerized work settings,
on the other hand, people interact in a rich environment that
includes information from many sources (paper, whiteboards, computers,
physical models, etc.) and are able to use these simultaneously
and move among them flexibly and quickly. The few existing integrated
multi-device computer environments today tend to be highly specialized
and based on application-specific software.
We are designing and experimenting with multi-device, multi-user
environments based on a new architecture that makes it easy to
create and add new display and input devices, to move work of
all kinds from one computing device to another, and to support
and facilitate group interactions. In the same way that today's
standard operating systems make it feasible to write single-workstation
software that makes use of multiple devices and networked resources,
we are constructing a higher level operating system for the world
of ubiquitous computing.
We have chosen to focus our current work on an augmented, dedicated
space (a meeting room, rather than an individual's office or home,
or a tele-connected set of spaces), and to concentrate on task-oriented
work (rather than entertainment, personal communication, or ambient
information). In the future, it is likely that technology of
the kind we are using will become cheap enough to be part of the
common living space for many people, and we anticipate that the
infrastructure we are building will be put to a wider range of
uses. We also start with the recognition that the environments
we build are situated in a larger context, in which people work
individually at workstations, in remote locations with mobile
devices, in person without computer augmentation, etc. The interactive
workspace is not a replacement for these other ways of working,
but an addition to them, enhancing high-information, high-interaction
collaborative activities.
Experimental Facilities
We have constructed two laboratory facilities in which to pursue
the research:
- The interactive mural (located
in the Gates 3rd floor graphics lab) is a large, high-resolution,
tiled display, constructed using 8 projectors connected either to
an SGI dual-pipe IR or a cluster of 8 Myrinet-connected PCs with
NVIDIA graphics cards. We have designed and implemented a scalable graphics
library that provides a single virtual display abstraction to the
programmer, even though the physical
display is driven by multiple overlapping projectors, multiple
independent graphics accelerators and multiple processors.
- The interactive room (located
in Gates B23) is a testbed for combining multiple devices in
an integrated environment.
The room is equipped with two large wall-based displays.
An 18' by 4.5' rear-projection, blackboard-like display
with touch-sensitive SmartBoards occupies the side of the room,
and a presentation screen occupies the front of the room.
The center of the room is occupied by a bottom-projected,
conference room table display.
A small PC cluster consisting of roughly 12-14 machines (soon to be
enlarged with another 32-node cluster) hosts the various input and output
devices and provides other services in the room.
Supporting these physical environments are a number of software
infrastructure components:
- Event Heap System: Events in the workspace are
communicated from one device and process to another using a
simple communication mechanism based on a tuplespace.
The advantage of this approach, which we call an Event Heap, is that it
allows dynamic reconfiguration and
loose coupling (producers and consumers of events do not need
to know anything about each other except the format of appropriate
events in the heap). The current Event Heap is built upon the
T-Spaces system,
which provides a shared blackboard, or tuplespace, for posting and
receiving events.
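The loose coupling described above can be sketched in a few lines. The following is a minimal in-memory tuplespace, not the actual Event Heap API (which is built on T-Spaces); the event fields and method names are illustrative. Producers post events as dictionaries; consumers retrieve the first event matching a template, knowing nothing about the producer except the event format.

```python
from threading import Condition

class EventHeap:
    """A minimal in-memory tuplespace sketch (hypothetical API; the real
    Event Heap is built on T-Spaces). Producers post events as dicts;
    consumers take the first event matching a template."""

    def __init__(self):
        self._events = []
        self._cond = Condition()

    def put(self, event):
        # Post an event and wake any consumer blocked in take().
        with self._cond:
            self._events.append(dict(event))
            self._cond.notify_all()

    def take(self, template, timeout=None):
        # Block until some event matches every field of the template,
        # then remove and return it (None on timeout).
        with self._cond:
            while True:
                for i, ev in enumerate(self._events):
                    if all(ev.get(k) == v for k, v in template.items()):
                        return self._events.pop(i)
                if not self._cond.wait(timeout):
                    return None

# Producer and consumer agree only on the event format:
heap = EventHeap()
heap.put({"type": "ButtonPress", "device": "SmartBoard", "x": 120, "y": 45})
ev = heap.take({"type": "ButtonPress"}, timeout=1.0)
```

Because the consumer matches on a template rather than a sender, devices can be added, removed, or replaced without reconfiguring their peers.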
- Mural Graphics and Toolkits: The Mural server provides a virtual
display that supports OpenGL graphics
over a tiled collection of actual displays.
OpenGL programs can run on the Mural without modification
via the use of a drop-in DLL.
We are also developing toolkits that will
provide higher-level support for interactive visualization on
large displays.
There are currently two low-level toolkits for programmers
(Millefeuille and the Visual Object Toolkit).
In order to allow domain-oriented users to write simple applications
without getting into the depths of the toolkit programming interfaces,
we are developing a high-level XML-based scripting language for display
and interaction called
X2D.
You can think of it as a modern descendant of
HyperCard or Director/Flash - providing objects, layout facilities,
and a simple scripting language.
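The core of the virtual display abstraction is routing each drawing operation to the projectors whose tiles it touches. The sketch below illustrates that mapping under assumed geometry (a 4x2 projector array with a fixed overlap band); the tile sizes and layout are illustrative, not the Mural's real configuration.

```python
def tiles_for_rect(x, y, w, h, tile_w=1024, tile_h=768,
                   cols=4, rows=2, overlap=64):
    """Return the (col, row) tiles that an axis-aligned rectangle in
    virtual display coordinates touches, for a cols x rows projector
    array whose tiles overlap by `overlap` pixels (illustrative
    numbers, not the Mural's actual geometry)."""
    step_x = tile_w - overlap   # horizontal stride between tile origins
    step_y = tile_h - overlap
    hit = []
    for row in range(rows):
        for col in range(cols):
            ox, oy = col * step_x, row * step_y  # tile origin in virtual coords
            # Rectangle/tile intersection test.
            if (x < ox + tile_w and x + w > ox and
                    y < oy + tile_h and y + h > oy):
                hit.append((col, row))
    return hit

# A rectangle in the overlap band is rendered by both adjacent projectors:
print(tiles_for_rect(950, 100, 50, 50))  # -> [(0, 0), (1, 0)]
```

The overlap means some primitives are rendered twice, once per adjacent projector, which is what lets the tiled array present itself to the programmer as one seamless display.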
- XML-based Workspace Data Store or Memory: The interactive workspace
is a context for ongoing project work, which involves large dynamic
information collections, including documents, images, 3-D models,
application-specific domain models, etc. As work goes on, these
information components are imported (from the network or on devices
such as laptops brought into the workspace), created, modified,
shared, displayed, etc. They are linked into meaningful collections,
such as those associated with a project, those that are active
in the workspace at the end of a session (to be restored later),
those under the control of a particular individual, etc. In a
conventional system, the workstation operating system maintains
the appropriate data. In a multi-device multi-user system, this
coordinating function is a separate element in the workspace
configuration. We have chosen to build it using a semi-structured
database system, LORE,
which uses XML structures as a basis. The data store will not
contain all of the data (e.g., the images, 3D models, etc.),
but is more like a catalog for keeping track of what is relevant
to the workspace, with pointers (URLs) to outside resources as
appropriate.
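The catalog idea can be made concrete with a small XML sketch. The element and attribute names below are illustrative, not the actual LORE schema: the store holds metadata and pointers (URLs) to outside resources, and restoring a session amounts to querying it.

```python
import xml.etree.ElementTree as ET

# A minimal sketch of a workspace catalog: the store keeps metadata and
# URLs, not the resources themselves. Element and attribute names are
# hypothetical, not the actual LORE schema.
catalog = ET.Element("workspace", name="construction-review")
item = ET.SubElement(catalog, "item", type="3d-model",
                     owner="fischer", project="terminal-extension")
ET.SubElement(item, "url").text = "http://example.stanford.edu/models/site.wrl"
ET.SubElement(item, "title").text = "Site model, revision 7"

# Restoring a session amounts to querying the catalog, e.g. for all of
# one project's items, then fetching each URL:
urls = [it.find("url").text
        for it in catalog.findall("item")
        if it.get("project") == "terminal-extension"]
```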
- XML Transformations and Path Servers:
In order to combine and build
applications easily, we use XML as a standard data format for
information interchange among components. Many people are working
in other projects to develop XML-based standards for information
such as graphics (SVG), molecular structures,
buildings and CAD drawings, etc. We will use a general XML-to-XML conversion
engine (possibly Ricoh's PIA),
that is available for any component to use.
We are currently developing a method for automatically assembling
a path of transformations from XML data sources to the workspace
display servers.
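Assembling a path of transformations is, at its simplest, a graph search over the available converters. The sketch below uses breadth-first search so the shortest chain is found first; the converter list and format names are illustrative assumptions, not the project's actual registry.

```python
from collections import deque

def find_path(converters, source, target):
    """Assemble a chain of XML-to-XML transformations from a source
    format to a target format by breadth-first search over available
    converters, given as (from_format, to_format) pairs."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path          # shortest chain of formats
        for src, dst in converters:
            if src == path[-1] and dst not in seen:
                seen.add(dst)
                queue.append(path + [dst])
    return None                  # no conversion chain exists

# Hypothetical converter registry (format names are illustrative):
converters = [("cad-xml", "svg"), ("svg", "mural-display"),
              ("molecule-xml", "svg")]
print(find_path(converters, "cad-xml", "mural-display"))
# -> ['cad-xml', 'svg', 'mural-display']
```

Applying the converters along the returned path, in order, turns the source document into something the display server can render.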
- Overface: We are developing a set of interactive tools for
making it easy to use legacy applications within the workspace, as well
as to support novel interaction methods. The major motivation for the
overface is that interactive applications based on desktop GUIs are
hard to use on small or large-format displays with different types of
input devices. At one level we are developing
virtual application controllers that allow legacy applications to
be controlled in new ways. With a virtual application controller, part of an
existing interface can be exported to a remote device such as a PDA,
or adapted to a novel interface such as a gesture-based interface.
The overface also provides a unified set of commands for all the different
types of devices in the workspace.
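One way to picture a virtual application controller is as a registry of device-independent commands wrapping a legacy application's operations. The class and command names below are hypothetical, a sketch of the idea rather than the overface's actual interface.

```python
# A sketch of a virtual application controller: the legacy application's
# operations are exposed under device-independent names, so any front
# end (a PDA panel, a gesture recognizer) can drive the application
# without touching its desktop GUI. Names here are hypothetical.

class VirtualController:
    def __init__(self):
        self._commands = {}

    def register(self, name, action):
        # Expose one legacy operation under a device-independent name.
        self._commands[name] = action

    def commands(self):
        # A remote device queries this list to build its own control panel.
        return sorted(self._commands)

    def invoke(self, name, **args):
        return self._commands[name](**args)

# Wrapping a hypothetical slide-presentation application:
state = {"slide": 1}
ctrl = VirtualController()
ctrl.register("next-slide", lambda: state.update(slide=state["slide"] + 1))
ctrl.register("goto-slide", lambda n: state.update(slide=n))

ctrl.invoke("next-slide")       # e.g. from a PDA button
ctrl.invoke("goto-slide", n=5)  # e.g. from a handwriting recognizer
```

Because every front end goes through the same command registry, the workspace can offer a unified command set across very different input devices.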
Application Projects
The technologies for interactive workspaces are being developed
in conjunction with a number of application projects that have
high potential to take advantage of the capabilities we are developing.
In these projects we are working with research groups from around
the University.
- Ribosome structure research
(with Russ Altman, Medical Informatics). This project integrates
visualizations of 2-dimensional secondary structure diagrams, sequence
information, 3-D models, and research data for scientists working
on determining the structure of the E. coli ribosome.
- Construction project
management (with Martin Fischer, Civil Engineering) This
project facilitates the design and management of work on complex
projects, using the workspace for interactive negotiation and
modification of plans, 3D models, 4D models, and corollary construction
information.
- Interactive learning
(with Stanford Learning Laboratory) We are exploring ways
to use workspaces to enhance teaching in a variety of subjects
that are being explored by the learning laboratory.
- 3D
Medical Imaging (with Sandy
Napel, Department of Radiology) This project is exploring
the integration of high-density scan images with 3-D volumetric
rendering and analysis, to enhance the ability of radiologists
to assess patients and communicate results to other physicians.
- Computer systems visualization
(with Mendel Rosenblum, Computer Science) This project is
developing new uses of visualization to aid in the design and
debugging of complex computer and network systems.
Major Research Thrusts
- A scalable, distributed display architecture that can
provide a single virtual display abstraction to the programmer,
even though the physical display is driven by multiple overlapping
projectors, multiple independent graphics accelerators and multiple
processors. Another goal of this research is to flexibly allocate
rendering services to different displays.
- New architectures for the integration
of multiple people and devices in interactive spaces. This
includes the Event Heap model for integrating events from multiple
devices in a way that allows dynamic configuration and is robust
in the face of equipment failure, removal, addition and reconfiguration.
A key research area is the way to manage the tradeoffs between
flexibility and efficiency, especially for events that require
very short latency in order to provide effective interaction.
We need to support a range of capacities (e.g., high bandwidth
for image transfer, low latency for real-time control devices).
We also are experimenting with ways of using higher-level standards,
such as XML for managing information flow across devices and
applications.
- Interaction styles and associated toolkits that are
appropriate for large displays, multiple devices, and multiple
users. These include the direct interaction affordances (e.g.,
standard menu bars are not applicable to a large multi-screen
display) and the framework for maintaining and using contexts
(e.g., user-specific parameters to be applied when the user is
working across devices). A key goal is designing mechanisms to
facilitate collaborative work by people working together in the
space.
- A generalized interaction architecture that is based
on a user-centered interaction
model. As the workspace develops in the future, we will explore
more natural kinds of interaction, in which perception devices
(such as cameras and microphones) are able to interpret the actions
of people in the workspace as part of the control paradigm. This
will require multimodal integration, in which more than one modality
or device is used in carrying out a single activity from the
user's point of view.
Faculty
Consultants and staff working on the project
Susan Shepard,
Maureen Stone
Students working on the project
Henry Berg,
Ian Buck,
Francois Guimbretiere, Emre Kiciman, Manali Holankar,
Greg Humphreys,
Brad Johanson,
Brian Lee, Kathleen Liston, Shankar Ponnekanti,
Richard Salvador, Caesar Sengupta, Rito Trevino, Allison Waugh
Support
The Interactive Workspaces project has been supported by a variety
of grants and equipment donations.
The scalable graphics and visualization research and the construction of
the Interactive Room have been supported by the Department of Energy
under the Data and Visualization Corridors program.
The HCI research has been supported by
Interval Research,
IBM
and
Philips.
Equipment and software donations have been provided by
EFI (E-beam trackers),
IBM (server machines),
InFocus (projectors),
Intel (server machines),
Microsoft (PDAs),
and
ParaGraph
(handwriting recognition software).
Publications
- ICrafter: A Service Framework for Ubiquitous Computing Environments. Shankar R. Ponnekanti, Brian Lee, Armando Fox, Pat Hanrahan, and Terry Winograd, UBICOMP 2001, Atlanta, Georgia.
- Multibrowsing: Moving Web Content across Multiple Displays. Brad Johanson, Shankar R. Ponnekanti, Caesar Sengupta, and Armando Fox, UBICOMP 2001, Atlanta, Georgia.
- Integrating Information Appliances into an Interactive Workspace, Armando Fox, Brad Johanson, Pat Hanrahan, and Terry Winograd, IEEE CG&A, May/June 2000
- A Human-Centered Interaction Architecture,
Terry Winograd, unpublished draft.
- Visual Instruments for an Interactive Mural,
Terry Winograd and Francois Guimbretiere, CHI99
- A Distributed
Graphics System for Large Tiled Displays, Greg Humphreys and Pat Hanrahan
Last modified: June 1, 2000 by Shepard