1) Introduction and overview

1.1) The evolution of human-computer interfaces

1.2) What is a virtual environment?

1.3) Definition of navigation

1.4) Definition of interaction

1.5) Structure of virtual environments

1.6) Overview of the "Mapper"

1.6.1) Layers

1.6.2) Mapping strategy

1.6.3) Related work

1.7) Navigation and interaction in virtual worlds - an overview


2) Input (and output) devices

2.1) Commonly used input devices

2.1.1) Keyboard

2.1.2) Mouse

2.1.3) Trackball

2.1.4) Joystick

2.1.5) SpaceBall

2.1.6) Tracker

2.1.7) Data Glove

2.2) Usability in virtual environments

2.3) Classifying the input devices

2.3.1) Formal classification

2.3.2) Semantic classification


3) Device layer

3.1) Mapping of the device classes


4) Interaction layer

4.1) Data types in generic human-computer interaction

4.2) Data classes of the interaction layer

4.3) Mapping strategies

4.3.1) Overview

4.3.2) Mapping an interaction class

4.3.3) Basic concept of the "core" module

4.3.4) The core module

4.3.5) Device components

4.3.6) Request atoms

4.3.7) The Device Preference (DP) lists

4.3.8) The Device Component Preference (DCP) lists

4.4) Processing the interaction data classes

4.4.1) Fixed value components

4.4.2) Variable value components


5) Navigation layer

5.1) Basics of computer navigation in virtual environments

5.2) The Avatar

5.3) Presence = Avatar + Camera

5.4) Design considerations

5.4.1) Structure of the avatar

5.4.2) Physical setup

5.4.3) Levels of presence (point of presence)

5.4.4) Dimension

5.4.5) Velocity

5.5) Navigation data classes

5.6) Mapping of the navigation classes

5.6.1) Switching to vehicle class


6) Metaphor layer

6.1) Mapping and processing the metaphor classes

6.2) Extending the metaphor layer

6.3) Why should we use metaphors?


7) Implementation issues

7.1) Connecting input devices

7.2) Connecting applications

7.3) Requesting data classes from the Mapper

7.3.1) Defining properties common to all requests

7.3.2) Grouping requests together

7.3.3) The request identifier

7.3.4) Commands

7.3.5) Reports

7.3.6) Mapping results

7.3.7) The avatar configuration data

7.4) Messages

7.5) The configuration file

7.6) Processing the requests

7.6.1) Processing device requests

7.6.2) Processing interaction requests

7.6.3) Processing navigation requests

7.6.4) Processing metaphor requests

7.7) Hardware platform

7.8) Future possibilities


8) Evaluation

8.1) Experiment 1 - Control of one or more vehicles

8.1.1) Using interaction classes

8.1.2) Using navigation classes

8.2) Experiment 2 - Navigation with different device configurations

8.3) Concluding remarks



Appendix A - Virtual environment applications

A.1) "Virtual environment" vs. "virtual reality"

A.2) Examples of virtual environment applications

A.2.1) Training using simulators

A.2.2) Architectural visualization

A.2.3) Scientific visualization

A.2.4) Teleoperation

A.2.5) Medicine

A.2.6) Teleconferencing

A.2.7) Entertainment


Appendix B - Remodeling the real world

B.1) Virtual worlds: remodeling the real world

B.1.1) Basic components of the real world

B.1.2) How is information transmitted in the real world?

B.1.3) The human senses

B.1.4) The "inhabitants" of virtual worlds


Appendix C - Navigating and interacting in the real world

C.1) Navigating and interacting in the real world

C.1.1) Generic interaction: selection and manipulation

C.1.2) Navigation in the real world

C.1.2.1) Which devices does a human use for movement?

C.1.2.2) Senses used in navigation

C.1.2.3) Cues used in navigation


Appendix D - Generic interaction: selection and manipulation

D.1) Generic interaction: selection

D.1.1) "Visual" vs. "direct" selection

D.1.2) "Near" and "far" selection

D.2) Generic interaction: manipulation

D.3) Navigation vs. selection vs. manipulation


Appendix E - The input channel

E.1) The input channel

E.1.1) Structure of the input channel

E.2) A more detailed list of input devices

E.2.1) The keyboard again

E.2.2) The Footmouse

E.2.3) Graphic tablets

E.2.4) Touchscreen devices

E.2.5) Light pens

E.2.6) Voice input systems

E.2.7) Video cameras

E.2.8) Eyetracker

E.2.9) Exotic input devices


Appendix F - The output channel

F.1) The output channel

F.1.1) Purpose of the output devices

F.1.2) Output device overview

F.1.2.1) Visual output devices

F.1.2.2) Auditory output devices

F.1.2.3) Tactile and force feedback devices

F.1.2.4) Motion output devices

F.1.2.5) Olfactory and taste devices


Appendix G - Theoretical categorization of input devices

G.1) Theoretical categorizations of input devices

G.1.1) Foley et al.

G.1.2) Buxton

G.1.3) Mackinlay et al.

G.1.4) Theoretical approach developed for the Mapper

G.1.5) Usability of input and output devices


Appendix H - Levels of immersion in virtual environments


Appendix I - Goal navigation


Appendix J - The use of modifiers

J.1) The use of modifiers

J.2) The "share" flag


Appendix K - Internal structure of the Mapper

K.1) Internal structure of the Mapper

K.2) The main loop


Appendix L - Development of the report structures


Appendix M - Single, polling or continuous mode


Appendix N - Input devices supported by the Mapper

N.1) Input devices supported by the Mapper

N.1.1) Keyboard

N.1.2) Mouse

N.1.3) Trackball

N.1.4) Analog joystick

N.1.5) Digital joystick

N.1.6) Tracker

N.2) Integration of new device types


Appendix O - Breadth-first search vs. depth-first search


Appendix P - Internal representation


Appendix Q - Navigation models - implementation details

Q.1) Navigation models - implementation details

Q.2) Physical tracker setup

Q.2.1) Walking human

Q.2.2) Flying human

Q.2.3) Vehicle