USF Libraries
USF Digital Collections

Direct 3d interaction using a 2d locator device


Material Information

Title:
Direct 3d interaction using a 2d locator device
Physical Description:
Book
Language:
English
Creator:
Ansari, Anees
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords:
mouse
direct3d
input
graphics
computer
Dissertations, Academic -- Computer Science -- Masters -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: Traditionally, direct 3D interaction has been limited to true 3D devices, whereas 2D devices have been used only to achieve indirect 3D interaction. To date, little research has attempted to extend the use of the mouse to direct 3D interaction. In this research we explore the issues involved in using the mouse to accommodate the additional degrees of freedom required for 3D interaction. We put forth a novel design to achieve this objective and show that even a device as simple as the mouse can be highly effective for 3D interaction when supported by an appropriate underlying design. We also discuss in detail a software prototype, "Direct3D", that we have developed based on our design, and we hope thereby to take a step towards making direct 3D interaction easy, inexpensive, and available to all computer users.
Thesis:
Thesis (M.S.C.S.)--University of South Florida, 2003.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Anees Ansari.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 102 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001416908
oclc - 52758066
notis - AJJ4760
usfldc doi - E14-SFE0000046
usfldc handle - e14.46
System ID:
SFS0024742:00001




Full Text

PAGE 1

Direct 3D Interaction Using A 2D Locator Device

by

Anees Ansari

A thesis submitted in partial fulfillment
of the requirements for the degree of
Master of Science in Computer Science
Department of Computer Science and Engineering
College of Engineering
University of South Florida

Major Professor: Les Piegl, Ph.D.
Murali Varanasi, Ph.D.
Nagarajan Ranganathan, Ph.D.

Date of Approval: July 1, 2003

Keywords: computer, graphics, input, mouse, direct3d

Copyright 2003, Anees Ansari

PAGE 2

DEDICATION To Mom, Dad, Trishna, Nafees, Nani and Nana.

PAGE 3

ACKNOWLEDGEMENTS

I would like to thank Dr. Piegl, my Major Professor, for his invaluable help and support during these two years at USF. He has been a big source of encouragement and has always unfailingly managed to inspire me to accomplish even the most difficult of tasks. Without his guidance and his consistent support this work would not have been possible. I want to thank Dr. Varanasi and Dr. Ranga for all the support they have extended to me, both as committee members and as faculty members at USF. They are two of the most wonderful people I have ever come across. I want to thank Dr. Perez for being an excellent graduate program director and for helping me immensely in every way he could. I wish to thank Daniel Prieto for helping me all along these years, as my boss and as a friend. I owe him a lot for giving me an opportunity and helping me learn. And finally, I wish to thank my parents, my relatives and friends, without whose love, prayers and support I would not be what I am today.

PAGE 4

TABLE OF CONTENTS

LIST OF FIGURES    iii

ABSTRACT    viii

CHAPTER 1 INTRODUCTION    1
    1.1 Motivation    1
    1.2 Detailed problem statement    2
    1.3 Contribution of thesis    3
    1.4 Outline of thesis    3

CHAPTER 2 PRIOR WORK    5
    2.1 Past research in 3D interaction using 6DOF input devices    7
    2.2 Past research in 3D interaction using 4DOF input devices    9

CHAPTER 3 INTERACTION DEVICES    14
    3.1 Characteristics    14
    3.2 Devices    17

CHAPTER 4 3D INTERACTION    23
    4.1 Direct and indirect 3D interaction    23
    4.2 Disadvantages of indirect interaction    23
    4.3 Advantages of direct interaction    24
    4.4 Disadvantages of true 3D devices    25
    4.5 Advantages of the mouse and extending its use to 3D interaction    26

CHAPTER 5 CHALLENGES OF 3D INPUT USING THE MOUSE    29

CHAPTER 6 DIRECT 3D INTERACTION WITH A 2D LOCATOR DEVICE    33
    6.1 Workspace and viewpoint    33
    6.2 Input mapping    36
    6.3 Transformations and operations    38
    6.4 Feedback    46
    6.5 Development system specifications    56

CHAPTER 7 APPLICATIONS    57

PAGE 5

CHAPTER 8 EXAMPLE SESSION    63
    8.1 Modeling the nose    63
    8.2 Modeling the front fuselage    67
    8.3 Modeling the middle fuselage    70
    8.4 Modeling the back fuselage    72
    8.5 Modeling the left wing    74
    8.6 Modeling the right wing    76
    8.7 Modeling the tail    78
    8.8 Modeling the left horizontal stabilizer    80
    8.9 Modeling the right horizontal stabilizer    82

CHAPTER 9 CONCLUSIONS AND FUTURE RESEARCH    84

REFERENCES    86

APPENDICES    92
    APPENDIX A MAPPING ALGORITHM    93

PAGE 6

LIST OF FIGURES

Figure 1. A 3D ball device.    17
Figure 2. A tracker device.    17
Figure 3. A 3D mouse device.    18
Figure 4. A Rockin' Mouse.    18
Figure 5. A head mounted display.    19
Figure 6. A 3D glove device.    19
Figure 7. A bat device.    20
Figure 8. An elastic device.    21
Figure 9. A multi-DOF armature device.    21
Figure 10. A screenshot of Direct3D when it has just been started, showing (a) the Tab view; (b) the Direct3D view.    35
Figure 11. A screenshot of Direct3D showing (a) the original line plotted; (b) the original line translated 10 units along the X axis; (c) the original line translated +20 units along the X axis and then rotated about the X axis; (d) a line strip plotted.    40
Figure 12. A screenshot of Direct3D showing (a) a pentagon plotted; (b) the original pentagon translated along the X axis and then scaled along the Y axis; (c) a hexagon plotted; (d) the original hexagon translated along the X and Y axes and then rotated about the Z axis.    42
Figure 13. A screenshot of Direct3D showing (a) an edge-lift polyhedron, i.e. a parallelepiped, plotted; (b) a center-lift polyhedron, i.e. a pyramid, plotted and then rotated about the Y axis.    45
Figure 14. The button images used in Direct3D. (a) The button images displayed when the Point tab is selected; (b) the button images displayed when the Line tab is selected; (c) the button images displayed when the Polygon tab is selected; (d) the button images displayed when the Polyhedron tab is selected.    46

PAGE 7

Figure 15. The tab images used in Direct3D.    47
Figure 16. The cursors used in Direct3D. (a) The cursor displayed when the movement is along the X axis; (b) the cursor displayed when the movement is along the Y axis; (c) the cursor displayed when the movement is along the Z axis; (d) the cursor displayed in the Point deletion mode.    47
Figure 17. A screenshot of Direct3D to illustrate various feedback techniques. (a) The selected button; (b) the tabs; (c) the radio buttons; (d) the status bar displaying the current state of the system and the help message corresponding to the current state of the system; (e) a part of the 3D environment showing a selected polyhedron being translated along the Y axis.    49
Figure 18. Magnified illustration of part (a) of Figure 17 showing the button-pushed effect.    50
Figure 19. Magnified illustration of part (b) of Figure 17 showing the tab-raised effect for the Polyhedron tab.    50
Figure 20. Magnified illustration of part (c) of Figure 17 showing the radio buttons.    51
Figure 21. Magnified illustration of part (d) of Figure 17 showing the status bar displaying the current state of the system and a help message corresponding to the current state of the system.    51
Figure 22. Magnified illustration of part (e) of Figure 17 showing (a) the legs (three lines, white in color, converging at the orange point); (b) the projections (six lines, black in color, drawn on the XY, YZ and ZX planes); (c) the helper line for movement along the Y axis (the vertical black colored line); (d) selection highlighting, i.e. the selected polyhedron (magenta colored vertices); (e) the dynamic coordinate display.    54
Figure 23. A screenshot of Direct3D showing a pop-up message displayed when the user has just deleted the last point in the workspace.    55
Figure 24. A screenshot of 3D Home Architect.    57
Figure 25. A screenshot of mechanical CAD software.    58
Figure 26. An image of a person using simulation training software.    59
Figure 27. A screenshot of medical visualization software.    59
Figure 28. A screenshot of animation software.    60

PAGE 8

Figure 29. A screenshot of geometric modeling software.    60
Figure 30. A 3D sketch.    61
Figure 31. A screenshot of molecular modeling software.    62
Figure 32. A screenshot of Direct3D illustrating the scene after the nose of the airplane has been added.    66
Figure 33. A screenshot of Direct3D illustrating the scene after the front fuselage of the airplane has been added.    69
Figure 34. A screenshot of Direct3D illustrating the scene after the middle fuselage of the airplane has been added.    71
Figure 35. A screenshot of Direct3D illustrating the scene after the back fuselage of the airplane has been added.    73
Figure 36. A screenshot of Direct3D illustrating the scene after the left wing of the airplane has been added.    75
Figure 37. A screenshot of Direct3D illustrating the scene after the right wing of the airplane has been added.    77
Figure 38. A screenshot of Direct3D illustrating the scene after the tail of the airplane has been added.    79
Figure 39. A screenshot of Direct3D illustrating the scene after the left stabilizer of the airplane has been added.    81
Figure 40. A screenshot of Direct3D illustrating the scene after the right stabilizer has been added and the finished model of the airplane.    83

PAGE 9

Direct 3D Interaction Using A 2D Locator Device

Anees Ansari

ABSTRACT

Traditionally, direct 3D interaction has been limited to true 3D devices, whereas 2D devices have been used only to achieve indirect 3D interaction. To date, little research has attempted to extend the use of the mouse to direct 3D interaction. In this research we explore the issues involved in using the mouse to accommodate the additional degrees of freedom required for 3D interaction. We put forth a novel design to achieve this objective and show that even a device as simple as the mouse can be highly effective for 3D interaction when supported by an appropriate underlying design. We also discuss in detail a software prototype, Direct3D, that we have developed based on our design, and we hope thereby to take a step towards making direct 3D interaction easy, inexpensive, and available to all computer users.

PAGE 10

CHAPTER 1
INTRODUCTION

With the growing speed and dropping prices of computers, 3D graphics are becoming more and more prevalent in various areas. Research and educational tools and software, computer aided design, games, movies, animations, etc. all use 3D graphics. With the gaining popularity of 3D graphics there is a strong and noticeable need for 3D interaction.

1.1 Motivation

Interaction with computers has evolved immensely over the years. Currently, 2D/2.5D graphics such as buttons, menus, dialog boxes, etc., combined with 2D interaction techniques, dominate the market. The most common and popular devices used for 2D/2.5D interaction are the mouse and the keyboard. 3D graphics portrayed using 2D displays have also gained immense popularity from entertainment, commercial and research viewpoints and show a lot of potential to succeed 2D/2.5D graphics. Unfortunately, 3D interaction using 2D displays has not grown as fast or as well, and to date involves many difficulties that overshadow the numerous advantages it has to offer. Most of the research done to date in the area of 3D interaction focuses either on achieving direct 3D interaction using specialized true 3D devices or on attaining indirect 3D interaction using 2D devices. With this research we want to draw attention to an area that has been widely neglected in the past, namely achieving direct 3D interaction using 2D locator devices.

PAGE 11

We attempt to arouse interest in this new, promising approach to 3D interaction by putting forth a novel design that demonstrates the practicality and the advantages of the approach. Our design will also inherently promote and facilitate the use of 2D locator devices for direct 3D interaction. These devices are inexpensive, widely available and popular, and will thus avoid the need to buy and use expensive 3D devices.

1.2 Detailed problem statement

Many tools, such as the ones used for CAD (computer aided design) and CAM (computer aided manufacturing), as well as the ones used in numerous research areas such as geometric modeling, rely heavily on 3D interaction. However, true 3D displays cannot yet compete with the high quality and low cost combination offered by 2D displays. Hence 3D graphics and 3D interaction still depend on 2D displays. 2D devices such as the mouse and keyboard remain the most common and preferred devices, yet 3D interaction achieved using these has always been designed to achieve precision by limiting the user to indirect interaction. On the other hand, 3D interaction achieved using true 3D devices such as gloves and trackers has always been focused on achieving direct 3D interaction at the cost of precision. In this research we want to achieve a win-win situation between these two opposite ends. We want to achieve direct 3D interaction using a 2D locator device. We want to allow users to directly interact with the 3D environment without having to compromise on precision. One additional issue of concern is that all direct 3D interaction today relies heavily on specialized 3D devices. Not only are these devices expensive and difficult to obtain, but they are also difficult to use and learn. The high cost of these devices limits their usage to well-established research and commercial organizations. With the help of this research we aim to get rid of the cost factor involved in experiencing and/or using direct 3D interaction, and we want to make it available to everyone at no extra cost and without the need to buy any specialized devices.

PAGE 12

1.3 Contribution of thesis

We propose a novel approach to achieve direct 3D interaction using a commonly available 2D locator device, namely the mouse. We can safely assume that everyone who owns a computer has a mouse and is familiar and confident using one. Thus, using our approach, the user will neither have to incur additional costs to buy a special device nor have to spend time familiarizing himself/herself with the device. We have formulated a design that shifts the burden from the user to the software. The only area where the user will have to spend a little time is familiarizing with the mappings and the corresponding mouse movements. Finally, we have implemented a generic software prototype with which users can actually get a feel for what our design aims to achieve. The software is also aimed at building a strong base, highlighting the practicality of the design, and promoting future research on achieving direct 3D interaction using 2D locator devices.

1.4 Outline of thesis

The initial part of our research gives a brief overview of the work done in the field of 3D interaction. Here we discuss the various approaches that have been employed in the past to achieve both direct and indirect 3D interaction. We also review the various techniques that have been employed to make 3D interaction easier for the end user. We then shift our focus to the various interaction devices and their characteristics, advantages and disadvantages. This gives us an insight into how the different characteristics of the devices gain and lose importance depending on the application they are being used for and the environment they are being used in. This is followed by a detailed overview of 3D interaction. Here we talk about direct and indirect 3D interaction and the advantages and disadvantages of each. We then justify our intent to combine both approaches into a unique style of 3D interaction using 2D locator devices and describe the various advantages it has to offer.

PAGE 13

Before formulating a methodology to achieve our desired goal of 3D interaction using the mouse, we enumerate the various challenges, such as mapping of the device, visualization of the workspace, etc., that we had to overcome. Next we explain the design we have come up with to achieve direct 3D interaction using the mouse and briefly discuss the software implementation of the same. Towards the end we mention some of the applications where such a design might prove useful, followed by some possible example sessions. Finally, we draw inferences from our research work and suggest possible areas and enhancements for future research on the topic.

PAGE 14

CHAPTER 2
PRIOR WORK

3D interaction has always been a challenging topic for computer graphics. However, due to its immense importance in areas such as CAD, 3D geometric modeling, etc., it remains one of the most popular areas for research. Current advancements in technology have enabled graphic output far better than could be imagined ten years ago. Graphic input, however, which is of equal importance, has lagged far behind. To improve the quality of input in graphic interfaces, Buxton [1] suggests that we should not look at the input device as an independent entity. The device should be examined in a more global and holistic view as a part of the system. He also suggests that we should examine input devices as closely as possible and at various levels of detail to uncover all of their characteristics and then use the ones which will be of advantage to us. An additional advantage of such analysis is that, most of the time, problems which arise at a particular level of detail are solved or eliminated by making minor adjustments or changes at a different level. A lot of research has been done in the past on ways to achieve 3D interaction. Some use true 3D devices, which provide six degrees of freedom (DOF), to achieve direct interaction, and some use 2D devices, which provide four DOF, to achieve indirect interaction. Hand [2] in his paper has surveyed the techniques used to perform 3D object manipulation and navigation. According to him, most 3D interaction application programs have three common domains:

PAGE 15

1. Object manipulation.
2. Viewpoint manipulation.
3. Application control.

He also states that the research in this area can be divided into two generic phases:

1. Evolution of techniques based on the use of 2D devices such as the mouse.
2. New ideas generated when true 3D input devices came into the picture.

Nielson and Olsen [3] have used a technique called the "triad mouse". They have devised a technique to directly manipulate 3D objects using 2D locator devices. They use a three-pronged cursor which they call a "triad". Using a 2D device the user can manipulate the triad, making the system function like a 3D mouse. They use a framing cube to serve as a frame of reference to help the user estimate the actual cursor position. Hinckley, Tullio, Pausch et al. in their paper [4] advocate that we should focus on finding novel ways to achieve 3D interaction and not argue over the advantages and disadvantages of each approach, since each approach will excel for some applications and prove futile for others. A few important basic factors that are helpful when designing any interface to achieve 3D interaction are mentioned below [2][5]:

1. The environment should be as transparent as possible. The user should feel as though he is interacting with real-world objects and not with an interface.
2. The priority of the interface should be to allow the user to work using tasks which seem appropriate and natural, and not the ones which will simplify things for the computer.
3. The interface should be easy to learn and to use. The gap between the Gulf of Execution (knowing what to do) and the Gulf of Evaluation (knowing how to do it) should be bridged.
4. The interface should promote speed and accuracy.

PAGE 16

5. Tactile and/or force and/or kinesthetic feedback should be used if possible. Kinesthetic feedback enables the user to know the position of his/her limbs relative to the rest of the body, i.e. spatial awareness.
6. The user should be able to feel his/her presence in the virtual environment and should be able to gather information from the surroundings.
7. Viewpoint manipulation design should be given a lot of attention.
8. If possible, viewpoint manipulation should also provide feedback.
9. The user may be allowed to choose between two views:
   a. Egocentric: where the user is at the center of the space.
   b. Exocentric: where the user gets a feel of looking at the space from outside.
10. A virtual representation of the actual input device being used can be employed.

It is not necessary to incorporate all of the above factors in order to design a good interface. The designers, depending on their applications, can assign appropriate weights to each of the factors.

2.1 Past research in 3D interaction using 6DOF input devices

6DOF devices help make the interaction more natural and intuitive for the user [6]. Some of the significant research done in this area is discussed in this section. Boritz and Booth [7] have studied users' ability to locate a 3D point, using a 6DOF input device, in a computer simulated virtual environment. In their research users had to perform two tasks, namely:

1. Point location: moving a 3D pointer to a specific fixed point in the virtual 3D environment.
2. Interactive path tracing: following a path in the virtual 3D environment.

using four visual feedback modes:

1. Fixed viewpoint monoscopic perspective.
2. Fixed viewpoint stereoscopic perspective.
3. Head tracked monoscopic perspective.
4. Head tracked stereoscopic perspective.

PAGE 17

Liang and Green [6] have researched geometric modeling using a 6DOF input device. Most of their experiments have been performed using a hand-held Isotrak sensor, namely the Bat [8]. The position and orientation of the bat is monitored in real time and then displayed on the screen in the form of a Jack [9], a 3D cursor. Kaufman, Yagel and Bakalash [10] have implemented an interface for direct 3D interaction with objects and their visceral exploration. Their interface uses a 3SPACE Polhemus Isotrak and a VPL DataGlove with a corresponding 6D cursor, namely the Jack. The workspace view is a 3D rectilinear space in perspective view and the user movements are restricted to this frame. To help in generating, manipulating and viewing sampled and/or synthetic volumetric objects, a volume editor, "edvol", is provided as an integral part of the environment. A three-dimensional surface modeling program, 3DM, that uses a head mounted display to simplify 3D manipulation and understanding has been developed by Butterworth [11]. The program is based on techniques used in CAD and drawing programs and applies those techniques to modeling in a true 3D environment in an intuitive way. Some researchers have even proposed using both hands in combination with 6DOF devices to enhance productivity and naturalness. Cutler, Frohlich and Hanrahan [12] use a tabletop virtual reality device called the Responsive Workbench in combination with a system that allows users to manipulate virtual 3D models with both hands. They found the coordinated and asymmetric two-handed interactions interesting and have concluded that in such a system both hands perform distinct small subtasks in a synergistic way to accomplish a bigger complex task. In a similar research effort, Sachs, Roberts and Stoops [13] have studied direct 3D interaction using a pair of hand-held 6DOF devices. They deduce that the simultaneous use of two hands has a sort of built-in S-R feedback, since the users know

PAGE 18

the relative position of their hands. Also, such a design increases the speed and quality of the design.

2.2 Past research in 3D interaction using 4DOF input devices

Some of the best research in this area has been done by Branco, Costa and Ferreira [14]. In their paper they state that sketching in the conceptual phase allows one to explore high-level design decisions at low cost. However, the creation of 3D shapes using CAD tools is difficult and time consuming, and hence these tools are kept away from the conceptual phases of design. They propose a solution for this issue by providing to the designers a tool which is simpler and faster to use but is as powerful as a CAD system. The authors mention that most industrial products originated from pencil and paper, which are nothing but 2D input devices. Therefore the system they have designed is intended to work with 2D devices and aims to combine simplicity and intuition with the useful features of the modelers. They call it IDeS: Intuitive Design System. IDeS aims at providing as simple an interface as possible. With IDeS the user performs three tasks:

1. Drawing: The user draws as he would in a conventional drawing package, the only exception being that if a trivalent junction appears then the system employs perceptual analysis to store some information about it to use in 3D reconstruction.
2. Picking a modeling tool: The user picks the tool he wants to use, but the system decides if it has enough information to execute the command associated with the appropriate tool. If not, the execution is postponed till the user draws the information that is missing.
3. Explaining: The user has to provide some information about the drawing to the system, e.g. when the drawing is finished, or to convert a free-hand drawn line to a straight line segment, etc.

PAGE 19

When drawing in IDeS the objects must be drawn without hidden lines. When the drawing is finished the user must explicitly tell the system that it is a 3D representation. The system then calculates the fully visible, partially visible and hidden faces. When editing in IDeS the system employs "gluing". Using gluing, a straight line becomes a polyline after transformation and a closed polyline becomes a polygonal mesh. Boolean operations can further be performed using the polygonal mesh so obtained, i.e. a union or difference operation can be done on the object and the polygonal mesh to obtain complex objects. The drawing engine employed by IDeS performs four main tasks:

1. Drawing Graph Management: This module manages the graph that describes at each moment the 2D drawing.
2. Perceptual Analysis: This is used to manage all the accesses to the junction dictionary and to classify junctions depending on the angles between the intersecting lines.
3. 3D Reconstruction: This component warns the user if the drawing cannot be interpreted as a 3D model; otherwise it attempts to reconstruct the solid using an algorithm which has four basic steps:
   a. Virtual camera positioning.
   b. Gluing of the first junction.
   c. Visible part reconstruction.
   d. Hidden part reconstruction.
4. Drawing Events Generation: A module which accepts feedback from the above three modules and gives rise to drawing events depending on the feedback.

The authors conclude their paper by saying that though IDeS is still a prototype it has received a considerable amount of praise from architects and designers who tested it. Likewise, SKETCH, the interface developed by Zeleznik, Herndon and Hughes [15], allows users to rapidly conceptualize and edit approximate 3D scenes. It uses a

PAGE 20

simple non-photorealistic rendering mechanism and a purely gesture-based interface with predefined gestures that accepts simple line drawings as input. All the operations are performed in the 3D scene, i.e. a single orthographic view, with the help of a three-button mouse and occasional use of one modifier key on the keyboard. The user has to simply sketch the salient features of any of a variety of 3D primitives. SKETCH then uses four simple placement rules and draws the corresponding 3D primitive in the 3D scene. A few more important functionalities in SKETCH are mentioned below:

1. The camera can be manipulated using gestures.
2. An automatic grouping mechanism can be used to help apply aggregate transformations.
3. Since less semantic information is stored, the user may be required to explicitly sketch constraints also.

The authors conclude by agreeing that SKETCH is just in its early stages and a lot more study needs to be done in order to make it better and to help increase the range of its applications without compromising on simplicity. Shoemake [16][17] has devised an input technique called the Arcball to adjust the spatial orientation of an object in a 3D environment. Arcball uses the mouse as the input device and achieves a kinesthetic agreement between the mouse movement and object rotation by constant interpretation of the mouse motion and association with the corresponding mapping. It blends human factors and mathematical fundamentals well and provides consistency and rich feedback. However, the disadvantage is that Arcball cannot control translation and scaling, as is usually required in any 3D interaction software. Theoretically, with a single drag the user can rotate an object 360 degrees, around any axis, using the Arcball [4]. Practically, however, users find it complex to achieve such a rotation and instead compose a 360 degree rotation out of multiple small rotations.
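The full Arcball mapping is given by Shoemake in [16][17]; as a rough illustration of the idea only (a sketch with names of our own choosing, not code from the thesis prototype), a mouse position can be projected onto a unit sphere, and two successive projections can be combined into a rotation quaternion:

```python
import math

def sphere_point(x, y):
    """Project a normalized screen coordinate (x, y in [-1, 1]) onto the unit sphere."""
    d2 = x * x + y * y
    if d2 <= 1.0:
        # Inside the silhouette circle: lift the point onto the front hemisphere.
        return (x, y, math.sqrt(1.0 - d2))
    # Outside the circle: clamp to the silhouette (z = 0).
    d = math.sqrt(d2)
    return (x / d, y / d, 0.0)

def arcball_quaternion(p0, p1):
    """Quaternion (w, x, y, z) taking sphere point p0 to p1 along the great arc.

    Following Shoemake, w is the dot product and the vector part is the cross
    product, so a drag rotates by twice the angle between the two points; this
    is what makes closed loops of mouse motion produce closed loops of rotation.
    """
    w = p0[0] * p1[0] + p0[1] * p1[1] + p0[2] * p1[2]
    cx = p0[1] * p1[2] - p0[2] * p1[1]
    cy = p0[2] * p1[0] - p0[0] * p1[2]
    cz = p0[0] * p1[1] - p0[1] * p1[0]
    return (w, cx, cy, cz)
```

For example, a drag from the screen center to the right edge gives sphere_point(0, 0) = (0, 0, 1) and sphere_point(1, 0) = (1, 0, 0), whose quaternion (0, 0, 1, 0) is a 180 degree rotation about the Y axis, i.e. twice the 90 degree arc between the two points.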


Chen, Mountford and Sellen [18] describe a technique called the Virtual Sphere which is very similar to the Arcball. The Virtual Sphere is a mouse-driven 2D interface which can be thought of as a virtual trackball. The user clicks and drags on the object shown on screen, and the computer interprets these actions to rotate the object correspondingly. The third degree of freedom is provided by enclosing the object in a circle and detecting clicks and drags outside the circle; these cause rotation of the object about an axis perpendicular to the screen. Hinckley, Tullio and Pausch [4] compare the Arcball and the Virtual Sphere and conclude that the Arcball is more mathematically sound and avoids the hysteresis effect. Hysteresis is the effect of closed loops of mouse motion not producing corresponding closed loops of rotation; in simple words, reversing a sequence of drags will not return the object to its original position. Chen, Mountford and Sellen in their research [18] aim to achieve an optimum solution to direct manipulation and positioning of 3D objects in real time using 2D control devices. In their paper they note that various controllers are used to manipulate 3D objects dynamically and discuss four such controllers primarily used for rotation:
1. Sliders.
2. Overlapping sliders.
3. Continuous XY+Z.
4. Virtual Sphere.
All of these aim to be as simple as possible, both in appearance and in use, to enable the user to focus on the task rather than the interface. The authors also studied and compared the effectiveness of each controller in diverse situations. Their studies indicate that simple, single-axis rotations were performed faster using the sliders (both conventional and overlapping), while complex tasks were performed faster using the XY+Z and Virtual Sphere interfaces. They
conclude based on the studies that the Virtual Sphere was clearly superior in terms of speed when complex rotations were needed, and it was also reported by the subjects as the interface providing the most natural feel.
Zeleznik, Herndon, Robbins, et al. [19] have implemented a toolkit to construct 3D widgets. This toolkit makes 3D object generation faster and easier for non-technical users. Also, since the construction of 3D widgets is inherently geometric, the toolkit imparts a natural feel to their construction by employing direct manipulation of primitives to create the desired widget. It also gives the user the power to link two or more primitives, thus easing the construction of more complex 3D widgets.
Researchers [20] have even tried the simultaneous use of two 6DOF devices, one in each hand. They suggest that one-handed input is less natural and less efficient, and that two-handed input has the potential for implementing interfaces that are more natural and simpler, thus enhancing efficiency. Two-handed input can split a compound task into two possibly parallel tasks controlled by both hands [21].
3D navigation is required in numerous interactive graphics and virtual reality applications, and a lot of research focuses on the issue; one of the best papers is by Hanson and Wernert [22], in which they discuss 3D navigation using a mouse.
Thus we can see that a lot of research has been done in the field of 3D interaction, but hardly any of it focuses on making direct 3D interaction possible with 2D devices. This is what makes our work fresh and innovative.


CHAPTER 3
INTERACTION DEVICES
Over the years the devices that we use to interact with computers have changed drastically [23]. Input devices set, constrain and bring out numerous actions and responses from the user. To be able to design good 3D interaction software we must be aware of the devices that may be used with the software, and of their characteristics, advantages and disadvantages. In this chapter we first take a look at the characteristics of these devices, followed by a brief overview of a few important, uncommon devices.
3.1 Characteristics
The following characteristics are very important and influence the way the user will employ a device [4][23][24][25][26][27].
1. Affordance / Form factor: The device should inherently suggest to the user how it is supposed to be used. Users have often reported diametrically opposite impressions of devices that differed only in their physical housing.
2. Tactile cues: The device should have strong tactile cues which give the user the perception of the preferred way of holding it. In the absence of such cues the user may be unsure of the correct way of using the device.
3. Grasp: There are two types of grasp. A power grasp is when the device is held against the palm in a fixed orientation; the word power is used because the posture emphasizes strength and security of grip. The second type, known as a precision grasp, involves the pads and tips of the fingers; the word precision is used because it emphasizes dexterity and free tumbling of the device being used.
4. Device acquisition time / Time to grasp: This signifies the amount of time it takes to engage the device if the hands are currently being used for some other device or some other task.
5. Clutching: A property of relative positioning devices. It involves disengaging, adjusting and re-engaging the device to extend its field of control.
6. Resolution: The smallest incremental change in the device position that can be perceived.
7. Sampling rate: The number of times per second the position of the device is recorded. This factor is very important for real-time 3D environments because the devices used in such situations have to respond to natural, fast human movements.
8. Lag: The amount of time taken to update the display in response to predefined events. The most difficult part is detecting the source of the lag.
9. Control-display gain: The ratio of the motion of the device (control) to the corresponding movement of the cursor on the display. It represents a trade-off between rough positioning and fine positioning.
10. Input-output mappings: This determines which movements of the device cause corresponding on-screen movements of the cursor.
11. Gestures: Gestures are the most common interaction paradigm. The best way to use gestures is to map them directly to user intention.
12. Tactile and force feedback: The property of a device to provide a force in response to predefined movements/interactions.
13. Multi-modal input: The property of a device to merge two or more modes of input such as speech, touch, etc.
14. Pointing speed / Bandwidth: This property represents the speed of target selection using the device.
15. Pointing precision: This property represents the smallest target that can be easily selected using the device.
16. Time to learn: The amount of time it takes to learn the device's operation.
17. Desk footprint: The amount of physical space the device takes up on the desk.
18. Cost: The price of the device.
19. Fatigue: This corresponds to the tiring of the user when using the device.
20. Sticky/Free: Sticky devices have some sort of mechanism to prevent changes along other axes when one axis is used. Free devices have no such mechanism.
21. Orthogonal/Nested: Orthogonal devices have fixed frames of reference, whereas nested devices do not.
22. Rotation/Translation: Rotation devices operate by rotation, e.g. the trackball; translation devices operate by translation, e.g. the mouse.
23. Unbounded/Bounded: Devices that have no physical limits on the field of control are called unbounded devices, whereas those that do are called bounded devices.
24. Homogeneous/Distinguished position: Homogeneous devices cannot be set to a remembered physical position, whereas distinguished-position devices can.
25. Volatile/Non-volatile: Volatile devices cannot retain their physical position when released, but non-volatile devices can.
26. Inertial/Inertia-less: Inertial devices keep moving for a short distance and time when released. Inertia-less devices do not exhibit this tendency.
27. Held up/Body mounted: Held-up devices need to be held in the hand and cannot stand on their own. Body-mounted devices do not need external support; their body helps them stand on their own.
28. Sense: The property of a device to sense certain characteristics. There are three common types of sensing devices. Position-sensing devices sense their position and orientation. Motion-sensing devices sense the distance/angle they have moved from a particular position. Force-sensing devices sense the amount of force being applied to them.
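To make the control-display gain characteristic (item 9 above) and its rough/fine trade-off concrete, here is a small illustrative sketch. The gain values, threshold and two-level acceleration scheme are our own assumptions, not figures from the literature.

```python
def cursor_delta(device_delta_mm, gain):
    """Constant control-display gain: display motion = gain * device motion."""
    return gain * device_delta_mm

def accelerated_delta(device_delta_mm, slow_gain=1.0, fast_gain=3.0, threshold=5.0):
    """A two-level gain: small (fine) motions get a low gain for precision,
    large (rough) motions a high gain for speed -- one common way systems
    soften the rough-versus-fine positioning trade-off."""
    gain = fast_gain if abs(device_delta_mm) > threshold else slow_gain
    return gain * device_delta_mm
```

A fixed high gain makes coarse positioning fast but fine positioning hard; a motion-dependent gain, as sketched here, gives some of both.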


3.2 Devices
Figure 1 A 3D ball device. Image taken from [4]
3D Ball [4]: A 3D ball is a small spherical plastic device about 2 inches in diameter. It is basically used to rotate objects. It encloses a tracker and provides 6DOF. The form factor of the ball is very good, since humans inherently tend to rotate spherical objects. Its surface, however, offers very few tactile landmarks, preventing users from getting a clear conception of the device. The cord of the ball is another major hindrance which users find annoying, since it is very heavy and often gets in the way of a rotation.
Figure 2 A tracker device. Image taken from [4]
Tracker [4][28]: A tracker is basically a 3D ball without the spherical encasing. It comes in many shapes, but the most popular one is rectangular. It has the same advantages and disadvantages as a 3D ball. There are various types of trackers
available depending on the technology used, such as the optical tracker, the acoustic tracker and the magnetic tracker. An optical tracker uses light waves to detect position and orientation; similarly, an acoustic tracker employs sound waves and a magnetic tracker uses magnetic fields.
Figure 3 A 3D mouse device. Image taken from [28]
3D Mouse [28]: A 3D mouse is very similar to a 2D mouse; the only difference is that it has a roller to move the cursor farther from and closer to the display.
Figure 4 A Rockin' Mouse. Image taken from [29]
Rockin' Mouse [29]: A Rockin' Mouse is an extension of the 2D mouse that allows the user to work with 6DOF if desired. It has curved sides and can be tilted to achieve
manipulation along an extra dimension. Since it is backward compatible with the mouse, it is a very practical 3D input device.
Figure 5 A head-mounted display. Image taken from [47].
Head-Mounted Display [11]: A head-mounted display is used to give the user the feeling of being within three-dimensional space. It helps the user better understand the relationships between 3D objects. Its main disadvantages are its weight and the fact that it literally cuts the user off from the real world.
Figure 6 A 3D glove device. Image taken from [48].


3D Glove [30]: A data glove is basically a tracker in the form of a glove. Besides detecting position and orientation it also senses gesture input from the fingers. It aims at utilizing the dexterity and skill of the user's hand. There have been various popular glove technologies, a few of which are the Sayre glove, the MIT LED glove, the Digital Data Entry glove, etc. Glove-based input is best used in applications such as sign language interpretation, computer-based puppetry, musical performance, etc.
Figure 7 A bat device. Image taken from [45].
Flying Mice [26][8]: These are devices based on the mouse but operated by holding, moving and rotating them in the air. The most popular flying mouse device is the Bat. These devices are easy to learn because of their natural and direct mapping, and they also have a fast speed of operation. However, their disadvantages are a limited movement range, lack of coordination, fatigue and difficulty in device acquisition.


Figure 8 An elastic device. Image taken from [26][44].
Elastic devices [26]: These devices are force operated. They move in proportion to the force applied to them but have a self-centering mechanism which causes them to return to their original position once released. They have the advantage of providing both displacement and force feedback.
Figure 9 A multi-DOF armature device. Image taken from [26][44][46].
Multi-DOF armatures [26]: These are mechanical armatures pivoted at one end point; they calculate the relative position of the other end point, which is then mapped onto
a corresponding cursor movement on the screen. The major disadvantage of these devices is their limited applicability and constrained operation.
In this chapter we have discussed numerous input devices and their characteristics. There are still many unexploited scenarios which will probably be explored in the future. With the emergence of every new technique old problems will disappear and new ones will surface. However, the aim should always be to use technology to make interaction better and easier, and [4] we should keep in mind that the choice of a device will always depend on the application it is required to support and the intended users.


CHAPTER 4
3D INTERACTION
3D interaction is a complex and challenging task. Various techniques have been used in the past to make this interaction process comfortable and easy for the user. In this chapter we first define direct and indirect 3D interaction. We then briefly discuss the major disadvantages of using the indirect approach and the corresponding advantages of using the direct approach. After justifying the need for a direct approach to 3D interaction, we list the significant disadvantages of using true 3D devices, followed by the advantages of using the mouse.
4.1 Direct and indirect 3D interaction
If the interaction process requires the user to model, manipulate or orient an object by making changes to the object itself, the interaction is classified as direct interaction. On the other hand, if the interaction process requires the user to model, manipulate or orient an object by making changes to one or more controllers linked to the object, the interaction is classified as indirect interaction. Currently, direct 3D interaction is mostly achieved using true 3D devices, and indirect 3D interaction is achieved using 2D devices.
4.2 Disadvantages of indirect interaction
Indirect interaction, although easier to achieve, has certain distinct disadvantages. Venolia in his paper [28] states that although more and more 3D tools surface every day, they are not gaining popularity because of their complicated indirect interfaces. Below we mention the main disadvantages that users face when using the indirect approach.


1. Unnatural: It is natural human tendency to directly touch any object one desires to manipulate or orient. This makes indirect interaction unnatural, since the user has to manipulate the object by making changes to a controller and not to the object itself.
2. Poor feedback: Using indirect interaction, the user interacts with a controller and the corresponding changes are reflected on the object. This forces him to divide his attention between two areas, namely the controller which he is manipulating and the object which is being indirectly affected. The feedback from the controller and the feedback from the object thus interfere with each other, rendering the overall feedback poor.
3. Complex to understand: In indirect interaction, changes to a controller placed somewhere far from the actual object cause modifications to the object. This indirection often becomes complex to understand.
4.3 Advantages of direct interaction
Direct interaction may be difficult to achieve but holds noteworthy advantages. Chen, Mountford and Sellen in their paper [18] emphasize the strong need for simple direct manipulation in numerous areas such as engineering design and architecture. Similarly, Sachs, Roberts and Stoops [13] conclude that the easiest way to develop 3D models on the computer is by drawing directly in the 3D environment. The significant advantages of using the direct approach are mentioned below.
1. Natural: Using direct interaction, changes to the object are made directly to the object itself. This conforms with natural human interaction tendencies.
2. Rich feedback: Using direct interaction, the user has to focus his attention solely on the object he is currently modifying. There is no interference in the flow of feedback from the object to the user. Furthermore, a variety of feedback such as tactile, pressure, etc. can be used depending on the system resources, the application and the user requirements, leading to a significantly enhanced and easier interaction experience for the user.
3. Easier to understand: In direct interaction, changes made to the object affect the object itself and there is no indirection involved. This makes it much easier to understand compared to indirect interaction.
4. Reduced burden on the user: Direct interaction is aimed at making the interaction process easy and comfortable for the user, mostly at the expense of increasing the complexity of the underlying system which facilitates the interaction. This helps increase user performance as well as enhance his/her productivity and efficiency.
4.4 Disadvantages of true 3D devices
True 3D devices are designed to facilitate interaction in a 3D environment. However, almost all 3D interaction today takes place using 2D displays [6] and in a two-dimensional environment [7]. This often makes them a poor choice for most 3D interaction applications in use today. The major disadvantages of true 3D devices are listed below.
1. Precision lost due to instability: Although using a true 3D device enhances the speed with which the user can work, precision is often sacrificed in the bargain [4]. These devices are hard to control due to the instability of moving them in the air; conversely, the overall speed may in fact be reduced if we try to achieve precision with them.
2. Expensive [4]: Most true 3D devices are expensive, which makes it difficult for common users to own them. Their use is hence limited to research or to applications such as medical visualization, virtual reality systems, etc.
3. Complex to understand [18]: True 3D devices are uncommon and unusual; hence most users find it difficult to learn their use.
4. No versatility [26]: Most true 3D devices have limited applicability and cannot be applied to a wide range of applications. This lack of versatility is a major hindering factor for these devices.
5. Rate control [26]: Since true 3D devices offer 6DOF and are generally held in the air when used, controlling their rate is a difficult task, both to learn and to achieve.
6. Lack of control feel [26]: Often the 6DOF offered by these devices gives users a feeling of lack of control, which hinders their productivity and efficiency.
7. Fatigue: Most true 3D devices are either worn on the body/head or held in the air to operate them effectively. This significantly increases the fatigue level of the user.
8. Device acquisition time: The time required to engage most true 3D devices is so high that users can often accomplish tasks faster without using them.
9. High sampling rate requirement [23]: A very high sampling rate is required when using true 3D devices in a virtual 3D environment to achieve smooth viewing.
10. Lag [23]: In a 3D environment, real-time rendering of the virtual environment is a processor-intensive task and may take a significant amount of time; hence, to reduce the overall lag, the lag from the true 3D devices themselves should be as small as possible.
11. Poor perceptual structure [23]: Most true 3D devices have strange structures which make them unintuitive and non-metaphoric. This makes it difficult for the user to construct a mental model of these devices.
12. Other ergonomic considerations [31][6]: A number of other ergonomic considerations, such as accuracy, pleasure, the constraints imposed by the human body, etc., prevent effective use of true 3D devices.
4.5 Advantages of the mouse and extending its use to 3D interaction
Currently the mouse is extremely popular for use in 2D applications. It possesses numerous advantageous characteristics which contribute to its popularity. We propose to extend the use of the mouse to 3D interaction. Listed below are the major advantages which support our proposal.
1. Popular [18][24][25][32]: Currently a mouse is almost a must-buy with a computer and is one of the most popular and dominant devices.
2. Integration of 2D with 3D [28]: The mouse functions excellently in a 2D environment. If its use can be extended to a 3D environment, we can achieve integration of the 2D and 3D task domains, which in turn will enhance the productivity and efficiency of the user.
3. Ease of use: The mouse being very easy to use, we can safely assume that almost all computer users have used a mouse extensively and have a highly reliable mental model of it.
4. Zero device acquisition time: Hinckley, Tullio and Pausch [4] state that mouse-based techniques are slower in a work routine, but if we consider the time required for switching over to a 3D device and back to the mouse, the mouse-based technique becomes faster. If the use of the mouse is extended to 3D, switching between 2D and 3D tasks will not require a change of device, making the device acquisition time zero.
5. Good control in a 3D environment [28][32]: Studies done in the past have indicated that users can easily control a 3D cursor using a mouse and may even overcome the complexity involved with using the mouse in a 3D environment.
6. Good form factor [29]: The physical shape of the mouse is well suited to being held in the human hand and does not restrict the user to any particular grip.
7. High stability [29]: The mouse rests on the surface of the desk, is quite heavy and has a large area of contact with the desk surface. All these factors make it very stable.
8. Less fatigue [29][32]: The mouse is not required to be worn or held in the air and does not considerably increase the fatigue level of the user. The forearm can easily rest on the table when operating a mouse, and since it is a relative device it needs very little arm movement.
9. Clutching easily solved: Since most users have a clear mental model of the mouse, they can easily understand and execute clutching with respect to it.
10. Device-to-cursor mapping [29]: The mapping of the mouse to the cursor is very natural and thus reduces the cognitive load on the user.
11. Button positions [29]: The button positions on the mouse are well suited to human operators. The directions of the buttons are orthogonal to the sensing dimensions of the mouse, which makes it easy to use the buttons without unintentionally moving the mouse.
12. Familiarity [29]: Since users are already familiar with the mouse and its operation, there is a high probability that their productivity and efficiency will remain high even when using the mouse in a 3D environment.


Foley, Wallace and Chan [31] in their paper mention that any system designed for interaction must minimize the work required by three types of basic human processes: perception, cognition and motor activity. Perception is the process in which stimuli are received by the receptor organs and transmitted to the brain. Cognition is the process by which we acquire, organize and retrieve information. Motor activity can be defined as the physical response to stimuli after perception and cognition have taken place. Our proposal of using the mouse for direct 3D interaction aims at facilitating all three of these processes.


CHAPTER 5
CHALLENGES OF 3D INPUT USING THE MOUSE
A mouse is inherently a 2D device. Using it for 3D input involves overcoming numerous challenges. This chapter discusses these challenges and mentions some of the design strategies we have devised to overcome them.
Coordinated movement in 3D space involving all three axes at the same time should be avoided [26], and in order to effect 3D transformations using a mouse the user will have to decompose a 3D task into a series of 1D or 2D tasks [6]. Thus the user will be forced to think in terms of one or two dimensions. Additionally, the curved geometry of 3D space is very different from the flat workspace of the mouse, and since a 2D device inherently allows only 4DOF, using it for 6DOF may seem awkward and uncomfortable to the user [16][17][28]. The mapping will have to be from 2D movements of the mouse to 3D space [20][32]. Though this is possible, such mappings are usually hard to predict and to a certain extent unnatural. This also implies that the system will have some degree of complexity and unnaturalness, and hence the user will have to spend some time learning to use the system effectively and efficiently. In our design we provide a functionality known as helpers, aimed at assisting the user in learning and preferably mastering the mouse movements.
Since we aim to achieve direct interaction, the interface should be as transparent as possible and the user should at all times feel as if he is interacting directly with the object and not with the interface [6]. To achieve this, the mapping of the device to the cursor needs to be as simple and natural as possible. As far as possible, stimulus-response
correspondence needs to be achieved between the hand movement and the on-screen cursor. At the same time we should take care not to incorporate any movement that is ergonomically worse than normal mouse operation [6].
The user's movement should be filtered and tracked to cause corresponding changes on screen. The interpretation also needs to be consistent in order to provide positive feedback to the user and to help him create a reliable mental model of the system. For example, if the user is moving the mouse in the X direction, the system should consistently recognize this as movement in the X direction. To do so, we frequently sample the mouse movement and, based on the last two samples, interpret the direction of the movement.
To help the user's perception, we have to increase the quality and the quantity of information displayed on screen [32]. At the same time, to avoid cluttering and obscuring important areas of the virtual environment, there should not be too much information displayed on the screen. It is very important to achieve an optimum balance between these two factors, namely quality and quantity. The user should be provided with an overall frame of reference and numerous cues to help him know the position of the 3D cursor in relation to the 3D environment. To facilitate this, the system provides a bounding cube which serves as an overall frame of reference. Additionally, there are two major cues, which we call legs and projections, that help the user stay aware of the cursor's position in 3D space.
If we display numerous views, they occupy a large area of the screen, and the size of each view becomes smaller, reducing clarity and increasing complexity. To remedy this we use only a single orthographic view. Since there is only one view to be displayed, it can occupy a major area of the screen, thereby enhancing clarity.
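The sampling-based direction interpretation mentioned above can be sketched roughly as follows. The dead zone, label names and dominant-axis test are our own illustrative assumptions, not taken from the thesis.

```python
def classify_direction(prev, curr, dead_zone=2.0):
    """Classify the motion between two successive mouse samples (x, y) as a
    straight or diagonal gesture, or None when the motion is just jitter."""
    dx = curr[0] - prev[0]
    dy = prev[1] - curr[1]  # screen y grows downward; treat upward motion as positive
    big_x, big_y = abs(dx) >= dead_zone, abs(dy) >= dead_zone
    if not big_x and not big_y:
        return None  # too small to interpret as an intentional movement
    if big_x and big_y:
        if dx > 0 and dy > 0:
            return "diagonal-up-right"
        if dx < 0 and dy < 0:
            return "diagonal-down-left"
        return None  # the other two diagonals carry no meaning here
    if big_x:
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```

Because the classification depends only on the two most recent samples, the same physical motion always yields the same label, which is the consistency property the text calls for.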


Another similar issue is reducing the screen space occupied by the buttons. If all the buttons are displayed all the time, they eat into the screen space that could be used to display the 3D environment. We chose to use tabs for this purpose. Tabs not only reduce the screen space occupied by the buttons but also provide classification and structure to the button organization.
Visibility is another factor that has to be given utmost importance. To enhance visibility and to prevent obscurity, all objects are rendered as wireframe models. Also, all of the important areas, such as the XY, YZ and ZX planes and the X, Y and Z axes, are rendered using colors that are not used anywhere else in the environment, making them distinct and easily noticeable.
The user must at all times be aware of the state of the system, i.e. he/she should know what action the system is expecting. This is facilitated by displaying a message on the status bar about the current state of the system and by making the button corresponding to the current state prominent.
The complexity involved in modeling advanced entities such as a cube, pyramid, etc. should be reduced. Our software employs a divide-and-conquer strategy wherein advanced entities are developed by building on smaller ones, which are easy to model. For example, to plot a square the user simply plots four points. Similarly, to plot a cube he/she plots a square (i.e. one face of the cube) and the program automatically provides the rest of the faces.
Constraints should be designed and implemented to make tasks easier for both the user and the computer [3], and since we will not be able to use tactile or force feedback, the visual feedback should be as strong and rich as possible. The interface should also place a low mental burden on the user, i.e. users must not be required to memorize the interface; it should come naturally to them.
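The divide-and-conquer idea described above (plot one face, let the program supply the rest) can be illustrated by a sketch that extrudes a user-drawn square into a cube. This is a simplified assumption of how such completion might work, not Direct3D's actual code; it assumes the square lies in a plane of constant z.

```python
def cube_from_square(square, depth):
    """Given the four corner points (x, y, z) of a square in a z = const
    plane, return the cube's eight vertices: the drawn face first, then
    the face obtained by extruding along +Z by `depth`."""
    front = [tuple(p) for p in square]
    back = [(x, y, z + depth) for (x, y, z) in front]
    return front + back
```

The user supplies four points; the remaining four vertices, and hence the other five faces, are derived automatically.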


In our design utmost importance was given to all the factors mentioned above. Failure to take care of any of these criteria would have rendered the interface unnatural, unintuitive and unproductive.


CHAPTER 6
DIRECT 3D INTERACTION WITH A 2D LOCATOR DEVICE
In this chapter we put forth a design in which we try to incorporate all the issues discussed in the earlier chapters. We also discuss in detail the prototype software, which we call Direct3D, that was implemented based on this design. The aim of the prototype is to help everyone understand the practicality and the advantages of the approach. At the same time we expect the prototype to serve as a basic building block for any future research in this area.
6.1 Workspace and viewpoint
Today most 3D modeling and interaction applications designed to be used with 2D devices present the user with numerous views such as a top view, bottom view, left-hand side view, right-hand side view, etc. Such a design forces the user to interact and work in a 2D environment to build 3D objects. In our design we have tried to overcome this drawback. The user is provided with a single orthographic view of the coordinate system, which we will henceforth call the Direct3D view, as shown in Figure 10(b). All objects are drawn in the Direct3D view using the mouse. The user viewpoint is located on the line whose equation is given as x = y = z in the first octant (where x, y and z are all positive). The distance of the viewpoint on this line from the origin depends on the scale of the coordinate system being employed. A cut-away view of a cube (i.e. an octant) is provided as a frame of reference. All movements and transformations are limited to be within this cube. If a transformation or
operation causes a point to move outside this cube, the transformation or operation is not applied. All three axes are displayed in white and are clearly marked with the corresponding letters X, Y and Z. The three planes XY, YZ and ZX are drawn in the form of grids using unique colors. The advantages of all the above design decisions are listed below.
1. The user is allowed to work and interact in a 3D environment.
2. Since there are fewer views, complexity is reduced.
3. The view area can be significantly magnified since there are fewer views to display.
4. The time wasted in switching from one view to another is eliminated.
5. In an orthographic view the length of a line displayed on screen does not change with the depth of the line; hence it is very suitable for modeling applications where it is important to know the true length of a displayed line at all times.
6. The grid form of the XY, YZ and ZX planes makes them stand out while not hampering visibility.
7. The viewpoint is in the middle of the screen as well as in the middle of the coordinate system, giving the user an exocentric view.
8. The X, Y and Z axes are clearly drawn and marked, and the origin is in the center of the screen.
9. The framing cube serves as an excellent frame of reference, using which plotting entities becomes much easier.
Besides the main view, the Direct3D view, where all drawing takes place, there is also another small view to its left, which we will henceforth call the Tab view, as shown in Figure 10(a). This view contains buttons which allow the user to use the various functions built into the software.
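The rule that any operation moving a point outside the framing cube is rejected can be sketched as follows. The cube bounds and function names are illustrative assumptions, not Direct3D's actual values.

```python
CUBE_MIN, CUBE_MAX = 0.0, 100.0  # illustrative bounds of the framing cube

def inside_cube(point):
    """True if every coordinate of (x, y, z) lies within the framing cube."""
    return all(CUBE_MIN <= c <= CUBE_MAX for c in point)

def apply_translation(points, delta):
    """Translate all points only if every result stays inside the framing
    cube; otherwise reject the whole operation and return the points unchanged."""
    moved = [tuple(c + d for c, d in zip(p, delta)) for p in points]
    return moved if all(inside_cube(p) for p in moved) else points
```

Rejecting the whole operation, rather than clamping individual points, preserves the shape of the object being transformed.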


Figure 10. A screenshot of Direct3D when it has just been started, showing (a) the Tab view and (b) the Direct3D view.


At the bottom of the Tab view there are four tabs, labeled as follows:

1. Point
2. Line
3. Polygon
4. Polyhedron

Clicking on each of these tabs causes the corresponding buttons to be displayed and the other buttons to be hidden. For example, if the currently selected tab is Point, only buttons related to transformations and operations for points (Plot Points, Move Points, etc.) are displayed. The tabbed view has the advantage of saving screen space and providing a classification and structure to the button organization. It also prevents the user from committing mistakes, by hiding the irrelevant buttons. Each of the tabs and buttons has an image signifying its functionality. The selected tab is raised compared to the other tabs, and the selected button is displayed using a pushed-down effect. This avoids confusion and ambiguity, since the user can explicitly see which tab and which button are currently selected. Besides the buttons, the Tab view contains three pairs of radio buttons. These correspond to functionalities we call Legs, Projections and Helpers, which will be explained in later sections. The radio buttons serve to enable or disable the corresponding functionality.

6.2 Input mapping

A significant challenge was to design a novel and simple way to map 2D mouse movements onto 3D space. The mapping we propose in this research relies to a significant extent on gestures, one of the most natural ways for users to communicate. The mapping is summarized below.

1. A horizontal movement of the mouse towards the right with the left mouse button held down causes an increase in the X coordinates of the current selection.


2. A horizontal movement of the mouse towards the left with the left mouse button held down causes a decrease in the X coordinates of the current selection.
3. A vertical movement of the mouse upwards with the left mouse button held down causes an increase in the Y coordinates of the current selection.
4. A vertical movement of the mouse downwards with the left mouse button held down causes a decrease in the Y coordinates of the current selection.
5. A diagonal movement upwards and towards the right with the left mouse button held down causes a decrease in the Z coordinates of the current selection.
6. A diagonal movement downwards and towards the left with the left mouse button held down causes an increase in the Z coordinates of the current selection.

The direction of the intended movement is estimated by repeatedly sampling the 2D movement of the mouse, comparing each sample with the last two samples, and using that as the basis for mapping to a movement in 3D space. Pseudocode for this mapping is outlined in Appendix A. The mapping of the mouse to 3D space described above has the following significant advantages:

1. It is very simple to understand. There are absolutely no complex or ambiguous movements.
2. The movements are all straight-line movements, each controlling only one axis, and hence are easy to control.
3. The control areas for the axes are well isolated, so there is no confusion due to a movement translating into changes along more than one axis.
4. The general assumption and mental model of most users is that the X axis is horizontal, positive towards the right and negative towards the left; the Y axis is vertical, positive upwards and negative downwards; and the Z axis is perpendicular to the screen, with the negative part going into the screen and the positive part coming out of it. The mappings described above closely reinforce this assumption, so the movements seem very natural to the users.
5. Since the last two samples (rather than just the last one) are used, the mapping is an effective estimate of the user's intention.
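The sampling scheme just described can be sketched as below. The actual pseudocode is given in Appendix A; the names, the screen-coordinate convention (y growing upwards), and the 2:1 dominance threshold used here are our own illustrative choices, not taken from the prototype.

```cpp
// Axes a 2D mouse motion can map onto.
enum Axis3D { AXIS_NONE, AXIS_X, AXIS_Y, AXIS_Z };

// Classify the motion using the displacement between the current sample
// and the one taken two samples ago, so a single jittery sample cannot
// flip the estimated direction. xs/ys hold the last three samples in
// chronological order; *amount receives the signed magnitude of the move.
Axis3D classifyMove(const int xs[3], const int ys[3], double* amount) {
    int dx = xs[2] - xs[0];
    int dy = ys[2] - ys[0];
    int adx = dx < 0 ? -dx : dx;
    int ady = dy < 0 ? -dy : dy;

    if (adx > 2 * ady) {            // essentially horizontal -> X axis
        *amount = dx;               // right = +X, left = -X
        return AXIS_X;
    }
    if (ady > 2 * adx) {            // essentially vertical -> Y axis
        *amount = dy;               // up = +Y, down = -Y
        return AXIS_Y;
    }
    if (dx != 0 || dy != 0) {       // diagonal -> Z axis
        // up-and-right decreases Z; down-and-left increases it
        *amount = -(dx + dy) / 2.0;
        return AXIS_Z;
    }
    *amount = 0;
    return AXIS_NONE;
}
```

In a real Windows message loop the samples would come from successive WM_MOUSEMOVE events while the left button is held down.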


6.3 Transformations and operations

We treat transformations and operations as modes. The system can be in only one mode at any given point of time. By default the system is in the point plotting mode. To switch to a different mode the user simply clicks on the corresponding button. The software makes the following three basic kinds of transformations explicitly available to the user:

1. Translation
2. Rotation
3. Scaling

To enable ease of use and to prevent complexity, the mapping described earlier is used for these transformations as well. Horizontal movement causes translation/rotation/scaling of the selection in the X direction; vertical movement, in the Y direction; and diagonal movement, in the Z direction. Building on these basic transformations, the user can achieve even advanced complex transformations, such as through-point constraints, planar constraints, etc. The software also provides various operations to the user, ranging from plotting simple points to plotting complex polyhedrons. Each of these operations is described below.

Plot points: In this mode the user can move the cursor in the required direction using the generic mouse movements. Once he/she is satisfied with the location, he/she can use the right mouse button to plot a point at that location.

Move points: In this mode the user can select an already plotted point. Once a point is selected, he/she can move it to the desired location using the generic mouse movements. No right click is necessary to end this operation.


Delete points: In the point deletion mode the user can delete any of the already plotted points. If the point is part of a line, the corresponding line is deleted, but the other end point of the line remains untouched. The situation is similar if the point is part of a polygon or a polyhedron. If there are no points in the drawing area and the user clicks in it, a message is displayed informing him that there are no points to delete. The same happens when the last point in the drawing area is deleted: a message pops up informing him that there are no more points to delete.

Plot lines: To plot a line the user simply plots two points one after the other, and the program automatically draws a line between them.

Plot line strips: To plot a line strip the user plots as many points as he wants consecutively; the program automatically draws a line between every two consecutive points plotted.

Move lines: To move a line the user clicks on the desired line. The program then highlights the midpoint of the line, and the user can use this midpoint as a handle to move the line to the desired location.

Rotate lines: To rotate a line the user selects the desired line. After the midpoint is highlighted, he/she can use it as a handle to rotate the line about the desired axes to the desired orientation.

Scale lines: To scale a line the user selects the desired line. Then, using the highlighted midpoint, he/she can scale it along the desired axes.
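The midpoint-handle rotation used for lines amounts to a standard rotation about an axis passing through a point: translate the midpoint to the origin, rotate, and translate back. The sketch below (our own naming, not the prototype's code) shows rotation about the X axis; rotation about Y or Z is analogous with the coordinate roles swapped.

```cpp
#include <cmath>

struct Pt { double x, y, z; };

// Rotate the two endpoints of a line about an X-parallel axis passing
// through the line's midpoint. Angle is in radians.
void rotateLineAboutX(Pt& a, Pt& b, double angle) {
    Pt mid = { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
    double c = cos(angle), s = sin(angle);
    Pt* ends[2] = { &a, &b };
    for (int i = 0; i < 2; ++i) {
        double y = ends[i]->y - mid.y;        // move midpoint to the origin
        double z = ends[i]->z - mid.z;
        ends[i]->y = mid.y + y * c - z * s;   // standard X-axis rotation
        ends[i]->z = mid.z + y * s + z * c;
    }
}
```

Because the axis passes through the midpoint, the midpoint stays fixed and the line's length is preserved, matching the handle behavior described above.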


Figure 11. A screenshot of Direct3D showing (a) the original line plotted; (b) the original line translated 10 units along the X axis; (c) the original line translated +20 units along the X axis and then rotated about the X axis; (d) a line strip plotted.


Plot polygons: To plot a polygon the user plots points, one fewer than the number of vertices he wants the polygon to have; the program draws a line between every two consecutive points plotted. To plot the last vertex, instead of a right click he/she double clicks the left mouse button, to signal that the vertex just plotted is the last one. The program then automatically draws a line between the last and the first points to close the polygon.

Move polygons: To move a polygon the user selects the desired polygon. The program highlights the centroid of the polygon. The user can use this as a handle to move the polygon to the desired location.

Rotate polygons: To rotate a polygon the user selects a polygon and uses the highlighted centroid to rotate it about the desired axes to the desired orientation.

Scale polygons: To scale a polygon the user selects the polygon and uses the highlighted centroid to scale it along the desired axes.


Figure 12. A screenshot of Direct3D showing (a) a pentagon plotted; (b) the original pentagon translated along the X axis and then scaled along the Y axis; (c) a hexagon plotted; (d) the original hexagon translated along the X and Y axes and then rotated about the Z axis.


To decrease the number of buttons and to reduce complexity, we have incorporated two generic polyhedron plotting operations. Using these operations numerous types of polyhedrons can be plotted. The operations are described below.

Plot center lift polyhedrons: By center lift we mean that after the user has finished plotting the base polygon, all the vertices of the base are automatically connected by the program to the centroid of the base. The user can then move the centroid to the desired location to obtain a polyhedron. To plot a center lift polyhedron the user plots the base polygon. The program then highlights the centroid of the polygon, and the user can move it to the desired location.

Plot edge lift polyhedrons: By edge lift we mean that after the user has finished plotting the base polygon, the program automatically duplicates the base polygon. All the vertices of the base are automatically connected by the program to the corresponding vertices of the duplicate polygon. The user can then move the duplicated polygon to the desired location to obtain a polyhedron. To plot an edge lift polyhedron the user plots the base polygon. The program then duplicates the entire base polygon and highlights the centroid of the duplicate. The user can move this polygon to the desired location using the centroid as a handle.

Move polyhedrons: To move a polyhedron the user selects the desired polyhedron. The program automatically highlights the centroid of the polyhedron. The user can use the centroid as a handle to move the polyhedron to the desired location.

Rotate polyhedrons: To rotate a polyhedron the user selects a polyhedron. Then, using the highlighted centroid, he can rotate it about the desired axes to the desired orientation.


Scale polyhedrons: To scale a polyhedron the user selects a polyhedron and then uses the highlighted centroid to scale it along the desired axes.

Legs: When plotting or moving a point, or an entity (line, polygon or polyhedron) using the centroid, the program draws three straight lines from the currently selected point to the three planes XY, YZ and ZX. These lines are what we call Legs. The legs have their bases at the points where they meet the XY, YZ and ZX planes. Legs can be turned on or off using the pair of radio buttons provided in the Tab view.

Projections: When plotting or moving a point, or an entity (line, polygon or polyhedron) using the centroid, the program draws shadows of the legs corresponding to the currently selected point onto the three planes XY, YZ and ZX. These shadow lines are what we call Projections. The projections pass through the bases of the legs and are parallel to the corresponding axes. Projections can be turned on or off using the radio button pair in the Tab view.

Helpers: Helpers are used only to help beginners get used to the mouse movements. They are lines drawn on screen which pass through the current cursor position. A horizontal helper line is drawn if the user's movement is along the X axis; a movement along the Y axis causes a vertical helper line to be displayed; similarly, a diagonal helper line is displayed for movements along the Z axis. Helpers can be turned on or off using the radio buttons provided in the Tab view.
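Geometrically, the legs are simply the perpendicular drops from the selected point onto the three coordinate planes, which makes their bases trivial to compute: each base is the point with one coordinate zeroed out. A sketch with illustrative names of our own:

```cpp
struct P3 { double x, y, z; };

// Bases of the three legs for a selected point: its perpendicular
// projections onto the XY, YZ and ZX planes. The projection shadow lines
// then run from each base parallel to the coordinate axes.
struct Legs { P3 onXY, onYZ, onZX; };

Legs legsFor(const P3& p) {
    Legs l;
    l.onXY.x = p.x; l.onXY.y = p.y; l.onXY.z = 0;    // drop along Z
    l.onYZ.x = 0;   l.onYZ.y = p.y; l.onYZ.z = p.z;  // drop along X
    l.onZX.x = p.x; l.onZX.y = 0;   l.onZX.z = p.z;  // drop along Y
    return l;
}
```

For example, the point (100,100,90) used in the example session of Chapter 8 has its leg bases at (100,100,0), (0,100,90) and (100,0,90).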


Figure 13. A screenshot of Direct3D showing (a) an edge lift polyhedron, i.e. a parallelepiped, plotted; (b) a center lift polyhedron, i.e. a pyramid, plotted and then rotated about the Y axis.


6.4 Feedback

The design incorporates numerous, primarily visual, feedback techniques. The significant ones are listed below.

Button images: Each button has an image on it which signifies the function it performs. These images help the user understand the functionality of a button just by looking at it. The button images are illustrated in Figure 14.

Figure 14. The button images used in Direct3D: (a) the button images displayed when the Point tab is selected; (b) the button images displayed when the Line tab is selected; (c) the button images displayed when the Polygon tab is selected; (d) the button images displayed when the Polyhedron tab is selected.


Tab images: Similar to the buttons, each of the tabs has an image on it, signifying one of the four entities, namely point, line, polygon and polyhedron. Clicking on a tab causes the corresponding buttons to be displayed and the others to be hidden. The image on each tab helps the user know its functionality just by looking at it. The tab images are illustrated in Figure 15.

Figure 15. The tab images used in Direct3D.

Cursor change: There are four cursors used in the program. Three of these are the X, Y and Z cursors.

Figure 16. The cursors used in Direct3D: (a) the cursor displayed when the movement is along the X axis; (b) the cursor displayed when the movement is along the Y axis; (c) the cursor displayed when the movement is along the Z axis; (d) the cursor displayed in the point deletion mode.


The X cursor, as shown in Figure 16(a), is a horizontal line with arrows at both ends and appears whenever the mouse moves along the X direction with the left mouse button held down. The Y cursor, as shown in Figure 16(b), is a vertical line with arrows at both ends and appears whenever the mouse moves along the Y direction with the left mouse button held down. The Z cursor, as shown in Figure 16(c), is a diagonal line with arrows at both ends and appears whenever the mouse moves along the Z direction with the left mouse button held down. These three cursors give feedback that helps the user determine the axis along which he is moving. The fourth and last cursor is the hand cursor, as shown in Figure 16(d). It is used in the point deletion mode. The finger can be used to exactly pinpoint the point to be deleted, and at the same time the hand cursor immediately conveys to the user that he is in the point deletion mode.


Figure 17. A screenshot of Direct3D illustrating various feedback techniques: (a) the selected button; (b) the tabs; (c) the radio buttons; (d) the status bar displaying the current state of the system and the help message corresponding to it; (e) a part of the 3D environment showing a selected polyhedron being translated along the Y axis.


Button pushed effect: A button, when selected, is displayed using a pushed effect, as illustrated in Figure 18. All the other buttons are comparatively raised. This lets the user know the selected button, and thus the current mode, at all times just by looking at it.

Figure 18. Magnified illustration of part (a) of Figure 17 showing the button pushed effect.

Tab raised effect: The selected tab is raised as compared to the other three, as illustrated in Figure 19. This gives feedback to the user and helps him know the entity which the system expects him to interact with.

Figure 19. Magnified illustration of part (b) of Figure 17 showing the tab raised effect for the Polyhedron tab.


Radio buttons: The radio buttons, which are visible at all times, are clearly marked on/off, as illustrated in Figure 20. Each of the radio buttons unambiguously indicates the state of the corresponding functionality.

Figure 20. Magnified illustration of part (c) of Figure 17 showing the radio buttons.

Help message display: The status bar displays the current mode, i.e. the system state, and a help message about the current mode, as illustrated in Figure 21. The message explains to the user the methodology for working in the current mode.

Figure 21. Magnified illustration of part (d) of Figure 17 showing the status bar displaying the current state of the system and the corresponding help message.

Legs: The legs give the user a visual estimate of the distance of the current point from the XY, YZ and ZX planes. Legs are illustrated in Figure 22(a).

Projections: Although the projections can be used alone, they function best in combination with the legs. They add to the information provided by the legs and give the user a visual estimate of the distance of the current point from the X, Y and Z axes. Projections are illustrated in Figure 22(b).


Helpers: Helpers are designed to be used only by novice users when learning the mapping of the mouse movements to 3D space. The program interpolates the mouse movement, calculates its direction, and displays the corresponding helper. This gives positive feedback to the user and helps him learn the technique of confining the mouse movement to a horizontal, vertical or diagonal line. Helpers are illustrated in Figure 22(c).

Selection highlighting: Any entity selected by the user is highlighted with a color (magenta) that is not used anywhere else in the workspace. This clearly displays information about the current selection. Selection highlighting is illustrated in Figure 22(d).

Dynamic coordinate display: The coordinates of the currently selected point are clearly displayed above and to the right of the point in the drawing area. These coordinates are updated dynamically as the point undergoes transformations. The dynamic coordinate display is illustrated in Figure 22(e).

Colors for planes: The XY, YZ and ZX planes are each drawn using a unique color to help them stand out in the drawing area.

Colors for axes: The three axes X, Y and Z are drawn and marked in white, which is not used elsewhere in the drawing area.

Grid for visibility: The XY, YZ and ZX planes are drawn as grids to enhance the visibility of the drawing area.

Pop up messages: Some feedback is also provided using pop up messages. For example, when the last point in the drawing area is deleted, a message pops up, as illustrated in Figure 23, informing the user that there are no more points to delete.


Background: The background color is gray, which is not used anywhere else in the drawing area. Gray is used in almost all 3D viewing software, as it is soothing to the eyes and at the same time offers good visibility.


Figure 22. Magnified illustration of part (e) of Figure 17 showing (a) the legs (three white lines converging at the orange point); (b) the projections (six black lines drawn on the XY, YZ and ZX planes); (c) the helper line for movement along the Y axis (the vertical black line); (d) selection highlighting, i.e. the selected polyhedron (magenta colored vertices); (e) the dynamic coordinate display.


Figure 23. A screenshot of Direct3D showing the pop up message displayed when the user has just deleted the last point in the workspace.


Natural mapping: Horizontal, vertical and diagonal movements of the mouse match how most users inherently perceive the directions of the X, Y and Z axes respectively. Hence this mapping reinforces the users' mental model.

True measurement: The orthographic view preserves displayed line lengths at varying depths. Hence at all times the user can see and compare the true lengths of the lines drawn, which is very useful for modeling objects.

Window size: The window size of the program varies with the screen resolution. However, for any given resolution the window size is fixed and cannot be changed, so as to preserve the orthographic view and utilize the maximum screen space possible. To ensure this, the maximize/restore button and the resizing operations of the window are disabled.

Instant display update: The drawing area is updated immediately; there is no noticeable lag between an operation or transformation and the screen refresh.

Direct interaction: The best feedback of the software is that the users can directly interact with the entities. They can directly select an entity and apply various transformations to it. There is no indirection associated with any of the operations or transformations.

6.5 Development system specifications

Direct3D has been developed to work in the Windows environment. We have used a combination of Visual C++ 6.0 and OpenGL to develop the software. The system we used for development has a Pentium III processor with 128 MB RAM. The operating system used is Windows XP, and the recommended screen resolution is 1280 x 1024. However, the software has been tested on a range of machines, slower and faster, and has functioned successfully on all of them.
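The true-measurement property of the orthographic view used throughout this chapter, namely that moving a point along the view direction x = y = z does not change where it lands on screen, can be checked with a few lines of vector arithmetic. The particular right/up basis below is one conventional orthonormal choice for a view down (1,1,1), not necessarily the one used in the prototype.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double u, v; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Orthographic projection of a 3D point onto the screen plane
// perpendicular to the view direction d = (1,1,1)/sqrt(3).
Vec2 projectOrtho(Vec3 p) {
    const double r2 = sqrt(2.0), r6 = sqrt(6.0);
    Vec3 right = {  1 / r2, 0.0,    -1 / r2 };  // perpendicular to d
    Vec3 up    = { -1 / r6, 2 / r6, -1 / r6 };  // perpendicular to d and right
    Vec2 s = { dot(p, right), dot(p, up) };
    return s;
}
```

Since both basis vectors are perpendicular to (1,1,1), translating a point by any multiple of the view direction leaves its screen coordinates unchanged, which is why displayed line lengths are depth-independent.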


CHAPTER 7
APPLICATIONS

The software we have developed is a prototype with limited utility. However, the design offers a lot of potential, and with the incorporation of additional functionality it could be used in a range of applications. A few of the probable application areas are listed below.

Architecture: The design can be used to handle the input to software which allows architects to build virtual building models, preview them, and show them to the builders so as to give them a clear idea of what they want. Any changes that may be requested can be easily and quickly incorporated. Figure 24 shows a snapshot of a popular architectural program, 3D Home Architect.

Figure 24. A screenshot of 3D Home Architect. Image taken from [49]


Manufacturing: The design can be of great use to mechanical part designers when used in combination with CAD software. The combined software will be capable of handling input from the mouse and will allow interaction and modeling in a 3D environment. Thus the designers will be able to build virtual parts quickly, in a single view, and without the use of any specialized devices.

Figure 25. A screenshot of mechanical CAD software. Image taken from [50]

Simulation training: The design can be used in combination with training software to simulate various environments. Such combined systems could be made widely available, since the only requirements would be a computer and a mouse. Some of the probable areas that fall under this category are military training, hazard-prone areas of the manufacturing industry, etc. Since this design uses simple 2D locator devices such as the mouse, it allows the users to focus on the environment rather than on the device they are using.


Figure 26. An image of a person using simulation training software. Image taken from [51]

Medical visualizations: The design can benefit software used in medical visualization. For example, novice doctors and surgeons could explore the human body and interact with it using a specialization of our design with the mouse as the input device. This would help make the software cheaper and easier to use.

Figure 27. A screenshot of medical visualization software. Image taken from [52]


Animation: Interactive animation software can be augmented with our design. The combined system can help animation artists work faster and more productively to develop animations for numerous areas, such as movies. The design can also be used in conjunction with educational software and games to allow users to explore a 3D environment using just the mouse and a few simple movement techniques.

Figure 28. A screenshot of animation software. Image taken from [53]

Geometric modeling: 3D geometric modeling can be made simpler and faster using our design. This in turn may have a cascade effect and benefit the many areas which depend heavily on geometric modeling.

Figure 29. A screenshot of geometric modeling software. Image taken from [54]


Drawing/Sketching in 3D: Using our design, drawing and sketching in a 3D environment can be made available to novice computer artists, who can practice and enhance their skills at no cost. Professional artists can enhance their productivity and efficiency, since they will be relieved of the burden of having to use specialized devices.

Figure 30. A 3D sketch. Image taken from [55]

Other research areas: Various areas of research, such as genetics and molecular modeling, can benefit from the design. Using the design, the software currently being used for these applications can be enhanced to allow 3D interaction using inexpensive and popular 2D locator devices such as the mouse.


Figure 31. A screenshot of molecular modeling software. Image taken from [56]

Thus our design can be used as an input/output method to supplement software used in numerous important areas. New software can be custom built to be compatible with our design, and existing software can be reverse engineered to incorporate it.


CHAPTER 8
EXAMPLE SESSION

In this chapter we model an object using the prototype we have developed. Through this we aim to highlight the ease, the practicality, the advantages and the capabilities of our design. The object we will be modeling is an airplane. The reason behind this choice is that an airplane is not very complex to model; hence, with a little practice, even novice users can easily model it. At the same time, the model demonstrates the potential held by our design and allows us to exhibit a variety of functionalities supported by the prototype.

8.1 Modeling the nose

1. Start the Direct3D software.
2. The initial state of the system shows the Point tab selected, i.e. raised, with point plotting as the current mode, as indicated by the pushed Plot points button. The legs and projections are turned on and the helpers are turned off by default, as reflected by the group of radio buttons. The current location of the cursor is (100,100,100), as represented by the orange colored point.
3. We will plot a pyramid, i.e. a center lift polyhedron, as the nose of the airplane. To do so, click on the Polyhedron tab to display the buttons related to polyhedron modeling and manipulation.
4. Observe that the Polyhedron tab becomes raised, indicating that it is selected.
5. Click on the plot center lift polyhedron button. Observe that the button remains pushed, indicating that it is currently selected.
6. The status bar displays the current mode, i.e. Center Lift Polyhedron Plotting Mode. It also displays a help message to the user explaining the


methodology of working in the center lift polyhedron plotting mode, i.e. "Plot the base polygon and then move the center point to form a polyhedron."
7. The cursor will be at the location (100,100,100) by default. As a first step, we want to plot the base of the pyramid, i.e. a square on the plane whose equation is z = 90, and we want the coordinates of the base vertices to be (100,100,90), (90,100,90), (90,90,90) and (100,90,90).
8. To begin, we need to move the cursor to (100,100,90). To do so, move the mouse diagonally upwards and to the left, i.e. in the negative Z direction.
9. Observe that the cursor changes to the Z cursor during the movement along the Z axis, and that the legs and projections move correspondingly as we move the cursor.
10. Keep a check on the dynamic coordinate display, and once it shows (100,100,90), stop moving the mouse.
11. Observe that the leg on the XY plane has its base at (100,100,0), the leg on the YZ plane has its base at (0,100,90), and the leg on the ZX plane has its base at (100,0,90), clearly indicating the position of the cursor in 3D space.
12. The projections of the legs on the XY, YZ and ZX planes also correspond to the current position of the legs.
13. Right click to plot a point at the current cursor location. The color of the point just plotted is set to yellow, i.e. there is a color change from orange to yellow, indicating that the point was successfully plotted.
14. Now move the cursor in the X direction to reach the location (90,100,90). Again observe that the cursor changes to the X cursor during the movement along the X axis and that the legs and projections move correspondingly. Also notice that the program automatically draws a line between the current cursor location and the previously plotted point, i.e. (100,100,90).
15. Once you have reached the location (90,100,90), plot a point using the right click.
16. Similarly plot the point (90,90,90).
17. For the last point, move the cursor to the location (100,90,90) by a procedure similar to the one followed for the earlier three points. However, to indicate that this is the last point of the base of the polyhedron, use a left double click instead of a right click to plot the point.


18. The program, on sensing the double click, plots the point and closes the polygon, i.e. draws a line between the first and the last points of the square base.
19. It also calculates the centroid of the base polygon, plots a point using the calculated centroid coordinates, and highlights that point as the currently selected point.
20. The dynamic coordinate display, the legs and the projections are automatically updated to correspond to the currently selected point, i.e. the centroid.
21. The program also automatically changes the mode to the point translation mode.
22. Move the current point 10 units in the positive Z direction, i.e. diagonally downwards and to the right. The point will now have the coordinates (95,95,100).
23. Finally, we need to position the nose in the context of the entire scene.
24. Click on the move polyhedrons button to enter the polyhedron translation mode. Select and move the pyramid so that its centroid reflects the coordinates (50,50,92).
25. This is the final position of the nose and is illustrated in Figure 32.


Figure 32. A screenshot of Direct3D illustrating the scene after the nose of the airplane has been added.


8.2 Modeling the front fuselage

1. Each part of the fuselage will be designed as an edge lift polyhedron.
2. Click on the plot edge lift polyhedron button. Observe that the status bar reflects the changed state, i.e. Edge Lift Polyhedron Plotting Mode, and that the help message also changes accordingly, i.e. "Plot the base polygon and then move the copy of the polygon to form a polyhedron."
3. Plot a square for the base polygon, just as we did for the nose, but using the coordinates (100,100,100), (86,100,100), (86,86,100) and (100,86,100) for the vertices.
4. When you have plotted the last vertex, you will observe that the program automatically closes the polygon and provides you with a copy of the plotted base polygon.
5. It also calculates the centroid of the copied polygon and enters the polygon translation mode.
6. Move the polygon 20 units in the negative Z direction using the centroid as a handle, i.e. till the Z coordinate of the centroid becomes 80.
7. We have just finished plotting a parallelepiped whose dimensions are 14x14x20.
8. Now, similar to the nose, we have to position the front fuselage in the context of the scene.
9. Click on the move polyhedrons button to enter the polyhedron translation mode and move the front fuselage till the centroid coordinates reflect (50,50,80).
10. We are basically going to align the nose and all the parts of the fuselage such that the centroid of each lies on the line whose equation is x = y = 50, i.e. all the centroids have X and Y coordinates of 50, with different Z coordinates.
11. Also notice that after we have positioned the front fuselage as described above, the front face of the front fuselage and the base of the nose (i.e. the pyramid plotted earlier) both lie on the plane whose equation is z = 90.
12. Now we need to modify the front face of the front fuselage so as to make its vertices coincide with the vertices of the base of the nose.
13. To achieve this, click on the Polygon tab and then on the scale polygon button.


14. Observe the changes in the tab and the button, i.e. the polygon tab will be raised and the scale polygon button will be pushed. The status bar will also reflect the changed state of the system.
15. Now select the front face of the front fuselage and scale it down along the Y axis such that the lines between its vertices overlap the lines between the vertices of the base of the nose.
16. Similarly, scale down the front face of the front fuselage along the X axis.
17. The nose and the front fuselage should now look like a composite part, as illustrated in Figure 33.
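Conceptually, the edge lift operation used throughout this session plots a base polygon, hands back a translated copy of it, and uses the average of the vertices as the centroid handle. The sketch below illustrates that idea in plain Python; the function names are ours, not the prototype's actual API.

```python
def centroid(vertices):
    """Average of the vertex coordinates, used as the translation handle."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def edge_lift(base, dz):
    """Copy the base polygon and translate the copy along Z.

    Joining corresponding vertices of the two faces yields the
    edge-lift polyhedron described in the tutorial.
    """
    lifted = [(x, y, z + dz) for (x, y, z) in base]
    return base, lifted

# The 14x14 base square from step 3, lifted 20 units in the
# negative Z direction as in step 6.
base = [(100, 100, 100), (86, 100, 100), (86, 86, 100), (100, 86, 100)]
front, back = edge_lift(base, -20)
print(centroid(back))  # -> (93.0, 93.0, 80.0)
```

As in step 6, the Z coordinate of the lifted face's centroid ends up at 80.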


Figure 33. A screenshot of Direct3D illustrating the scene after the front fuselage of the airplane has been added.


8.3 Modeling the middle fuselage

1. This is the easiest part to model.
2. Select the polyhedron tab and enter the edge lift polyhedron plotting mode.
3. Plot a parallelepiped exactly similar to the one described for the front fuselage. The only change is that this time you move the copied polygon 50 units in the negative Z direction, i.e. until the Z coordinate of the centroid becomes 50.
4. After doing so we will have a parallelepiped with dimensions of 14x14x50.
5. Enter the polyhedron translation mode and move this polyhedron until its centroid reflects coordinates of (50,50,45).
6. After doing so, the back face of the front fuselage and the front face of the middle fuselage should overlap, and the scene should look as shown in Figure 34.
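Moving a polyhedron "until its centroid reflects" a target coordinate, as in step 5, amounts to translating every vertex by the difference between the target centroid and the current one. A minimal sketch, with illustrative names rather than the prototype's API:

```python
def centroid(vertices):
    """Average of the vertex coordinates."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def move_to(vertices, target):
    """Translate all vertices so the centroid lands on `target`."""
    c = centroid(vertices)
    d = tuple(t - ci for t, ci in zip(target, c))
    return [tuple(v[i] + d[i] for i in range(3)) for v in vertices]

# The eight vertices of the 14x14x50 middle-fuselage parallelepiped
# (front face at z = 100, back face at z = 50), repositioned so its
# centroid becomes (50, 50, 45) as in step 5.
box = [(x, y, z) for z in (100, 50)
       for (x, y) in [(100, 100), (86, 100), (86, 86), (100, 86)]]
moved = move_to(box, (50, 50, 45))
print(centroid(moved))  # -> (50.0, 50.0, 45.0)
```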


Figure 34. A screenshot of Direct3D illustrating the scene after the middle fuselage of the airplane has been added.


8.4 Modeling the back fuselage

1. Enter the edge lift polyhedron plotting mode and model a parallelepiped similar to the front fuselage. Use the vertices (100,100,100), (90,100,100), (90,90,100) and (100,90,100) for the base polygon.
2. Move the copied polygon 20 units in the negative Z direction to achieve a polyhedron having dimensions 10x10x20.
3. Enter the polyhedron translation mode and move this polyhedron such that its centroid reflects the coordinates (50,50,10).
4. After doing so, the back face of the middle fuselage and the front face of the back fuselage should lie on the same plane, whose equation is z = 20.
5. Click on the polygon tab, enter the polygon scaling mode, and scale up the front face of the back fuselage along the X and Y axes such that its vertices coincide with the vertices of the back face of the middle fuselage.
6. The scene should now be as shown in Figure 35.
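Scaling a face "such that its vertices coincide" with a larger face, as in step 5, keeps the face's centroid fixed while moving each vertex away from it proportionally; here the 10x10 face must grow by a factor of 14/10 = 1.4 along X and Y. A sketch under the same illustrative-API assumption as before:

```python
def centroid(vertices):
    """Average of the vertex coordinates."""
    n = len(vertices)
    return tuple(sum(v[i] for v in vertices) / n for i in range(3))

def scale_about_centroid(face, factors):
    """Scale a face about its own centroid; `factors` is (sx, sy, sz)."""
    c = centroid(face)
    return [tuple(c[i] + factors[i] * (v[i] - c[i]) for i in range(3))
            for v in face]

# The 10x10 front face of the back fuselage (on the plane z = 20),
# scaled by 1.4 along X and Y to match the 14x14 back face of the
# middle fuselage.
face = [(55, 55, 20), (45, 55, 20), (45, 45, 20), (55, 45, 20)]
scaled = scale_about_centroid(face, (1.4, 1.4, 1.0))
print(scaled[0])  # close to (57.0, 57.0, 20.0)
```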


Figure 35. A screenshot of Direct3D illustrating the scene after the back fuselage of the airplane has been added.


8.5 Modeling the left wing

1. Enter the edge lift polyhedron plotting mode and model a parallelepiped using the coordinates (100,100,100), (100,100,85), (60,100,85) and (60,100,100) for the base polygon.
2. Move the copied polygon 2 units in the negative Y direction to get a polyhedron of dimensions 10x40x2.
3. Move the polyhedron just plotted so that its centroid reflects the coordinates (77,50,62). This will align it at the left side of the middle fuselage.
4. Now click on the polygon tab and enter the polygon translation mode.
5. Select the face of the left wing which is parallel to the YZ plane and is away from the body of the plane.
6. Move this face such that its centroid reflects coordinates of (97,50,42), i.e. 20 units in the negative Z direction.
7. Now we shall use the line translation mode to fine-tune the shape of the wing. Click on the line tab and enter the line translation mode.
8. We will be moving an edge of the face we have just translated.
9. The edge to be moved is the one that is parallel and nearest to the XY plane.
10. Move this line 5 units in the positive Z direction.
11. The scene should now look as illustrated in Figure 36.
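The line translation in steps 7–10 moves only the two endpoints of one edge. Choosing "the edge that is parallel and nearest to the XY plane" can be sketched as picking the edge whose endpoints share the smallest Z value. The code and the sample face coordinates below are illustrative, not taken from the prototype:

```python
def edges(face):
    """Successive vertex pairs of a closed polygon."""
    return [(face[i], face[(i + 1) % len(face)]) for i in range(len(face))]

def nearest_edge_to_xy(face):
    """Edge parallel to the XY plane (endpoints share Z) with the smallest Z."""
    flat = [e for e in edges(face) if e[0][2] == e[1][2]]
    return min(flat, key=lambda e: e[0][2])

def translate_edge(edge, d):
    """Move both endpoints of an edge by the offset vector d."""
    return tuple(tuple(p[i] + d[i] for i in range(3)) for p in edge)

# A hypothetical wing-tip face parallel to the YZ plane; its lowest
# horizontal edge is lifted 5 units in +Z, as in step 10.
face = [(97, 49, 62), (97, 51, 62), (97, 51, 42), (97, 49, 42)]
edge = nearest_edge_to_xy(face)
print(translate_edge(edge, (0, 0, 5)))  # -> ((97, 51, 47), (97, 49, 47))
```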


Figure 36. A screenshot of Direct3D illustrating the scene after the left wing of the airplane has been added.


8.6 Modeling the right wing

1. This will be an exact mirror image of the left wing. Enter the edge lift polyhedron plotting mode and model a parallelepiped exactly similar to the one we modeled for the left wing.
2. Move the polyhedron just plotted so that its centroid reflects the coordinates (23,50,62). This will align it at the right side of the middle fuselage.
3. Now click on the polygon tab and enter the polygon translation mode.
4. Select the face of the right wing which is parallel to the YZ plane and is away from the body of the plane.
5. Move this face such that its centroid reflects coordinates of (3,50,42), i.e. 20 units in the negative Z direction.
6. Again we shall use the line translation mode to fine-tune the shape of the wing. Click on the line tab and enter the line translation mode.
7. We will be moving an edge of the face we have just translated.
8. The edge to be moved is the one that is parallel and nearest to the XY plane.
9. Move this line 5 units in the positive Z direction.
10. The scene should now look as illustrated in Figure 37.
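Since the right wing is an exact mirror image of the left, and the fuselage centroids lie on the line x = y = 50, the relationship between the two wings is a reflection across the plane x = 50. The tutorial builds the mirror by replotting; the sketch below merely checks the reflection relationship (illustrative code, not a prototype command):

```python
def mirror_x(vertices, x_plane=50):
    """Reflect each vertex across the vertical plane x = x_plane."""
    return [(2 * x_plane - x, y, z) for (x, y, z) in vertices]

# The left wing's centroid (77, 50, 62) maps to the right wing's
# centroid (23, 50, 62), matching step 2 of this section.
print(mirror_x([(77, 50, 62)]))  # -> [(23, 50, 62)]
```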


Figure 37. A screenshot of Direct3D illustrating the scene after the right wing of the airplane has been added.


8.7 Modeling the tail

1. Enter the edge lift polyhedron plotting mode and model a parallelepiped using the coordinates (100,100,80), (100,70,80), (100,70,100) and (100,100,90) for the base.
2. Move the copied polygon 3 units in the negative X direction to get a polyhedron of width 3.
3. Enter polyhedron translation mode and move the tail until its centroid reflects the coordinates (50,70,7).
4. Two of the vertices of the tail base, i.e. the face parallel and closest to the XZ plane, should now lie on an edge of the back fuselage.
5. We need to make the other two vertices of the tail base, i.e. the vertices of the tail farthest from the XY plane, lie on an edge of the back fuselage.
6. We could achieve this using line translation mode, but since we have demonstrated that mode earlier, let us use point translation mode this time.
7. Click on the point tab and then on the Move points button to enter point translation mode. Select the point (49,55,20) and move it 2 units in the positive Y direction. Repeat the procedure for the other point (52,55,20).
8. The scene should now reflect Figure 38.


Figure 38. A screenshot of Direct3D illustrating the scene after the tail of the airplane has been added.


8.8 Modeling the left horizontal stabilizer

1. Enter the edge lift polyhedron plotting mode and model a parallelepiped using the coordinates (100,100,80), (100,100,90), (80,100,100) and (80,100,80) for the base.
2. Move the copied polygon 2 units in the negative Y direction to get a polyhedron of width 2.
3. Move the polyhedron using the polyhedron translation mode until the centroid coordinates reflect (65,50,6).
4. Two of the vertices of the face of the left horizontal stabilizer which is closest to the back fuselage should now lie on an edge of the back fuselage.
5. To make the other two vertices lie on an edge we need to use the point translation mode.
6. Click on the point tab and enter the point translation mode.
7. Select the point (55,51,20) and move it 2 units in the positive X direction. Repeat the procedure for the other point (55,49,20).
8. The scene should now reflect Figure 39.


Figure 39. A screenshot of Direct3D illustrating the scene after the left stabilizer of the airplane has been added.


8.9 Modeling the right horizontal stabilizer

1. This will be an exact mirror image of the left stabilizer. Enter the edge lift polyhedron plotting mode and model a parallelepiped using the coordinates (100,100,100), (100,100,80), (80,100,80) and (80,100,90) for the base.
2. Move the copied polygon 2 units in the negative Y direction to get a polyhedron of width 2.
3. Move the polyhedron using the polyhedron translation mode until the centroid coordinates reflect (35,50,6).
4. Two of the vertices of the face of the right horizontal stabilizer which is closest to the back fuselage should now lie on an edge of the back fuselage.
5. To make the other two vertices lie on an edge we need to use the point translation mode.
6. Click on the point tab and enter the point translation mode.
7. Select the point (45,51,20) and move it 2 units in the negative X direction. Repeat the procedure for the other point (45,49,20).
8. With this step the model of the airplane is complete and should look as illustrated in Figure 40.

Through this example session we aim to help users work with the prototype we have developed. At the same time we intend to highlight the potential of the design and portray its advantages.


Figure 40. A screenshot of Direct3D illustrating the scene after the right stabilizer has been added, completing the model of the airplane.


CHAPTER 9
CONCLUSIONS AND FUTURE RESEARCH

Though this research work done to achieve direct 3D interaction using a 2D locator device may only scratch the surface, we believe that we have put forth a novel and important concept. The design parameters outlined by us, as well as the software prototype, are significant achievements which we hope will generate more interest in this direction. The prototype and the example session successfully demonstrate the immense potential held by the approach.

We wish to mention that the prototype developed by us supports only straight-line primitives. In the future we want to extend it to support curved primitives as well. We would also like to develop large-scale specialized software along the lines of the prototype, with support for many more functionalities.

Another major task that demands our consideration is to test user performance when using software developed with our design. Research needs to be done to ascertain user needs, as well as to find newer, better and quicker ways of accomplishing fundamental tasks as perceived by the users.

A related area that attracts our attention, and in which not much research has been done, is exploring direct 3D interaction using two 2D locator devices. The idea is to use the device in the dominant hand for precision and the device in the non-dominant hand for speed.


Although our interface has been evaluated only informally, it holds considerable significance, as studies have shown that informal evaluation has often proven to be very enlightening.




APPENDICES


APPENDIX A
MAPPING ALGORITHM

while left mouse button is pushed and held down {
    record mouse position
    if we have recorded three or more positions {
        calculate the slope of the line from the current position to the last position and store as a
        calculate the slope of the line from the current position to the second-last position and store as b
        calculate the slope of the line from the last position to the second-last position and store as c
        based on the values of a, b, c determine the direction of movement
        if movement is horizontal {
            if movement is towards the right
                increase X coordinates
            else if movement is towards the left
                decrease X coordinates
        }
        else if movement is vertical {
            if movement is upwards
                increase Y coordinates
            else if movement is downwards
                decrease Y coordinates
        }
        else if movement is diagonal {
            if movement is downwards and towards the left
                increase Z coordinates
            else if movement is upwards and towards the right
                decrease Z coordinates
        }
    }
    store the last position as the second-last position recorded
    store the current position as the last position recorded
}
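The mapping step of this algorithm can be turned into a small runnable sketch. The tolerance below deciding when a drag counts as horizontal or vertical is our own illustrative choice, and for brevity the sketch classifies the net displacement over the three samples rather than comparing the three pairwise slopes a, b, c individually; the mapping itself (horizontal drag to X, vertical drag to Y, down-left and up-right diagonals to Z) follows the pseudocode as stated. Screen Y is assumed to grow downward, as in typical mouse coordinates.

```python
def map_movement(p2, p1, p0, tol=0.5):
    """Map the last three mouse positions to a 3D coordinate delta.

    p2, p1, p0 are the second-last, last and current positions in
    screen coordinates (Y grows downward). `tol` is an illustrative
    threshold; the thesis pseudocode does not fix exact tolerances.
    """
    dx = p0[0] - p2[0]
    dy = p0[1] - p2[1]
    if dx == 0 and dy == 0:
        return (0, 0, 0)
    if abs(dy) <= tol * abs(dx):                 # horizontal drag -> X
        return (1, 0, 0) if dx > 0 else (-1, 0, 0)
    if abs(dx) <= tol * abs(dy):                 # vertical drag -> Y
        return (0, 1, 0) if dy < 0 else (0, -1, 0)
    if dx < 0 and dy > 0:                        # diagonal down-left -> +Z
        return (0, 0, 1)
    if dx > 0 and dy < 0:                        # diagonal up-right -> -Z
        return (0, 0, -1)
    return (0, 0, 0)                             # other diagonals: no mapping

# A rightward drag maps to +X; a down-left diagonal maps to +Z.
print(map_movement((0, 0), (5, 0), (10, 1)))   # -> (1, 0, 0)
print(map_movement((0, 0), (-4, 4), (-8, 8)))  # -> (0, 0, 1)
```

In the real event loop these deltas would be accumulated on each mouse-move event while the button is held, updating the coordinates of the selected primitive.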