International Journal of Scientific & Engineering Research, Volume 4, Issue 6, June 2013

ISSN 2229-5518

Survey on 3D User Interface for Operating System

Ankit Vani, Humayun Mulla, Ronit Kulkarni, Siddharth Kulkarni, Prof. Rahul Kulkarni

A. Vani is a student at Sinhgad Institute of Technology, Lonavala, MH, India (phone: 9561328090, email: a@nevitus.org)

H. Mulla is a student at Sinhgad Institute of Technology, Lonavala, MH, India (phone: 9766120259, email: humayun.mulla@gmail.com)

R. Kulkarni is a student at Sinhgad Institute of Technology, Lonavala, MH, India (phone: 9405417100, email: ronit@nevitus.com)

S. Kulkarni is a student at Sinhgad Institute of Technology, Lonavala, MH, India (phone: 9922928466, email: siddharthrkulkarni@gmail.com)

R. Kulkarni is a professor at Sinhgad Institute of Technology, Lonavala, MH, India (phone: 9960746787, email: kulkarnirahul1@gmail.com)

Abstract— With the advent of 3D-capable displays, we have seen a plethora of 3D movies and games. However, these displays also open up a huge range of possibilities for the way people interact with computers. The primary operating systems of the traditional PC do not make use of the 3D capabilities of the display device. Our project brings 3D capabilities to the user interface of the operating system.

We will do this by implementing patches to the display servers and window managers in Linux, to provide windows that appear as 3D planes, with depth depending on when the window was last focused. Window controls can also be 3D. Apart from the 3D display of the operating system, we need a user interface that feels natural with such a display. We will provide touch input in 3D space: users will be able to touch window controls by moving their fingers over the perceived positions of the 3D images. This will be made possible by sensors detecting the user's hands and the distance of the user's eyes from the screen.

Index Terms— 3D, Anaglyph, Stereo image, Shallow-Depth 3D, Gestures, Gesture Recognition, Kinect


1 INTRODUCTION

This work aims to provide an intuitive, natural interface for users to interact with computers, through gestures and by operating directly on perceived 3D objects on the 3D display to drive the operating system. A common interface must be provided for platform independence and hardware abstraction. Users must be able to use the 3D UI on different operating systems, with whichever technology they prefer to implement the 3D output and gesture recognition. A new manner of interacting with windows and controls must be introduced, suited particularly to a 3D user interface, such as more natural gestures for actions like minimizing, restoring, closing and resizing windows. The system will be primarily developed and tested on a Linux-based operating system, as it is open source and easy to patch. However, the platform-dependent modules and the common APIs will be kept separate, so that porting to a different operating system is easy and convenient. The system should be portable to various platforms given appropriate 3D display drivers and sensor drivers, the API to interface with the 3D UI system should be common across all platforms, and the system should be independent of the 3D display technology and the sensors used.

2 EXISTING SYSTEM

To enrich interaction with digital tables, Hancock et al. [5] present the concept of shallow-depth 3D: 3D interaction with limited depth. Within this shallow-depth 3D environment, several common interaction methods need to be reconsidered. Starting from one, two or three touch points, they present interaction techniques that provide control of all types of 3D rotation coupled with translation (6DOF) on a direct-touch tabletop display, as sketched below.
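To make the flavor of these techniques concrete, below is a minimal sketch of the common two-touch building block: the rotation is taken from the change in angle of the segment joining the two touch points, and the translation from the displacement of their midpoint. This is a generic illustration, not Hancock et al.'s exact formulation, and the function name is our own.

```python
import math

def two_touch_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive a planar rotation + translation from the motion of two
    touch points (a common building block of multi-touch 3D control).

    Each argument is an (x, y) tuple; returns (angle_radians, (tx, ty)).
    """
    # Rotation: change in the angle of the segment joining the touches.
    old_angle = math.atan2(p2_old[1] - p1_old[1], p2_old[0] - p1_old[0])
    new_angle = math.atan2(p2_new[1] - p1_new[1], p2_new[0] - p1_new[0])
    angle = new_angle - old_angle

    # Translation: displacement of the midpoint between the two touches.
    old_mid = ((p1_old[0] + p2_old[0]) / 2, (p1_old[1] + p2_old[1]) / 2)
    new_mid = ((p1_new[0] + p2_new[0]) / 2, (p1_new[1] + p2_new[1]) / 2)
    translation = (new_mid[0] - old_mid[0], new_mid[1] - old_mid[1])

    return angle, translation
```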

2.1 Anaglyph

Fig. 1: Anaglyph Glasses

In an anaglyph [1], two images are projected for each object. The two images are individually colored (typically red and cyan) and then superimposed into a single image. Through the use of correspondingly colored filters in the glasses, each eye sees only its correct image: one image of the object is rendered in red and the other in cyan, and the red and cyan films in the glasses cancel the opposite color so that each eye receives the appropriate image. A variant technique called Anachrome uses slightly more transparent filters in the glasses.
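As a concrete example, a red-cyan anaglyph can be composited from a stereo pair by taking the red channel from the left-eye image and the green and blue (cyan) channels from the right-eye image. The following NumPy sketch assumes 8-bit RGB arrays of equal shape:

```python
import numpy as np

def make_anaglyph(left, right):
    """Composite a red-cyan anaglyph from a stereo pair.

    left, right: HxWx3 uint8 RGB arrays of the same shape.
    The red channel comes from the left-eye image; the green and
    blue (cyan) channels come from the right-eye image, so each
    filter in the glasses passes only its eye's image.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red   <- left eye
    anaglyph[..., 1] = right[..., 1]   # green <- right eye
    anaglyph[..., 2] = right[..., 2]   # blue  <- right eye
    return anaglyph
```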

2.2 Gesture Recognition


Fig. 2: Gesture Recognition
A gesture is a way to express feelings or information through the human body. Gesture recognition is the process of detecting patterns of hand movement in order to act upon an event. Gestures are expressive, meaningful body motions [2] involving physical movements of the fingers, hands, arms, head, face, or body, with the intent of (1) conveying meaningful information or (2) interacting with the environment.



3 PROPOSED SYSTEM



Fig. 3: 3D User Interface for Operating System
The system will take advantage of 3D display technology to display the operating system in 3D. It keeps track of the depths at which windows are displayed: recently active windows are given lower depth, the least recently activated windows are given higher depth, and window controls have depths of their own. The system lets the user interact with the 3D objects by touching them in 3D space, with no touchscreen required. It will detect 3D movement, recognize hands and gestures, and handle common window tasks such as minimizing, restoring and closing using gestures. The system will prepare a platform for 3D entertainment, making possible better interactions for 3D games with 3D input, 3D movies, and 3D movies with user interaction. The system makes you feel that you are not only viewing the operating system in 3D space but also interacting with it in 3D space.
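A minimal sketch of the depth bookkeeping described above is given below; the class name, the per-rank depth step, and the window-manager hook are our own assumptions rather than an existing API:

```python
import time

class WindowDepthTracker:
    """Assigns stereo depth to windows by focus recency: the most
    recently focused window sits nearest the viewer (lowest depth),
    the least recently focused one deepest."""

    def __init__(self, depth_step=0.05):
        self.depth_step = depth_step   # depth added per recency rank
        self.last_focus = {}           # window id -> focus timestamp

    def on_focus(self, window_id):
        # Called by the window manager whenever a window gains focus.
        self.last_focus[window_id] = time.monotonic()

    def depths(self):
        # Most recently focused first; rank 0 gets depth 0 (screen plane).
        order = sorted(self.last_focus, key=self.last_focus.get, reverse=True)
        return {wid: rank * self.depth_step for rank, wid in enumerate(order)}
```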

4 3D TECHNOLOGY

Integral to the viewing of 3D content on display systems is the computation of an appropriate offset between the two images of each stereo pair. To generate 3D stereo image pairs with adequate parallax [3], that is, enough parallax to convey a representative sense of depth, the viewpoint of each image must be positioned so that it ends up at an appropriate distance from the projection plane, the assumed viewing distance.

Fig. 4: The mathematical relationships of stereo base (Sb) to stereoscopic eye points, stereoscopic object space, and the 3D-stereo display surface. T1 and T2 are similar triangles; T3 and T4 are similar triangles.

5 COMPUTATION FOR 3D TECHNOLOGY

The distances of the far and near objects will be computed automatically for each scene and adjusted to ensure that there are no drastic changes in scene geometry from one scene to the next. From the similar triangles of Fig. 4, the screen parallax of the far and near points is

$$b_f = \left[\frac{-a' \times S_b}{a_f}\right] + S_b, \qquad b_n = \left[\frac{-a' \times S_b}{a_n}\right] + S_b$$

the left/right eye image offset satisfies

$$\frac{b_f - b_n}{2} = 2f \tan\!\left[\frac{S_b/2}{a_f - a'}\right]$$

and the stereo base is

$$S_b = \frac{d\,[a_f - a_n]}{a'}$$

Where:

f ≈ 22.22 mm (lens-to-retina distance)

a′ = viewing distance based on screen size

a_n = a′ for zero parallax

(b_f − b_n)/2 = left/right eye image offset

a_f is derived from the current scene content
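Assuming the reconstruction above, and placing the near plane at the screen so that a_n = a′ and therefore b_n = 0 (zero parallax), the per-eye image offset can be computed as in the following sketch (the example numbers are illustrative, not taken from [3]):

```python
def stereo_offsets(stereo_base, viewing_distance, far_distance):
    """Compute the far-plane screen parallax and the per-eye image
    offset from the similar-triangle relations above.

    stereo_base      Sb: separation of the stereoscopic eye points
    viewing_distance a': distance from the eyes to the screen
    far_distance     af: distance to the farthest scene content
    All lengths must use the same unit (e.g. millimetres).
    """
    b_near = 0.0  # near plane at the screen (an = a'), so bn = 0
    # Far-plane parallax: bf = [-a' * Sb / af] + Sb = Sb * (1 - a'/af).
    b_far = stereo_base * (1.0 - viewing_distance / far_distance)
    # Per-eye image offset as defined in the legend: (bf - bn) / 2.
    per_eye_offset = (b_far - b_near) / 2.0
    return b_far, per_eye_offset

# Example: 65 mm eye separation, 600 mm viewing distance, 3 m far plane.
bf, offset = stereo_offsets(65.0, 600.0, 3000.0)  # bf = 52 mm, offset = 26 mm
```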

6 GESTURE RECOGNITION


Fig. 5: Global Hand Posture Detection and Recognition Diagram

The figure above shows an overview of the hand posture recognition framework of [4], which contains two major modules: (i) user hand posture location and (ii) user hand posture recognition. One of the main objectives of this approach is a low-cost computer vision system that can run on a common PC equipped with a USB webcam. The system should work under different degrees of scene background complexity and illumination, provided these conditions do not change during execution.

7 PROCESS FOR GESTURE RECOGNITION

The following processes compose the general framework:
1. Initialization: the recognizable postures are stored in a visual memory, which is created in a start-up step. Different ways of configuring this memory are proposed.
2. Acquisition: a frame is captured from the webcam.
3. Segmentation: each frame is processed separately before analysis: the image is smoothed, skin pixels are labeled, noise is removed and small gaps are filled. Image edges are found and, after a blob analysis, the blob that represents the user's hand is segmented. A new image is created containing the portion of the original in which the user's hand was placed.
4. Pattern Recognition: once the user's hand has been segmented, its posture is compared with those stored in the system's visual memory (VMS) using a Hausdorff matching approach (steps 2 to 4 are sketched below).
5. Executing Action: finally, the system carries out the action that corresponds to the recognized hand posture.
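A minimal OpenCV sketch of steps 2 to 4 is shown below. The YCrCb skin thresholds are common rule-of-thumb values rather than those of [4], the visual memory is assumed to be already populated, and the Hausdorff matcher requires OpenCV's contrib shape module:

```python
import cv2
import numpy as np

# Step 2: Acquisition - grab a frame from the default USB webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # Step 3: Segmentation - smooth, label skin pixels, remove noise,
    # fill small gaps, then keep the largest blob as the user's hand.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    kernel = np.ones((5, 5), np.uint8)
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # blob analysis: largest blob

        # Step 4: Pattern Recognition - compare the hand contour against
        # each posture stored in the visual memory (filled during step 1,
        # Initialization) using the Hausdorff distance; the lowest wins.
        visual_memory = {}  # posture name -> template contour (assumed prepared)
        matcher = cv2.createHausdorffDistanceExtractor()
        if visual_memory:
            best = min(visual_memory, key=lambda name:
                       matcher.computeDistance(hand, visual_memory[name]))
            print("Recognized posture:", best)  # step 5 would act on this
```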

8 FUTURE RESEARCH

Future research on this topic will consider occlusion, more complex primitives, and display quality. It will also consider new 3D technology algorithms, more complex gestures, and the combination of new 3D technology with complex gestures.

9 SCOPE

This innovation can be used in the future in various fields such as medicine, education, and entertainment.

10 CONCLUSION

The system will provide not only a 3D view but also 3D interaction, making the operating system more convenient to use. Most algorithms ignore the problem of occlusion because of the associated computational complexity; however, approximate algorithms for similar problems in computer graphics have recently been proposed, and we plan to investigate whether any of these can be adapted to our rendering. We would also like to look into using more complex primitives than points to represent objects. Finally, we plan to research the possibilities of our proposal with other motion-sensing input devices such as the Kinect.

ACKNOWLEDGMENT

We would like to thank our reviewers for their thoughtful comments and suggestions.

REFERENCES

[1] Simon Reeve and Jason Flock, "Basic Principles of Stereoscopic 3D," 2010.

[2] Sushmita Mitra and Tinku Acharya, "Gesture Recognition: A Survey," IEEE, 2007.

[3] Barry Bitters, "Advances in Desktop 3D-Stereoscopic Visualization of Geospatial Data," ISPRS Technical Commission IV & AutoCarto, 2010.

[4] Elena Sánchez-Nielsen, Luis Antón-Canalís, and Mario Hernández-Tejera, "Hand Gesture Recognition for Human-Machine Interaction," Journal of WSCG, 2003.

[5] Mark Hancock, Sheelagh Carpendale, and Andrew Cockburn, "Shallow-Depth 3D Interaction: Design and Evaluation of One-, Two- and Three-Touch Techniques," ACM Press, 2007.
