CS 6360 Virtual Reality - Assignment 3
Zach Gildersleeve
March 2, 2007
Head Tracked Viewing
This assignment attempts to implement head tracked viewing of an OpenGL rendering of a cube. By shifting the tracked position of the viewer's head and eyes, different angles of the cube should become visible. As we shall see, the implementation suffered some setbacks and did not fully work.
The first part of the assignment was to add rotation to the cube drawing routine. This was fully implemented in the previous assignment using three glRotate*() commands that extract the Euler angles from the tracker data and rotate around each respective axis. As previously mentioned, this method of rotation works in almost all situations, but experiences gimbal lock when the tracker is rotated 180 degrees around the global x axis until it is upside down. As this position would be highly unlikely for head tracked rotation, it is ignored. A rough sketch of the rotation step is shown below.
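As an illustration of this rotation step only: the sketch below applies three successive glRotatef() calls in x, y, z order. The ordering, the degree units, and the function name are assumptions for the sketch, not taken from the original code.

    #include <GL/gl.h>

    /* Apply the tracker's Euler angles as three successive rotations.
       The x-y-z ordering and degree units are assumptions; the actual
       order used in the assignment may differ. */
    void applyTrackerRotation(float angleX, float angleY, float angleZ)
    {
        glRotatef(angleX, 1.0f, 0.0f, 0.0f);   /* rotate about the x axis */
        glRotatef(angleY, 0.0f, 1.0f, 0.0f);   /* rotate about the y axis */
        glRotatef(angleZ, 0.0f, 0.0f, 1.0f);   /* rotate about the z axis */
    }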
Using a glut menu attached to the right mouse button, the user can select which tracker to use. The butterfly trackers are the most accurate, while the bar style tracker experiences a fair amount of inertial drift due to its reduced number of acoustic sensors. However, the shape of the bar tracker allows for better measurement of the computer display in tracker coordinates, so it is used for that purpose below. A minimal sketch of the menu setup follows.
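The sketch assumes standard GLUT menu calls; the callback name, entry labels, and tracker identifiers are hypothetical.

    #include <GL/glut.h>

    enum { TRACKER_BUTTERFLY = 1, TRACKER_BAR = 2 };
    static int activeTracker = TRACKER_BUTTERFLY;

    /* Menu callback: remember which tracker to poll and redraw. */
    void trackerMenu(int choice)
    {
        activeTracker = choice;
        glutPostRedisplay();
    }

    void buildTrackerMenu(void)
    {
        glutCreateMenu(trackerMenu);
        glutAddMenuEntry("Butterfly tracker", TRACKER_BUTTERFLY);
        glutAddMenuEntry("Bar tracker", TRACKER_BAR);
        glutAttachMenu(GLUT_RIGHT_BUTTON);   /* menu on the right mouse button */
    }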
Creation of the Viewing Frustum
Additional controls are added to the glut menu, allowing the user to select the lower left (LL) and upper right (UR) corners of the OpenGL window. Using the bar tracker, very precise coordinates can be found. For debugging purposes, these coordinates are hard coded in the program for the window's initial position.
The eye position in tracker space is returned by the tracker, and this point is projected onto the window by replacing the appropriate component of the eye position with the window's coordinate, since we assume the window to be vertical and aligned to a tracker space axis (which it very nearly is). This gives the coordinates of the projected eye, Ep. At this point error checking is possible: by aligning the tracker with the window corners, the projected eye measures out the correct 15cm sides of the OpenGL window. Likewise, the distance between the eye and the projected eye can be found as || E - Ep ||. Error checking for this distance also produces the correct results. A sketch of the projection and distance computation is given below.
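This sketch assumes the window plane sits at a constant y in tracker space, so y is the component replaced by the projection; the Vec3 type and function names are hypothetical.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Project the tracked eye position onto the window plane, assumed to
       lie at the window corners' y value in tracker space. */
    Vec3 projectEyeOntoWindow(Vec3 eye, Vec3 windowLL)
    {
        Vec3 ep = eye;
        ep.y = windowLL.y;   /* replace the depth component with the window plane */
        return ep;
    }

    /* Distance || E - Ep ||, used later as the near plane distance. */
    float eyeToWindowDistance(Vec3 eye, Vec3 ep)
    {
        float dx = eye.x - ep.x, dy = eye.y - ep.y, dz = eye.z - ep.z;
        return sqrtf(dx * dx + dy * dy + dz * dz);
    }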
The viewing frustum is set up from the projected eye position. In this implementation, the projected eye is used as the center point, and the frustum is built at +/- 7.5cm on all sides from that center. This produces a frustum with parameters left, right, bottom, top, near, far (L, R, B, T, N, F) corresponding to:
((UR.x - Ep.x) / 2.0, (LL.x - Ep.x) / 2.0, (UR.z - Ep.z) / 2.0, (LL.z - Ep.z) / 2.0, || E - Ep ||, 20.0), where the y component is the value discarded during the projection.
This frustum is then transformed so that || E - Ep || lies along the -z axis, with the eye position at the origin in OpenGL coordinates. A sketch of this frustum setup is shown below.
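The sketch reuses the same hypothetical Vec3 type as above and follows the report's divide-by-two factors and far plane of 20.0. It passes the LL-derived offsets as left/bottom and the UR-derived offsets as right/top, which is the conventional glFrustum ordering; the formula above lists them in the opposite order, so this is an assumption about the intent rather than a copy of the original code.

    #include <GL/gl.h>
    #include <math.h>

    typedef struct { float x, y, z; } Vec3;   /* same hypothetical type as above */

    /* Build the off-axis frustum from the projected eye Ep and the window
       corners LL and UR, with near = || E - Ep || and far = 20.0. */
    void setHeadTrackedFrustum(Vec3 eye, Vec3 ep, Vec3 LL, Vec3 UR)
    {
        float dx = eye.x - ep.x, dy = eye.y - ep.y, dz = eye.z - ep.z;
        float nearDist = sqrtf(dx * dx + dy * dy + dz * dz);   /* || E - Ep || */

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum((LL.x - ep.x) / 2.0, (UR.x - ep.x) / 2.0,   /* left, right */
                  (LL.z - ep.z) / 2.0, (UR.z - ep.z) / 2.0,   /* bottom, top */
                  nearDist, 20.0);                            /* near, far   */

        /* With the modelview reset, the eye sits at the origin looking down
           -z, so || E - Ep || lies along the -z axis as described above. */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }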
At some step, probably in the construction of the frustum based on the projected eye position and that frustum's translation to OpenGL coordinates, something went wrong. Many different options were tried, but all typically resulted in a distorted frustum, with the cube stretched out along one or more directions, or a cube that was scaled very close to the eye position. The images below show some of these errors.
Comparison of Real and Virtual Objects
The best results came when the cube was very large. I was able to move my head (with the tracker representing my eye position) and see down a side of the cube when looking in that direction, as might be expected. This is roughly illustrated in the images below, which show the cube viewed from the left side of the window and from the right side of the window.
However, the scale is distorted to such a degree that comparison between a real cube and the OpenGL cube is difficult.
One issue that I was able to identify, despite the implementation not fully working, was the jittery nature of the head tracked movement compared with reality. In reality, our eyes and brain are very successful at smoothing out the slight discontinuities and vibration that manifest as noise, so a solid cube appears to be solidly placed in space no matter how our eyes move. Our eyes and the image processing in our brain provide feedback that cancels out this noise. This feedback is missing for the head tracked display, and the problem is exacerbated by the awkward nature of the tracker device. Even with a very solidly mounted head tracker, a helmet for example, I would still expect a little noise. I think the eye position from a head mounted tracker would need to be damped slightly to remove some of this noise; one possible approach is sketched below.
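One possible form of such damping is a simple exponential smoothing filter applied to the raw tracker samples. The sketch below is illustrative only and was not part of the implementation; the Vec3 type and the smoothing factor are assumptions.

    typedef struct { float x, y, z; } Vec3;   /* same hypothetical type as above */

    /* One step of exponential smoothing: blend the newest raw tracker sample
       toward the previous smoothed value. alpha in (0, 1]; smaller values
       give a steadier but laggier eye position. */
    Vec3 smoothEyePosition(Vec3 previous, Vec3 raw, float alpha)
    {
        Vec3 s;
        s.x = previous.x + alpha * (raw.x - previous.x);
        s.y = previous.y + alpha * (raw.y - previous.y);
        s.z = previous.z + alpha * (raw.z - previous.z);
        return s;
    }

Called once per tracker update with an alpha of roughly 0.1 to 0.3, this would trade a small amount of lag for a steadier image.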
A better solution would be to track the actual position and lookat point of each eye and build the frustum from that point. Perhaps only the physical lookat point would need to be added, and the tracked head position might be sufficient for the rest. This would at least attach the eye to the tracker; using the head as the tracked position seems like a good idea for identifying where the user's head is, but the actual perspective frustum for each eye seems to require a much more delicate positional measurement than the current setup offered.