Master Thesis
My master thesis started on 11-11-2013. I will be doing my research at the company VanRiet, based in Houten. The research will take about 30 weeks or longer.
History - Quality & Efficiency
I have worked at VanRiet part-time for several years. In the summer of 2011 I started my first programming project for the service department. The service department inspects the systems of our customers: in practice they checked the entire system and wrote down any deviations. Then, a few weeks later, a report was written and sent to the customer.
To improve the quality of the inspections, every part of the system should be checked against a standard, and we also wanted to know which parts were approved during the inspection. To achieve this I introduced a tablet application that is easy to use during an inspection. I also created a reporting tool to generate reports from the digital inspections, which improved efficiency as well.
The resulting set of applications was called the Service Inspection Suite, or SIS.
SIS 2.0 - User Interaction & 3D Models
With the project finished earlier this year, we decided it was time for the real deal: improving and expanding the project. This follow-up project is called SIS 2.0 and is in progress as we speak. The major improvements are in user interaction. For example, a 3D view has been implemented, as shown in the image below.
Call of the industry
![AR](/uploads/1/3/7/6/13760150/3017098.png?420)
Nowadays the service engineers use a tablet for their inspection tasks. However, a tablet is not that user friendly: when they want to take a picture, or are lying under a conveyor belt and want to report something, they have to get up and grab the tablet. It would be much easier for them to have the interface projected in front of them, so they don't need to hold the device in their hands.
Augmented reality is the solution here. What we want is what I discussed in the augmented reality section. Solving this would be a huge contribution to the industry.
Augmented reality
Several ingredients are needed to be able to create an augmented reality system. Like I explained in the Back to Reality section those are:
- A virtual environment
- A device able to display both the real and virtual world
- An absolute positioning system which is super accurate
- A depth measurement system to handle visual occlusions
We need to know the exact position and orientation of the Oculus Rift's displays in the real world to correctly merge the virtual world in it.
Indoor Positioning system
The first constraint on the research is that the localization only has to work properly indoors. This eliminates large landscape environments with a low density of landmarks. What "properly" means exactly is not yet defined.
The idea is to create the indoor positioning system (IPS) using the inertial measurement unit (IMU) of the Oculus Rift. However, drift occurs due to errors in the sensors' measurements: each position is calculated from the previous one, so the errors accumulate over time, causing drift.
During the research I will focus on removing this drift by recalibrating the position using another sensor before the drift becomes visually noticeable. For example, I might research the possibilities of pressure sensors, simultaneous localization and mapping (SLAM) using the depth measurements, ramp detection and laser scanners. An important aspect of my research will be brainstorming about possibilities and sharing experiences with other professionals and companies.
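To give an idea of how such an aiding sensor could cancel the drift, here is a minimal 1-D sketch of a complementary filter that blends a drifting IMU position estimate with a drift-free absolute fix. All numbers are made up for illustration, and `absolute_fix` stands in for whichever aiding sensor (SLAM, laser scanner, ...) ends up providing the correction:

```python
def fuse(imu_position, absolute_fix, alpha=0.9):
    """Complementary filter: trust the smooth IMU estimate short-term,
    the drift-free absolute fix long-term."""
    return alpha * imu_position + (1.0 - alpha) * absolute_fix

true_pos = 0.0    # where the person actually is (1-D, in meters)
fused_pos = 0.0   # our filtered estimate

for step in range(100):
    true_pos += 0.5                      # person walks 0.5 m per step
    imu_pos = fused_pos + 0.5 + 0.01     # integrated IMU: 1 cm drift per step
    fused_pos = fuse(imu_pos, true_pos)  # absolute fix pulls the estimate back

# Without fusion the error would grow to 1 m here; with fusion it stays
# bounded at roughly alpha/(1-alpha) times the per-step drift.
print(abs(fused_pos - true_pos))
```

The point of the sketch is only the shape of the solution: the IMU keeps the estimate smooth between fixes, while the (possibly slow or noisy) absolute sensor keeps the error from accumulating.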
The most important aspect of the IPS is accuracy. Next to that, latency is important, according to the developers of the Oculus Rift. The ideal case is where all available time can be spent rendering the virtual world.
Goal
The goal is to create a wearable absolute indoor positioning system, without external hardware like beacons. This system contributes to science, because knowledge about positioning systems usable for augmented reality applications is acquired. For the industry the main contribution is the natural user interface of an augmented reality application built on the IPS. Such an application can save a lot of time and money when used for service, training and simulation purposes.
For companies, these kinds of state-of-the-art technological innovations also contribute to the company's image and marketing plans. Who doesn't love the newest gadgets, right?
What I Won't Do
The thesis focuses purely on the positioning part, thus merging the virtual and real world correctly for visual purposes. I won't consider user interaction, depth/occlusion of the virtual world, or the virtual world in terms of realism, animations and lighting. The virtual world will serve as a proof of concept and will consist of one or more static objects.
Oculus Rift & stereo camera setup
The Oculus Rift is a virtual reality head-mounted display (HMD). The HMD is equipped with an IMU by default, and will be equipped with a stereo camera setup.
To get an impression of the system you can check out the AR-Rift project by William Steptoe. You might also like to check out his demo video (start around 7:12). I also need to make the Rift work on USB power or a battery to be able to carry it around.
Inertia-based Navigation
Walking around based on IMU data is great, but the data signals are far from perfect. Therefore filtering is required. Filtering the accelerometer data is one of the important subjects I need to research. From the accelerations, displacements can be calculated using basic physics. You know: v(t) = v(t-1) + a(t) * dt and s(t) = s(t-1) + v(t) * dt. Luckily, drift can be estimated through filtering as well. How exactly is still magic to me, but even if the drift is largely canceled (~90%) by filtering, some of it remains. I need a way to deal with that remaining drift in as little time as possible.
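As a quick illustration of why this drift is such a problem, the sketch below double-integrates exactly those two formulas while the wearer stands perfectly still. The sample rate and the constant accelerometer bias are made-up but plausible values for a consumer MEMS sensor:

```python
dt = 0.01     # 100 Hz IMU sample rate (assumption)
bias = 0.05   # constant accelerometer error in m/s^2 (assumption)

v = 0.0       # velocity estimate
s = 0.0       # position estimate

for _ in range(1000):  # 10 seconds of standing completely still
    a = 0.0 + bias     # true acceleration is zero; the sensor reports its bias
    v += a * dt        # v(t) = v(t-1) + a(t) * dt
    s += v * dt        # s(t) = s(t-1) + v(t) * dt

print(s)  # roughly 2.5 m of position drift after only 10 s
```

Because the bias is integrated twice, the position error grows roughly quadratically with time: a tiny 0.05 m/s² error already puts the estimate about 2.5 m off after ten seconds, which is why recalibration against another sensor is unavoidable.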
Features versus SLAM
Because using external hardware isn't allowed, a lot of aiding-system options are eliminated. The two most straightforward options are feature tracking and SLAM. With feature tracking, the displacement of a set of features over time is used to estimate the velocity of the cameras. The SLAM approach is awesome, because a virtual representation of the environment is built and the location relative to this environment is known. However, there is a huge downside to SLAM: it takes a lot of time! That is why SLAM loses the battle. Other (way cheaper) methods exist to retrieve the location relative to the operating environment, should it be required to implement one during the project.
Only one question about this feature-tracking velocity estimation keeps poking my mind: "How can a 3D velocity vector be accurately estimated from a 2D image?" During my experimentation project with MonoSLAM I saw that depth estimation relies on an expensive algorithm. In my mind it seems much better to estimate 3D velocities from a depth image, produced by the stereo camera setup. However, I couldn't find any paper about such an approach yet. Might be interesting to look into, don't you think?
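To sketch what I mean: once every tracked feature has a depth value, two consecutive 2D observations can be back-projected through a standard pinhole camera model and simply differenced into a 3D velocity. The focal length, principal point, frame rate and feature coordinates below are all made-up example values, not real calibration data:

```python
def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole back-projection: pixel (u, v) with depth z -> 3D camera-frame point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

dt = 1.0 / 30.0  # 30 fps stereo camera (assumption)

# One tracked feature seen in two consecutive frames:
# pixel coordinates plus the depth from the stereo setup.
p0 = backproject(330.0, 240.0, 2.00)  # frame t-1
p1 = backproject(335.0, 240.0, 1.95)  # frame t

# Finite-difference the two 3D points to get a camera-relative velocity.
velocity = tuple((b - a) / dt for a, b in zip(p0, p1))
print(velocity)  # 3D velocity of the feature relative to the camera, in m/s
```

This avoids the expensive monocular depth estimation entirely: the stereo hardware supplies the depth, and the per-feature velocity falls out of two back-projections and a subtraction.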