Merging Live In-Car Head Mounted Camera feed with VR Vision – Must it be Green-Screening?
- February 28, 2017 at 3:00 pm #892
Right now, this is one piece I don’t have a good fix on, but I know it is doable. There are two basic issues: 1) can I map VR imagery into spaces in a video overlay (or overlay a masked video feed onto VR imagery)? and 2) can the video feed be in 3D? They are separate topics really, but I’m logging my thoughts together for now.
What I need to do is have a head-mounted camera (or cameras) so people believably see themselves driving the actual car, with the VR imagery coming in through the windows and mirrors as it should. This, combined with geometrically fixed centring of the imagery and having dynamic whole-of-body forces line up with the imagery displayed, is what is going to virtually eliminate simulator sickness.
For Q1) Green-screening is the conventional way and is apparently straightforward; I just haven’t checked the details. But it would mean sticking green material to all the windows and draping a green sheet behind the seats – not particularly attractive and a little claustrophobic. Maybe that is acceptable for prototyping, with actual development funds spent later to get something different. But of course I want this thing to be as good as it possibly can be in all areas, so what I am hoping to do is a much tidier version than green-screening. A caveat: I would rather apply known technology in this area than develop it, so I want to go with what works.
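For illustration, the core of a chroma key is just a per-pixel test. This is a minimal NumPy-only sketch under my own assumptions (8-bit RGB frames, hand-picked thresholds, function name mine); a production keyer would work in HSV and feather the edges:

```python
import numpy as np

def chroma_key(camera_frame, vr_frame, g_min=150, rb_max=100):
    """Composite vr_frame into the 'green' regions of camera_frame.

    Both frames are HxWx3 uint8 RGB arrays of the same size. A pixel
    counts as green-screen if its green channel is high and its red and
    blue channels are both low; real footage would need tuned thresholds.
    """
    r, g, b = camera_frame[..., 0], camera_frame[..., 1], camera_frame[..., 2]
    mask = (g >= g_min) & (r <= rb_max) & (b <= rb_max)
    out = camera_frame.copy()
    out[mask] = vr_frame[mask]   # VR imagery shows through the keyed areas
    return out
```

So the VR scene only ever appears where the physical green material sits – which is exactly why all the windows would need covering.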
For Q2) Getting 3D video superimposed seems to be all development work, but Windows Mixed Reality headsets are shipping as dev kits and may be a way.
Right now, one choice is to take the live camera feed from the HTC Vive headset’s front-facing camera and superimpose it over the VR-generated imagery. This is a mono image, which may be enough, but it still needs to be combined with the VR imagery.
I hope to map the cab’s screens, windows and mirrors as displayable areas, then send the Vive headset’s camera feed to the user and superimpose the PC-generated augmented-reality image. This approach would use image masking to deal with the bits of the cabin that aren’t there: the rear windows would be defined by a mask, with real content coming in on top of the fake car-interior mask. The mask would be generated by taking photos of the interior of an actual car – the same as the vehicle cabin fitted (or not!) – and marrying them up with nothing more elaborate than a bit of PaintShop Pro mesh stretching to get an acceptable mask.
If a single camera feed is not convincing enough, it may still be fine for prototyping, but for production roll-out two cameras may be needed, placed at eye distance apart. In that case I would most likely use the Oculus Rift headset, as it is better in other regards, and add the camera pair. I wonder if the cameras out of a couple of old mobile phones could be stuck onto the headset and the output fed into an HDMI capture card, or whatever… But by the time I get there, the aforementioned Windows Mixed Reality headsets may be a viable solution.
I have had a bit of a look around via Google and don’t see anything that gives pointers to the best way to do the tidy version. While I will join in on developer forums (and have been on the Oculus one for a while now) to see if there is available knowledge I can access, this whole topic is very important and could easily add a few months to the final prototype development. But the good news is that this is all past the “Demo” milestone. So hopefully rapid buy-in to the remaining issues will occur once people can sit in this thing, drive around a Need for Speed (or other) environment and believe they are really there.
- June 27, 2017 at 4:57 pm #23578
One prospective solution to this issue is to overlay the actual car interior at 1:1 scale, so there is a VR version of the actual thing a person is sitting in. Then use these Leap Motion http://www.leapmotion.com gizmos to show your actual hands, so whatever they are doing will directly affect what you see. Maybe this is a way…? I have a Leap Motion controller coming, so I can test the theory anyway…
Vive Trackers https://www.vive.com/ca/vive-tracker/ are another potentially useful item, to track the gear-lever position in a manual, or even to do “subtractive positioning” if “frame of reference” issues arise.
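The “subtractive positioning” idea can be sketched with plain 4×4 rigid transforms: invert the world pose of a tracker bolted to the cab and apply it to the headset’s world pose, leaving the headset expressed in the vehicle’s frame of reference. Function names here are mine, and real tracking APIs report poses in their own matrix layouts, so treat this as an illustration of the maths only:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_pose(T):
    """Invert a rigid transform cheaply: R -> R^T, t -> -R^T t."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def headset_in_vehicle_frame(T_world_headset, T_world_tracker):
    """'Subtract' the tracker's motion: the headset pose re-expressed
    relative to a tracker rigidly mounted in the cab."""
    return invert_pose(T_world_tracker) @ T_world_headset
```

With this, any motion-platform movement the tracker and headset share cancels out, which is the point of anchoring the frame of reference to the cab.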
This is the joy of prototyping. The answer is “out there” somewhere; I just have to find it and figure out the smartest way to deal with issues as they arise. And will!!!
- April 18, 2018 at 1:08 pm #37178
And here it is, coming from this:
By the time we need it for the visual-interfacing job, we will be able to spatially map the interior of the vehicle and set up our portals to the world. Beautiful. This is how the last tricky-looking technical problems will be solved!
At least, if the cameras and software are good enough – but there is always next year’s model if not. AR (Augmented Reality) has a lot of pent-up demand driving it, so the hardware and software will come…
- This reply was modified 11 months ago by Vince Sunter.