This is like something straight out of Cyberpunk 2077 - the braindances investigation scenes.
More like the opposite. Point cloud data captured by various means has existed for a long time, with the raw data visualized more or less just like this. Sci-fi movies and games use the look of raw visualization as something futuristic and computer-tech. Just like wireframe on a black background, although that one is getting partially downgraded to retro sci-fi status since drawing 3D wireframes isn't hard anymore. It started when any 3D computer graphics, even basic wireframe, was futuristic and not every movie could afford it; some faked it with analog means. Any good sci-fi author takes inspiration from real-world technology and extrapolates from it, often before the general population widely recognizes that technology. Once something reaches the state of a consumer product, beyond just researchers and trained professionals, the visuals tend to get more polished and you lose some of the raw, purely functional, engineering style.
It reminds me of that as well.
What is the actual objective of this: is it solving a real issue, or is it a solution in search of a problem? It seems like a lot of energy to replicate a lidar mapping system. It's not like you can expect accurate dimensions from this approximate guesswork, even before the expected hallucinations add to the inaccuracy.
Video cameras are much cheaper and easier to use than LIDAR: anyone can just pull out their phone, take a video, and send it to this algorithm to get a reasonable point cloud of the environment. Sure, if you want an exact model of an environment and you have the time and money, LIDAR would give better results, but this is about doing more with less.
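The usual recipe behind "video in, point cloud out" is per-frame depth estimation plus camera intrinsics. A minimal sketch of the back-projection step, with toy numpy-only data and made-up intrinsics (this is the generic pinhole-camera math, not the paper's actual pipeline):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into an N x 3 point cloud
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat wall 2 m away seen by a 4x4 "camera"
# with hypothetical intrinsics.
depth = np.full((4, 4), 2.0)
cloud = unproject_depth(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(cloud.shape)  # (16, 3)
```

A real pipeline would do this per video frame and fuse the clouds using estimated camera poses; the quality (and the hallucinations) comes from the learned depth, not from this projection step.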
3D reconstruction of old spaces which no longer exist seems like a clear use case to me. There's loads of old videos of driving down a street in the 80s, or neighborhoods in cities which got replaced.
I can imagine future iterations of this which bring together other stills of the same space at that time to augment the dataset. Then perhaps another pass to fill in gaps with likely missing content based on probability or data from say the same street 10 years later.
It won't be 100% real, but I think it'd be very cool to be able to have a google-street view style experience of areas before google street view existed.
> it'd be very cool to be able to have a google-street view style experience of areas before google street view existed.
Now do Kowloon Walled City.
N00b question from me, perhaps, but how easy is it to mount and run Lidar on aerial drones?
It's easy but it's not cheap. Well, price is relative but capturing video is certainly cheaper.
Also, I am not sure how heavy LIDAR units are, but remember that the heavier the payload the more the flight time is reduced. Some drones can only have a single payload, so if you also want to capture (high-res) video/imgs you need to fly again.
It all depends on the use-case.
The most available lidar is found on your iPhone, but the results are orders of magnitude less detailed than those derived from photogrammetry. However, one advantage is that lidar is not confused by reflections.
Huh? LIDAR absolutely is confused by reflections. Not always the reflections you can see (because often it’s using IR wavelengths) but nonetheless, reflections.
Very cool. Doesn't seem like they've actually released the code:
> This is a reimplementation of LoGeR; complete code and models will be released upon approval.
I don't understand why it's a reimplementation either?
I would guess it's "research" code anyway so not really usable unless you are an expert.
Very interesting paper. I can see Street View using it to perfect the 3D analysis of the photo/video they capture with their Google car. What a wonderful time we are living in, specifically in video-to-3D reconstruction. Every month, a new brick is put in place. Super!
Street View cars added Velodyne LiDAR around 2017 [0][1], but it's optional. I found no data on the 'LiDAR vs. image-only' percentage.
[0] https://arstechnica.com/gadgets/2017/09/googles-street-view-...
Truly don't understand what is happening in the heads of these researchers. Can't they see how the main use of this is going to be mass surveillance?
This seems to be much more robotics / autonomous vehicle focused? I don't quite see the mass surveillance angle you get from this that you don't already get from cheap ubiquitous cameras, basic computer vision and networking (aka Flock).
I think you've made the erroneous assumption that the researchers care. I work in 3D reconstruction and I've not really seen too many people care about the actual use case, and indeed have had some friends join defence.
I mean, I think if you want to perform mass surveillance, you can do it far cheaper and more efficiently via facial recognition, mobile phone surveillance and a variety of other methods.
If you want reconstruction and training of robotic movement, this is far more appropriate. I believe we're going to see robots being able to "dream" in terms of analysing historical video information on spaces and improving movement and navigation.
So not mass surveillance, but probably there's a future of mass subjugation using robot enforcement.
I'm not sure what you mean. The input video feed already constitutes "surveillance". You'd need cameras everywhere and if you have a camera, you can also just use regular models like China already does.