By Paula Parisi, October 8, 2024
Apple has released a new AI model called Depth Pro that can create a 3D depth map from a 2D image in under a second. The system is being hailed as a breakthrough that could transform how machines perceive depth, with implications for industries from augmented reality to self-driving vehicles. According to Apple, "the predictions are metric, with absolute scale," without relying on the camera metadata typically required for such mapping. On a consumer-grade GPU, the model can produce a 2.25-megapixel depth map from a single image in only 0.3 seconds. Continue reading Apple Advances Computer Vision with Its Depth Pro AI Model
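As a rough illustration of what "metric depth from a single image" looks like in practice, the sketch below shows how one might run such a model using the openly released Depth Pro code. The package name (depth_pro), function names, and the sample file path are assumptions based on that public release, not details reported in the story above.

```python
# Illustrative sketch (assumed API from Apple's open-source Depth Pro release,
# not from the article): metric monocular depth estimation from one RGB image.
import depth_pro  # assumed package name from the public repository

# Build the model and its preprocessing transform, then switch to inference mode.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; the loader also returns the focal length in pixels when
# EXIF metadata is available (the model can estimate it otherwise).
image, _, f_px = depth_pro.load_rgb("example.jpg")  # hypothetical input file
image = transform(image)

# Run inference: the prediction contains metric depth (in meters) and the
# estimated focal length in pixels.
prediction = model.infer(image, f_px=f_px)
depth_m = prediction["depth"]            # per-pixel depth in meters
focal_px = prediction["focallength_px"]  # estimated focal length in pixels

print(depth_m.shape, float(depth_m.min()), float(depth_m.max()))
```

Because the output is in absolute meters rather than relative disparity, a result like this could in principle be used directly for tasks such as measuring distances in AR scenes, which is the capability the article highlights.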
By Debra Kaufman, March 4, 2020
Facebook’s 3D Photos feature, which uses depth data to create images that can be examined from different angles via virtual reality headsets, is now available on recent handsets with a single camera, including the Apple iPhone 7 or later and midrange (and above) Android phones. According to Facebook, advances in machine learning techniques have made this possible. The company first unveiled 3D Photos in late 2018, when it required either a dual-camera phone or a depth map file on the desktop. Continue reading Facebook’s 3D Photos Now Available for All Latest Handsets
By Meghan Coyle, June 19, 2014
Google’s Project Tango is developing 3D smartphones and tablets that can not only render locations and objects but also record 3D images and videos. Mantis Vision’s technology, which will be used in the Google Project Tango devices, creates a depth map of a scene so that users can view an image from different perspectives and add new backgrounds and other 3D visual effects. Other electronics companies are investing in the Mantis technology as well. Continue reading Google 3D Smartphones Will Run on Mantis Vision Technology