MENLO PARK, Calif., June 15, 2020 (GLOBE NEWSWIRE) -- Helm.ai, a developer of next-generation AI software, today announced a breakthrough in unsupervised learning technology. The new methodology, called Deep Teaching, enables Helm.ai to train neural networks without human annotation or simulation. Deep Teaching has far-reaching implications for the future of computer vision and autonomous driving, as well as for industries including aviation, robotics, manufacturing and even retail.
Artificial intelligence, or AI, is commonly understood as the science of simulating human intelligence processes with machines. Supervised learning refers to training neural networks to perform certain tasks using labeled examples, typically provided by a human annotator or a synthetic simulator, while unsupervised learning enables AI systems to learn from unlabelled data, inferring structure and producing solutions without pre-established input and output patterns.
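The distinction can be sketched with a toy example. This is purely illustrative and has nothing to do with Helm.ai's actual method: it separates the same one-dimensional readings first with labels (supervised) and then without them (unsupervised, via a simple two-means clustering step).

```python
# Illustrative sketch of supervised vs. unsupervised learning
# (toy example, not Helm.ai's Deep Teaching method).

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]   # unlabeled sensor readings
labels = [0, 0, 0, 1, 1, 1]             # human annotations (supervised case only)

# Supervised: use the labels to place a decision threshold between the classes.
lo = max(x for x, y in zip(data, labels) if y == 0)
hi = min(x for x, y in zip(data, labels) if y == 1)
threshold_supervised = (lo + hi) / 2

# Unsupervised: infer the two groups from the data alone (1-D 2-means).
centers = [min(data), max(data)]
for _ in range(10):
    groups = [[], []]
    for x in data:
        # Assign each point to its nearest center.
        groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
    centers = [sum(g) / len(g) for g in groups]
threshold_unsupervised = sum(centers) / 2

print(threshold_supervised, threshold_unsupervised)
```

Both approaches recover a similar decision boundary, but the unsupervised one needed no annotations — the point the release makes at much larger scale.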
Deep Teaching is the next cutting-edge development in AI. It enables Helm.ai to train neural networks in an unsupervised fashion, delivering computer vision capabilities that surpass state-of-the-art performance with unprecedented development speed and accuracy. When applied to autonomous driving, Deep Teaching allows Helm.ai to train on vast volumes of data more efficiently, without the need for large-scale fleets or armies of human annotators, edging closer to fully self-driving systems.
For example, in the first use-case of its Deep Teaching technology, Helm.ai trained a neural network to detect lanes on tens of millions of images from thousands of dashcam videos from across the world, without any human annotation or simulation. The resulting neural network is robust out of the box to a slew of corner cases well known to be difficult in the autonomous driving industry, such as rain, fog, glare, faded or missing lane markings and varied illumination conditions. As a sanity check, Helm.ai used this neural network to top public computer vision benchmarks with minimal engineering effort.
In addition, Helm.ai has built a full-stack autonomous vehicle that can steer autonomously on steep and curvy mountain roads using only one camera and one GPU (no maps, no Lidar and no GPS), without ever training on data from those roads, while performing well above today's state-of-the-art production systems. Since then, Helm.ai has applied Deep Teaching throughout the entire AV stack, including semantic segmentation for dozens of object categories, monocular vision depth prediction, pedestrian intent modeling, Lidar-Vision fusion and automation of HD mapping. Deep Teaching is agnostic to the object categories or sensors at hand, making it applicable well beyond autonomous driving.
Helm.ai has very quickly achieved numerous breakthroughs in autonomous driving technologies, producing systems that offer higher levels of accuracy, agility and safety, and solving corner cases at a small fraction of the cost and time required by traditional deep learning methods.
“Traditional AI approaches that rely upon manually annotated data are wholly unsuited to meet the needs of autonomous driving and other safety-critical systems that require human-level computer vision accuracy,” said Helm.ai CEO Vlad Voroninski. “Deep Teaching is a breakthrough in unsupervised learning that enables us to tap into the full power of deep neural networks by training on real sensor data without the burden of human annotation nor simulation.”
Helm.ai Video Resources:
● Helm.ai Intro Video:
https://youtu.be/9ezWa-uqUcY
● Page Mill Road In-Cabin Footage:
https://youtu.be/qPcvWBW_IUY
● Helm.ai Deep Teaching Clip 1:
https://youtu.be/nLHoU31DnKg
● Helm.ai Deep Teaching Clip 2:
https://youtu.be/rCWcTIVBpSY
A major limitation of existing AV approaches is safety. Traditional AI approaches to autonomous driving are highly capital-inefficient and cannot produce robust AI systems able to interpret every potential scenario with human-level accuracy, even on budgets of billions of dollars.
AI systems that don't physically interact with the world, such as those designed to inspect products for defects or search the internet, can operate at a success rate of only 90-99% without serious consequences. With self-driving vehicles, human lives hang in the balance, and any system performing at less than 99.999999% accuracy could be catastrophic. These stringent safety requirements, combined with the limitations of traditional AI approaches, have prevented the mass deployment of self-driving vehicles. Deep Teaching tackles the core safety issue head-on by allowing economical training on huge datasets of images and other sensor data, providing a substantial advancement for the autonomous driving industry.
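The gap between those two accuracy regimes can be made concrete with a quick calculation. The figures below are illustrative only, converting the release's stated accuracies into expected failure counts over a million decisions:

```python
# Illustrative arithmetic on the accuracy figures cited in the release.

def failures_per(n_decisions, accuracy):
    """Expected number of failures over n_decisions at the given accuracy."""
    return n_decisions * (1.0 - accuracy)

# A non-safety-critical system at 99% accuracy:
# roughly 10,000 failures per million decisions.
print(failures_per(1_000_000, 0.99))

# A safety-critical system at 99.999999% ("eight nines") accuracy:
# roughly 0.01 failures per million decisions, i.e. about 1 per 100 million.
print(failures_per(1_000_000, 0.99999999))
```

The ratio between the two is six orders of magnitude, which is why accuracy requirements that are tolerable for a search engine are far from sufficient for a self-driving vehicle.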
"Helm.ai's self-driving technologies are uniquely suited to deliver on the potential of autonomous driving," said Quora CEO Adam D'Angelo. "I look forward to the advances the team will continue to make in the years to come and am excited to have invested in the company."
While Helm.ai is currently applying its technology to the development of its L2+ and L4 autonomous driving software, Deep Teaching holds promise for the future of artificial intelligence and computer vision at large. Aviation, robotics and medical imaging are just a few of the industries Deep Teaching can help revolutionize.
About Helm.ai
Helm.ai is building the next generation of AI technology for automation. Founded in November 2016 in Menlo Park, the company has re-envisioned the way neural networks learn to understand the real world to make AI-based applications cost-effective, scalable, and profoundly powerful. For more information on Helm.ai, including its products, SDK and open career opportunities, visit www.helm.ai or connect with Helm.ai on LinkedIn.
Media Contact
Please reach out to Vanessa Camones at vanessa@anycontext.com or (510) 999-4383 with any questions or inquiries.
Videos accompanying this announcement are available at:
https://www.globenewswire.com/NewsRoom/AttachmentNg/3ba36f16-3311-4d75-a81e-d5a35fcbee38
https://www.globenewswire.com/NewsRoom/AttachmentNg/b61f963a-0311-480f-88bf-e85200b91166
https://www.globenewswire.com/NewsRoom/AttachmentNg/d572f5a9-6a73-4ed0-a223-5504206ea3ec
https://www.globenewswire.com/NewsRoom/AttachmentNg/6035e74d-3094-4c53-9767-81c277b96df2