Wednesday, March 8, 2017 at the Sprint Accelerator, 210 W. 19th Terrace, Kansas City, MO
Join the KCMLG for a presentation by Nvidia on the groundbreaking improvements in Machine Learning through the use of GPUs.
Data scientists in both industry and academia have been using GPUs for machine learning to make groundbreaking improvements across a variety of applications including image classification, video analytics, speech recognition and natural language processing. In particular, Deep Learning – the use of sophisticated, multi-level "deep" neural networks to create systems that can perform feature detection from massive amounts of unlabeled training data – is an area that has been seeing significant investment and research.
Although machine learning has been around for decades, two relatively recent trends have sparked its widespread use: the availability of massive amounts of training data, and the powerful, efficient parallel computing provided by GPUs. GPUs are used to train these deep neural networks on far larger training sets, in an order of magnitude less time, with far less datacenter infrastructure. GPUs are also used to run the trained models for classification and prediction in the cloud, supporting far greater data volume and throughput with less power and infrastructure.
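To make that concrete, here is a minimal sketch (not part of the Nvidia presentation) of what "training on the GPU" looks like in practice. It assumes the PyTorch library and uses a placeholder network and synthetic data purely for illustration; the only GPU-specific step is moving the model and tensors to the CUDA device, after which the same training loop runs on either processor.

```python
# Minimal sketch: the same training loop runs on CPU or GPU; moving the model
# and data to the GPU is what provides the speedup described above.
# Assumes PyTorch is installed; model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small multi-layer ("deep") network standing in for a real model.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real training data.
inputs = torch.randn(128, 784).to(device)
targets = torch.randint(0, 10, (128,)).to(device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()   # gradients are computed on the GPU when one is available
    optimizer.step()
```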
Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top-tier research institutions in data science and machine learning. With thousands of computational cores and 10-100x application throughput compared to CPUs alone, GPUs have become the processor of choice for data scientists working with big data.