AI Frontiers Conference: Applied Deep Learning
November 3-5, 2017, Santa Clara Convention Center
Our Topics
Personal Assistants
The exciting future of chatbots with deep reinforcement learning
Robots
What is the status of robots? How do they become smarter?
Deep Learning Breakthrough
The major breakthroughs in deep learning algorithms and their implications.
Video Analysis
How to apply deep learning to understand videos? What are the advances in computer vision?
Autonomous Driving
A look at future self-driving cars. The role of deep learning in autonomous driving.
Games
The growing presence of deep learning in gameplay and its impact on future games.
Speakers
Andrew Ng, Founder, Deeplearning.ai
Day 1, 9:10-10:00am: AI is the New Electricity
Similar to the rise of electricity starting about 100 years ago, AI today is beginning to transform every major industry. This presentation will discuss how AI can transform your business, share major technology trends and thoughts on where your biggest future opportunities may lie, and describe best practices for incorporating AI, machine learning, and deep learning into your organization. These ideas will be illustrated with examples that haven't been presented elsewhere.
Speaker Bio
Dr. Andrew Ng is a globally recognized leader in AI (Artificial Intelligence). He was until recently Chief Scientist at Baidu, where he led the company's ~1300-person AI Group and was responsible for driving the company's global AI strategy and infrastructure. He was also the founding lead of the Google Brain team. Dr. Ng is Co-Chairman and Co-founder of Coursera, the world's leading MOOC (Massive Open Online Courses) platform, and an Adjunct Professor in Stanford University's Computer Science Department. Dr. Ng has authored or co-authored over 100 research papers in machine learning, robotics, and related fields. He holds degrees from Carnegie Mellon University, MIT, and the University of California, Berkeley.
Xuedong Huang, Chief Scientist of Speech & Language, Microsoft
Day 1, 7-9pm: Lessons Learned in Advancing Conversational Assistants
Speech and language technologies have benefited tremendously from the latest progress in machine learning and knowledge engineering. I will share the major lessons learned in advancing modern conversational assistants, from Cortana to Customer Support Services.
Speaker Bio
Dr. Xuedong Huang is a Microsoft Technical Fellow in Microsoft AI and Research, where he leads Microsoft's Speech and Language Team. As Microsoft's Chief Speech Scientist, he led the team that achieved a historic human-parity milestone in conversational speech recognition in 2016. In 1993, Huang joined Microsoft to found the company's speech technology group. As general manager of Microsoft's spoken language efforts for over a decade, he helped bring speech recognition to the mass market by introducing SAPI to Windows in 1995 and Speech Server to the enterprise call center in 2004. Prior to his current role, he spent five years in Bing as chief architect, working to improve search and ads. Before Microsoft, he was on the faculty at Carnegie Mellon University and achieved the best performance across all categories in the 1992 DARPA speech recognition benchmark. He received the Allen Newell research excellence leadership medal in 1992 and an IEEE Best Paper Award in 1993. He is an IEEE and ACM Fellow, was named Asian American Engineer of the Year (2011) and one of Wired Magazine's 25 Geniuses Who Are Creating the Future of Business (2016), holds over 100 patents, and has published over 100 papers and two books.
Alex Acero, Sr. Director, Apple Siri
Day 1, 10:00-11:30am: Deep Learning in Siri
Siri brought personal assistants to the mainstream after its introduction on the iPhone in 2011. Deep learning powers many components of Siri: trigger-word detection, large-vocabulary recognition, text-to-speech, machine translation, and natural language understanding. In this talk I will show a few examples of how deep learning is used in Siri.
Speaker Bio
Alex Acero is Senior Director of the Siri team, in charge of speech recognition, speech synthesis, and machine translation. Prior to joining Apple, he spent 20 years at Microsoft Research managing teams in speech, computer vision, NLP, machine translation, machine learning, and information retrieval. Dr. Acero is an IEEE Fellow and ISCA Fellow. He has served as President of the IEEE Signal Processing Society and is currently a member of the IEEE Board of Directors. He is a co-author of the textbook Spoken Language Processing. Dr. Acero has published over 250 technical papers and holds over 150 US patents. He received his Ph.D. from Carnegie Mellon in 1990.
Ruhi Sarikaya, Director, Amazon Alexa
Day 1, 10:00-11:30am: Natural Language Interaction Challenges for Intelligent Personal Assistants
There are three fundamental challenges in interacting with the applications and services running behind intelligent personal assistants (IPAs): 1) application/service discovery, 2) learning what these applications can do, and 3) limited information flow into the apps/services. These same challenges manifest themselves in all AI-enabled systems. For example, users do not know what skills/apps exist that can handle their requests, and they do not know how to interact with those skills/apps in a natural way. Additionally, these systems have limited ability for contextual conversational understanding. We discuss these issues and explain what is needed to truly understand the user's intent and serve the most relevant answer for the user's request.
Speaker Bio
Ruhi Sarikaya joined the Amazon Alexa team as Director of Applied Science in 2016. With a team he has largely built from the ground up, he has been building core capabilities around ranking, relevance, natural language understanding, dialog management, contextual understanding, and personalization. Prior to that, he was a principal science manager and the founder of the language understanding and dialog systems group at Microsoft in Redmond, WA, between 2011 and 2016. His group built the language understanding and dialog management capabilities of Cortana and Xbox One, along with the underlying platform supporting both first and third parties. Before Microsoft, he was a research staff member and team lead in the Human Language Technologies Group at the IBM T.J. Watson Research Center in Yorktown Heights, NY, for ten years. Before IBM, he worked as a researcher at the Center for Spoken Language Research (CSLR) at the University of Colorado at Boulder for two years. Ruhi received his PhD in Electrical and Computer Engineering from Duke University, his MS in Electrical and Computer Engineering from Clemson University, and his BS in Electrical Engineering from Bilkent University. He has published over 100 technical papers in refereed journals and conference proceedings, and is an inventor on over 70 issued or pending patents.
Rahul Sukthankar, Principal Scientist, Google
Day 1, 1:50-3:10pm: Large-Scale Video Understanding: YouTube and Beyond
This talk will present some recent advances in video understanding at Google. It will cover the technology behind progress in applications such as large-scale video annotation for YouTube, video summarization, and Motion Stills, as well as our research in weakly supervised learning, domain adaptation from YouTube to Google Photos, and action recognition. I will also give my perspective on promising directions for future research in video.
Speaker Bio
Rahul Sukthankar leads exploratory research efforts in computer vision, machine learning, and robotics at Google. He is also an adjunct research professor at the Robotics Institute at Carnegie Mellon and courtesy faculty at the University of Central Florida. Dr. Sukthankar was previously a senior principal researcher at Intel Labs, a senior researcher at HP/Compaq Labs, and a research scientist at Just Research. He received his Ph.D. in Robotics from Carnegie Mellon in 1997 and his B.S.E. in Computer Science from Princeton in 1991. He has organized several academic conferences and serves as Editor-in-Chief of the journal Machine Vision and Applications.
Xiaofeng Ren, Chief Scientist, Alibaba iDST
Day 1, 1:50-3:10pm: The Quest for Video Understanding
In this talk I will briefly discuss the ubiquitous need for video and video understanding across Alibaba, and the challenges being addressed and solved at iDST, Alibaba's AI R&D division. Examples include mobile shopping on Taobao, video search and recommendation on Youku and Tudou, and real-time systems for Cainiao Logistics and City Brain.
Speaker Bio
Xiaofeng Ren is currently chief scientist and deputy dean of the Institute of Data Science and Technologies (iDST) at Alibaba Group. He leads computer vision teams at iDST that work across Alibaba Group's diverse e-commerce businesses, from Taobao and TianMao to Youku, Tudou, Alibaba Cloud, and beyond. From 2013 to 2017, he was a senior principal scientist at Amazon, where he led algorithm development for Amazon Go. From 2008 to 2013, he was at Intel Labs Seattle, leading computer vision research on RGB-D perception (consumer depth cameras) and wearable cameras. From 2006 to 2008 he was on the research faculty of the Toyota Technological Institute at Chicago. He received his Ph.D. from UC Berkeley in 2006. Xiaofeng's research interests span a range of computer vision topics, including local features, boundary detection, optical flow, tracking, object detection and recognition, scene understanding, and activity recognition. He holds an affiliate faculty position at the University of Washington.
Danny Shapiro, Sr. Director of Automotive, Nvidia
Day 1, 3:50-5:10pm: Accelerating the Race to AI Self-Driving Cars
AI is transforming industries from consumer services to robotics. The transportation industry is next. As the industry moves from ADAS to the next generation of self-driving technology, breakthroughs in computing are changing how we interact with vehicles and enabling them to drive us. Deep learning is the game-changing technology behind all autonomous vehicle development. This session will showcase some of the latest deep learning systems, from the data center to the vehicle, being developed to create safe and secure self-driving vehicles.
Speaker Bio
Danny Shapiro is senior director of Automotive at NVIDIA, focusing on artificial intelligence (AI) solutions for self-driving cars, trucks, and shuttles. The NVIDIA automotive team is engaged with over 225 car and truck makers, tier-1 suppliers, HD mapping companies, sensor companies, and startups, all using the company's DRIVE PX hardware and software platform for autonomous vehicle development and deployment. Danny serves on the advisory boards of the Los Angeles Auto Show, the Connected Car Council, and the NVIDIA Foundation, which focuses on computational solutions for cancer research. He holds a Bachelor of Science in electrical engineering and computer science from Princeton University and an MBA from the Haas School of Business at UC Berkeley.
Magnus Nordin, Technical Director, Electronic Arts
Day 1, 5:10-6:10pm: Deep Learning for Game Development
The number of applications of deep neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, voice recognition, and translation to self-driving cars. Neural nets will also be a powerful enabler for future game development. This presentation will give an overview of the potential of neural nets in game development, and provide an in-depth look at how we can combine neural nets with reinforcement learning for new types of game AI.
Speaker Bio
Magnus published his first game in 1983. Since then he has spent 25 years doing computer science and software engineering across a large number of projects and companies. He is currently working on deep learning and AI research in the gaming industry as Technical Director of SEED, an EA R&D division.
Jeff Schneider, Sr. Engineering Manager, Uber ATG
Day 1, 3:50-5:10pm: Self-Driving Cars and AI
Self-driving cars have become one of the hottest areas in tech development today and are poised to transform our transportation systems. They are also one of the most complex technology developments ever undertaken, and are simply not possible without extensive machine learning. In this talk, I will give a brief history of autonomous vehicle progress and describe how machine learning and artificial intelligence solve various parts of the problem. I will finish with observations on how self-driving cars will disrupt and transform the transportation industry, our cities, and our lives.
Speaker Bio
Dr. Jeff Schneider is the engineering lead for machine learning at Uber's Advanced Technologies Center. He is currently on leave from Carnegie Mellon University, where he is a research professor in the School of Computer Science. He has 20 years of experience developing, publishing, and applying machine learning algorithms in government, science, and industry. He has over 100 publications and regularly gives talks and tutorials on the subject.
Tony Han, Co-Founder and CTO, JingChi.ai
Day 1, 3:50-5:10pm: Autonomous Driving in the AI Era: A Multi-Sensor Fusion Based Approach
In the era of AI, many deep learning based algorithms have been applied to autonomous driving. These algorithms help self-driving cars run safer and smarter. Three trends are also currently underway in the car industry. Rapid technological progress, together with these trends, makes us believe that autonomous vehicles will reach the market earlier than we previously thought. I will also introduce recent exciting progress made at JingChi, an autonomous driving startup founded in April 2017. More specifically, I will discuss our core technologies, including perception, HD maps, prediction, planning, and simulation. We will start trial operation in 2018 in Anqing, a Chinese city of 3 million people.
Speaker Bio
Dr. Tony Han is the cofounder and CTO of JingChi.ai, the leading autonomous driving technology company in China. Previously, he was Chief Scientist of the Autonomous Driving Unit at Baidu, leading the perception, simulation, hardware, and sensing teams. His research spans AI, specifically computer vision, machine learning, and speech recognition. He led the Mandarin speech recognition team at the Baidu AI Lab in Silicon Valley and was an original contributor to the deep learning based Mandarin speech recognition engine DeepSpeech2, which was selected as one of MIT Technology Review's ten breakthrough technologies of 2016 and has been deployed in Baidu Map. Thanks in part to the innovations in DeepSpeech2 and autonomous driving, both research efforts led by Dr. Han, Baidu took second place in MIT Technology Review's 2016 ranking of the 50 smartest companies. Dr. Han was a tenured professor in the EECS department at the University of Missouri (MU). His research team was a joint winner of the action recognition task in the worldwide grand challenge PASCAL 2010. Together with NEC Research, his team won second place in object detection in the Large Scale Visual Recognition Challenge 2013 (ImageNet Challenge 2013). The human detector developed in his group ranked number two in the worldwide grand challenges PASCAL 2009 and PASCAL 2012, and a joint team with UIUC won first place in the 2011 Facial Expression Recognition and Analysis Challenge (FERA). He serves frequently as a panelist for the National Science Foundation and as an associate editor of the Journal of Multimedia. He is the recipient of a CSE fellowship.
Manohar Paluri, Manager of Computer Vision, Facebook
Day 1, 1:50-3:10pm: Understanding Video
Video is becoming ubiquitous on the web, and amazing things are happening all the way from capture and creation to consumption. For those of us working in AI, this poses a new challenge and a great opportunity. If we can make machines understand video the way humans do, we can unlock a long list of applications. But to get there we need to solve many challenging problems, some of which the academic community is already tackling: datasets, action recognition, multi-modal understanding, temporal aggregation, modeling appearance and motion, compression, and so on. I would like to discuss these directions and also talk about longer-term directions such as self-serve content understanding, large label embeddings, and video summarization.
Speaker Bio
Manohar Paluri is currently a Research Lead at Facebook AI Research and manages the Computer Vision team in the Applied Machine Learning organization. He is passionate about computer vision and the longer-term goal of building systems that can perceive the way humans do. Throughout his career he has spent considerable time on computer vision problems in industry and academia, working at renowned places such as Google Research, IBM Watson Research Labs, and Stanford Research Institute before helping co-found Facebook AI Research, directed by Dr. Yann LeCun. Mr. Paluri spent his formative years at IIIT Hyderabad, where he finished his undergraduate studies with Honors in Computer Vision, and joined Georgia Tech to pursue his Ph.D. For over a decade he has worked on various problems in computer vision and perception more generally, with contributions published at CVPR, NIPS, ICCV, ECCV, ICLR, KDD, IROS, ACCV, and elsewhere. He is passionate about building real-world systems used by billions of people; some of these systems run at Facebook and already have a tremendous impact on how people communicate.
James Manyika, Chairman and Director, McKinsey Global Institute
Day 1, 3:10-3:30pm: Sizing up the promise of AI
This presentation will draw on new findings from the McKinsey Global Institute's ongoing research on the economic and business impact of AI. It will explore four key questions for AI today: who is investing and where, who is adopting AI and how, where AI can improve corporate performance, and what business leaders need to know tomorrow morning.
Speaker Bio
James Manyika is a senior partner at McKinsey & Company and chair and director of the McKinsey Global Institute (MGI), the firm's business and economics research arm.
Lukasz Kaiser, Sr. Research Scientist, Google Brain
Day 1, 1:30-1:50pm: One Model to Learn It All
Deep learning yields great results across many fields, from speech recognition and image classification to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results across multiple domains. This single model is trained concurrently on ImageNet, multiple translation tasks, image captioning, a speech recognition corpus, and an English parsing task. We achieved state-of-the-art performance while training much faster and generating long, coherent pieces, even at the scale of full Wikipedia articles. Our new architectures improve the ability to generate both text and images.
Speaker Bio
Lukasz joined Google in 2013 and is currently a Senior Research Scientist on the Google Brain team in Mountain View, where he works on fundamental aspects of deep learning and natural language processing. He has co-designed state-of-the-art neural models for machine translation, parsing, and other algorithmic and generative tasks, and co-authored the TensorFlow system and the Tensor2Tensor library. Before joining Google, Lukasz was a tenured researcher at University Paris Diderot, working on logic and automata theory. He received his PhD from RWTH Aachen University in 2008 and his MSc from the University of Wroclaw, Poland.
Dilek Hakkani-Tür, Research Scientist, Google
Day 1, 10:00-11:30am: Conversational Machines: Deep Learning for Goal-Oriented Dialogue Systems
In this talk, I will present recent developments at Google Research on end-to-end goal-oriented dialogue systems, with components for language understanding, dialogue state tracking, policy, and language generation. The talk will summarize novel aspects of each component and highlight approaches where dialogue is viewed as a collaborative game between a user and an agent: the user has a goal in mind, and the agent has access to the data the user is interested in and can perform actions to realize the user's goal. The two engage in a conversation so that the agent can help the user complete the task.
Speaker Bio
Dilek Hakkani-Tür is a research scientist at Google. Prior to joining Google, she was a researcher at Microsoft Research (2010-2016), the International Computer Science Institute (ICSI, 2006-2010), and AT&T Labs-Research (2001-2005). She received her BSc from Middle East Technical University in 1994, and her MSc and PhD from the Department of Computer Engineering at Bilkent University in 1996 and 2000, respectively. Her research interests include natural language and speech processing, spoken dialogue systems, and machine learning for language processing. She has over 50 granted patents and has co-authored more than 200 papers in natural language and speech processing. She is the recipient of three best paper awards for her work on active learning for dialogue systems, from the IEEE Signal Processing Society, ISCA, and EURASIP. She was an associate editor of IEEE Transactions on Audio, Speech and Language Processing (2005-2008), a member of the IEEE Speech and Language Technical Committee (2009-2014), and area editor for speech and language processing for Elsevier's Digital Signal Processing Journal and IEEE Signal Processing Letters (2011-2013). She is a Fellow of IEEE and ISCA.
Yuandong Tian, Research Scientist, Facebook
Day 1, 5:10-6:10pm: AI in Games: Achievements and Challenges
Recently, AI has made substantial progress in applications that require advanced pattern recognition, including computer vision, speech recognition, and natural language processing. However, it remains an open problem whether AI will make the same level of progress on tasks that require sophisticated reasoning, planning, and decision making in complicated game environments similar to the real world. In this talk, I present state-of-the-art approaches to building such an AI, our recent contributions in designing more effective algorithms and building extensive, fast, general environments and platforms, as well as open issues and challenges.
Speaker Bio
Yuandong Tian is a Research Scientist at Facebook AI Research, working on deep reinforcement learning in games and theoretical analysis of deep non-convex models. He is the lead researcher and engineer for DarkForest, Facebook's computer Go project. Prior to that, he was a software engineer and researcher on the Google self-driving car team from 2013 to 2014. He received his Ph.D. from the Robotics Institute at Carnegie Mellon University in 2013, and his bachelor's and master's degrees in computer science from Shanghai Jiao Tong University. He received an Honorable Mention for the 2013 ICCV Marr Prize for his work on globally optimal solutions to non-convex optimization in image alignment.
Kaijen Hsiao, CTO, Mayfield Robotics
Day 1, 11:30am-12:30pm: Adorable Intelligence
Join us to hear how we created Kuri, the world's most adorable home robot, and how cuteness and machine learning algorithms come together to create a robot that people are excited to bring into their homes. Kuri uses embedded, deep-learning-based algorithms for face, pet, and person detection, as well as for place recognition (for mapping and localization). Such algorithms are crucial for enabling her endearing interactions with people, as well as her ability to be the family videographer and entertainer. Cute behaviors also enable Kuri to subtly and smoothly gather the data needed for training and inference, providing a significant benefit in improving core functionality for adorable home robots.
Speaker Bio
As the CTO of a game-changing consumer robotics company, I strive to open up new capabilities of robotics technology, with particular attention to shared autonomous teleoperation, imitation learning, tactile grasp adjustment, human-aware navigation, and robot mapping and localization. Across my career at MIT, Bosch, and Willow Garage, I have assembled, accelerated, and led robotics teams producing marketable breakthroughs that have sparked startup businesses in the US and Europe. A few more accolades: Robohub honored me as one of the "25 Women in Robotics You Need to Know About," and I've also been honored as one of Silicon Valley Business Journal's "Women of Influence."
Dileep George, Co-Founder, Vicarious
Day 1, 11:30am-12:30pm: Opportunities and Challenges for Robot Manipulation
Progress in artificial intelligence is starting to give robots the ability to perceive their environment. While deep learning has created classification systems that exceed human abilities in certain domains, manipulation tasks require a richer understanding of the world than classification or detection. To this end, Vicarious focuses on building systems with greater data efficiency, flexibility of reasoning, and transfer of knowledge between tasks. Combining task-relevant features from deep learning with active perception, handling of uncertainty, and closed-loop planning can mitigate many of the commonly encountered errors in robot manipulation. I'll describe our progress, the opportunities we are exploring, and the problems that remain to be solved.
Speaker Bio
Before cofounding Vicarious, Dileep was CTO of Numenta, an AI company he cofounded with Jeff Hawkins and Donna Dubinsky. Before Numenta, Dileep was a Research Fellow at the Redwood Neuroscience Institute. Dileep has authored 22 patents and several influential papers on the mathematics of brain circuits. His research on hierarchical models of the brain earned him a PhD in Electrical Engineering from Stanford University. He also earned his MS in EE from Stanford and his BS from IIT Bombay.
Zico Kolter, Chief Data Scientist, C3 IoT
Day 2, 12:00-12:30pm: AI Startups in IoT
Speaker Bio
Zico Kolter joined C3 IoT as Chief Data Scientist in 2014. He is also a faculty member in the Computer Science Department at Carnegie Mellon University, where his research focuses on integrating advanced modeling techniques and optimization into deep learning architectures. At C3 IoT, he has worked on numerous problems and applications of deep learning, including fraud detection from smart meters, failure prediction in distribution networks, inventory and supply chain optimization, and many others. He received his Ph.D. in Computer Science from Stanford University in 2010 and was a postdoctoral researcher at MIT from 2010 to 2012.
Frank Chen, Partner, Andreessen Horowitz
Day 2, 9:00-9:30am: Startups and AI
Isn't AI going to be dominated by the big companies like Google and Amazon and Microsoft and Baidu? What can startups do to thrive in this ecosystem? What are investors looking for when they meet AI-powered startups? Should startups with AI inside think about their go-to-market process any differently from other startups? Frank Chen from Andreessen Horowitz will tackle these and other AI startup questions in this session.
Speaker Bio
Frank Chen is a partner at the venture capital firm Andreessen Horowitz and head of its deal and research team. Most recently, Frank was VP of Strategy for HP Software, where he helped navigate rapid enterprise adoption of virtualization technologies across servers, network and storage. Frank joined HP Software through its acquisition of Opsware, where he was VP of Products and User Experience for multiple data center automation products. Frank joined Opsware when it was Loudcloud, a managed services provider, and served as director of Product & Services. Before Loudcloud, Frank was VP of Product Development at Respond.com and director of Product Management at Netscape Communications. Earlier in his career, Frank held various engineering, product management and developer relations roles at GO Corporation, Oracle, Apple and IBM. Frank holds a B.S. in Symbolic Systems from Stanford University, where he graduated with distinction and was elected to Phi Beta Kappa.
Dekang Lin, Co-Founder and CTO, Naturali
Day 2, 9:30-10:30am: Adding Conversation to GUIs
Most AI assistants on mobile phones use a conversational user interface (CUI) that mimics a chat app and translates user requests into API calls to backend services. I will present the Conversational GUI (CGUI), which provides a thin layer of conversational interaction on top of the existing GUIs of mobile apps by translating user requests into sequences of GUI actions, such as clicks and swipes, that the user would otherwise have to perform themselves. CGUI avoids rebuilding existing user experiences in a chat window. More importantly, it makes it possible for end users, instead of software engineers, to create new skills by providing pairs of natural language expressions and demonstrations of the corresponding GUI actions.
Speaker Bio
Dekang Lin is co-founder and CTO at Naturali, which builds an app assistant that aims to let every app interact with users in their own language. Prior to co-founding Naturali, Dekang was a Senior Staff Research Scientist at Google, where he led a team of researchers and engineers building a web-based question answerer for Google Search. Before joining Google, Dekang Lin was a full professor of Computer Science at the University of Alberta. He has authored 90 papers with over 12,000 citations. He was elected a Fellow of the Association for Computational Linguistics (ACL) in 2012 and served as program co-chair and general chair of ACL 2002 and ACL 2011, respectively. Dekang Lin holds a BSc from Tsinghua University and a PhD from the University of Alberta, both in Computer Science.
Omar Tawakol, Founder and CEO, Voicera
Day 2, 9:30-10:30am: The Rise of Voice-Activated Assistants in the Workplace
The market is already demonstrating the strong value of voice-activated AI in the home, but the work environment has yet to catch up. Omar will explain why voice-activated AI is the most important development to come to the workplace. He will draw on his experience creating Eva, the first enterprise voice assistant focused on making meetings more actionable, and dive specifically into the challenges of ASR (automatic speech recognition), NLP, and neural networks in creating these kinds of voice-activated assistants. He will share how his team has overcome these challenges.
Speaker Bio
Omar Tawakol is the CEO of Workfit. Prior to Workfit, Omar was the founder and CEO of BlueKai, which built the world's largest consumer data marketplace and DMP. After Oracle acquired BlueKai in 2014, Omar served as SVP & GM of the Oracle Data Cloud (ODC), which powered 97 of the top 100 marketers as well as an ecosystem of AI applications. Based on this experience, Omar decided to launch his next venture, focused on AI that leverages data to help people become more productive. Omar earned an MS in CS from Stanford (BS, MIT), where he researched and published work on AI agents.
Amit Garg, Principal, Samsung NEXT Ventures
Speaker Bio
I am currently a Principal at Samsung NEXT Ventures, which houses our $150M early-stage investment arm, headquartered in the heart of Silicon Valley. I focus on software, especially Internet/consumer technologies, investing $250K to $3M in seed, Series A, or Series B rounds, and helping portfolio companies partner with Samsung to grow and make a meaningful difference in people's lives.

My primary focus is Digital Health and secondary focus is Smart Cities. Some of my investments include UniKey (smart locks), nuTonomy (self-driving cars, sold for $450M), BioBeats (machine learning for human well-being), Glooko (diabetes management), Cohero Health (respiratory management), Terapede (low-dosage X-ray detection) and Figure1 (medical communication).

I have previously spent 10 years in Silicon Valley running my own startup, working as a VC at Norwest Ventures, and doing product and analytics at Google. My academic training includes a BS in computer science and an MS in biomedical informatics, both from Stanford, and an MBA from Harvard. I speak three languages natively, live carbon-neutral, am a 70.3 Ironman finisher, and have built a hospital in rural India serving 100,000 people.
Ambarish Kenghe, Chief Product Officer, Myntra
Day 2, 11:00-12:00pm
AI Startups in E-Commerce
Speaker Bio
Ambarish Kenghe (AK) is the Chief Product Officer at Myntra, India's largest online fashion platform. Prior to Myntra, AK spent 5 years at Google, where he co-founded and launched Chromecast. He has also held leadership roles at Cisco and Bain & Company. AK holds an M.Tech from IIT Kanpur, an MS from Purdue University, and an MBA from the University of California at Berkeley.
Andy Pandharikar, CEO and Co-founder, Commerce.AI
Speaker Bio
Andy Pandharikar is the Co-founder/CEO of Commerce.AI. Before that, Andy founded San Francisco-based Fitiquette, which was acquired by Myntra (itself later acquired by Flipkart). Prior to that, Andy held various product and engineering positions at Cisco. He is also a member of SF Angels Group and has invested in a number of startups as an angel investor. He earned an M.S. in Management Science and Engineering at Stanford University and an Executive Degree at Harvard Business School. Andy is an outdoor enthusiast, an ultramarathoner, and a certified lead rock climber.
Roland Memisevic, Chief Scientist and Co-Founder, TwentyBN
Day 2, 1:30-2:00pm
Common sense video understanding at TwentyBN
Deep learning has evolved not linearly but through a series of step-functions: sudden, unexpected outbreaks of capability that fundamentally changed the envelope of what computers are able to do. At TwentyBN, we have created spatio-temporal video models and data infrastructure that have allowed us to grow a dataset of approximately one million labeled videos showing everyday common-sense scenes and situations - many of them extremely subtle. This allowed us to successfully train neural networks end-to-end on a wide range of action understanding tasks that neither hand-engineering nor neural networks appeared anywhere near solving just a few months ago. I will show how these recognition tasks now drive commercial value at TwentyBN, and how they drive our long-term AI agenda for learning common sense world knowledge through video.
Speaker Bio
Roland Memisevic received his PhD in Computer Science from the University of Toronto in 2008 for research on neural networks. He subsequently held positions as a research scientist at PNYLab, Princeton, as a post-doc at the University of Toronto and at ETH Zurich, and as a junior professor at the University of Frankfurt. In 2012 he joined the MILA deep learning group at the University of Montreal as an assistant professor. Since 2016 he has been Chief Scientist at the Canadian-German AI startup Twenty Billion Neurons, which he co-founded in 2015. Roland was named a Fellow of the Canadian Institute for Advanced Research (CIFAR) in 2015. His research interests are in deep and recurrent neural networks, in particular as applied to video understanding.
Ashutosh Garg, CEO and Founder, Volkscience
Day 2, 2:00-2:30pm
AI will take jobs away or will bring the right job to people
50% of unemployment in the US is because qualified people are not getting matched to the right jobs. Many diverse candidates get filtered out because of inherent bias in our interview processes. While most people are worried about AI taking over their jobs, AI can help people get the jobs they deserve. People's career trajectories form fascinating time series data. I will show how one can use AI on top of this data to predict what people will do next in their careers. Enterprises can then use these predictions to optimize their hiring processes.
Speaker Bio
Ashu is a machine learning expert with years of information-retrieval, machine-learning, and search experience. Currently he is CEO of VolkScience, which is transforming workforces by applying deep learning to recruiting. Previously he was CTO of BloomReach, a machine learning company focused on e-commerce. In the past, he worked as a research staff scientist at Google, where he led personalization and search efforts. He is a prolific publisher and inventor, with 30+ papers and 40+ patents. Ashu earned a bachelor's in technology from the Indian Institute of Technology, Delhi, and a doctorate in electrical and computer engineering from the University of Illinois, Urbana-Champaign. Ashu has won numerous awards, including best thesis at IIT, an IBM Fellowship, and an outstanding researcher award at the University of Illinois.
Ilya Gelfenbeyn, Product Lead, Google Dialogflow
Day 2, 2:30-3:00pm
Successful Exits - Lessons from API.AI
Ilya, former Co-Founder and CEO of API.AI, will speak about his experience starting API.AI and its exit through acquisition by Google.
Speaker Bio
Ilya leads Dialogflow (formerly known as API.AI) product development and strategy as its product manager. Ilya co-founded API.AI and served as its CEO before Google acquired the company in September 2016. Prior to API.AI, Ilya co-founded and led several startups and also worked on research projects in natural language understanding and conversational UX, areas in which he holds several patents. Ilya earned a bachelor's degree in mathematics from Novosibirsk State University in Russia and an MBA from the University of Brighton in the UK.
Moderators
T.M. Ravi, Managing Director & Founder, The Hive
Speaker Bio
T. M. Ravi is Managing Director and Co-founder of The Hive (www.hivedata.com). The Hive, based in Palo Alto, CA, is a venture fund and co-creation studio for Artificial Intelligence (AI) powered startups. The Hive engages with entrepreneurs and corporations to create companies focused on data- and AI-driven applications in the enterprise and different industry segments. The Hive also has a presence in India and Brazil. Ravi is a frequent speaker at conferences on the topics of AI, enterprise transformation, and innovation. Ravi has a successful track record as a serial entrepreneur and operating executive. He has helped start over 25 startups, including three where he was founder & CEO: Mimosa Systems (acquired by Iron Mountain), Peakstone Corporation, and Media Blitz (acquired by Cheyenne Software). Ravi was also CMO for Iron Mountain, VP of Marketing at Computer Associates (CA), and VP at Cheyenne Software. Ravi earned an MS and PhD from UCLA and a Bachelor of Technology from IIT, Kanpur, India. He is on the board of the Montalvo Arts Center, based in Saratoga, CA.
Xuezhao Lan, Founding Partner, Basis Set Ventures
Speaker Bio
Dr. Lan Xuezhao is the founding partner of Basis Set Ventures, a $140M early-stage venture fund focused on enterprise artificial intelligence. Her investments include Clara Labs (a pioneer in human-in-the-loop machine learning, where BSV led the Series A, joined by Sequoia, First Round, and the Slack Fund) and Falkonry (a leader in factory automation). Lan built the Corporate Development Strategy team at Dropbox, one of the most active private tech companies, with more than 30 acquisitions. Prior to Dropbox, she worked with McKinsey in New York and Shanghai, advising Fortune 500 companies on issues such as big data/AI strategy, growth strategy, market entry, pricing, and customer lifecycle management. Earlier in her career, Lan was an entrepreneur who built children's brain-training games and an analyst with the United Nations Peacekeeping Operations. She studied human brain functions for her Ph.D. and received her MA in Statistics, both from the University of Michigan. She also did her post-doc at Harvard and has produced several peer-reviewed articles in top publications, including Science.
Vijay Reddy, Investment Manager, Intel Capital
Speaker Bio
Vijay Reddy leads investments in Artificial Intelligence platforms, algorithms, and infrastructure, as well as applications of AI in computer vision, robotics, autonomous systems, and industrial IoT. Vijay is a board observer and/or has responsibility for several portfolio companies, including Matroid, MightyAI, AEYE, Inrix, Zumigo, AlienVault, and Cognitive Scale. Previously, Vijay sourced and led the equity investment in Nervana, which was ultimately acquired by Intel to form the AI Products Group. Prior to joining Intel Capital, Vijay held leadership business development and product management positions in the communications, software, and semiconductor domains. He began his career as a researcher and an entrepreneur in wireless and software engineering. Vijay received his MBA from Chicago Booth and holds a BS & MS in Electrical and Computer Engineering with top honors.
Kartik Hosanagar, Professor, The Wharton School
Speaker Bio
Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a Professor of Marketing at The Wharton School of the University of Pennsylvania. Kartik's research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing, and e-commerce. Kartik has been recognized as one of the world's top 40 business professors under 40. He received his PhD from Carnegie Mellon University and his Bachelor's from BITS.
Joanne Chen, Partner, Foundation Capital
Speaker Bio
Joanne is a partner at Foundation Capital, a 22-year-old early-stage venture capital firm. Foundation Capital has invested in approximately 200 companies and taken 27 of them public, including Netflix, MobileIron, and Chegg. Joanne focuses on investing in data-driven companies that leverage AI with the goal of creating "self-driving software." She is a board member or observer at 5 B2B SaaS companies selling into marketing, sales, and HR, and has invested in Zengaming in the esports space. Joanne got her BS in Electrical Engineering & CS from Berkeley and her MBA from Chicago Booth. She is also a closet gamer and has clocked thousands of hours playing Soul Calibur on the Xbox.
Schedule

In this three-day conference, we bring together leading scientists and practitioners who have deployed large-scale AI products. You will gain a front-row seat at the frontiers of AI and have the opportunity to network with others who are enthusiastic about AI technology and products.

Day 1: November 3, 2017
9:00 - 9:10am
Opening Remarks
9:10 - 10:00am
Keynote Speech
Andrew Ng : AI is the New Electricity [Video]
Similar to the rise of Electricity starting about 100 years ago, AI today is beginning to transform every major industry. This presentation will discuss how AI can transform your business, share major technology trends and thoughts on where your biggest future opportunities may lie, and describe best practices on incorporating AI, machine learning, and deep learning into your organization. These ideas will also be illustrated with some examples that haven't been presented elsewhere before.
Personal Assistants
A panel of industry experts and leading scientists on chatbot and natural language processing come together to discuss their work. What's the future of personal assistants?
Alex Acero : Deep Learning in Siri
Siri brought personal assistants to the mainstream after its introduction on the iPhone in 2011. Deep learning is powering many components in Siri: trigger word detection, large-vocabulary recognition, text-to-speech, machine translation, and natural language understanding. In this talk I will show a few examples of how deep learning is used in Siri.
Ruhi Sarikaya : Natural Language Interaction Challenges for Intelligent Personal Assistants
There are three fundamental challenges in interacting with the applications and services running behind intelligent personal assistants (IPAs): 1) application/service discovery, 2) learning what these applications can do, and 3) limited information flow into the apps/services. These same challenges manifest themselves in all AI-enabled systems. For example, users do not know what skills/apps exist that can handle their requests, and they also do not know how to interact with those skills/apps in a natural way. Additionally, these systems have limited ability for contextual conversational understanding. We discuss these issues and explain what is needed to truly understand the user’s intent and serve the most relevant answer for the user’s request.
Dilek Hakkani-Tur : Conversational machines: Deep Learning for Goal-Oriented Dialogue Systems [Slides]
In this talk, I will present recent developments in Google Research for end-to-end goal-oriented dialogue systems, with components for language understanding, dialogue state tracking, policy, and language generation. The talk will summarize novel aspects of each component, and highlight novel approaches where dialogue is viewed as a collaborative game between a user and an agent: The user has a goal in mind and the agent has access to the data that user is interested in, and can perform actions in order to realize the user’s goal. The two engage in a conversation so that the agent can help the user find a way for task completion.
Robots
A panel of industry experts from companies that are actively developing robots come together to discuss their work. What is the status of robots? How do we build smart home robots?
Dileep George : Opportunities and challenges for robot manipulation
Progress in artificial intelligence is starting to give robots the ability to perceive their environment. While deep learning has created classification systems that exceed human abilities in certain domains, manipulation tasks require a richer understanding of the world than classification or detection. To this end, Vicarious focuses on building systems with greater data efficiency, flexibility of reasoning, and transfer of knowledge between tasks. Combining task relevant features from deep learning with active perception, handling of uncertainty, and closed loop planning can mitigate many of the commonly encountered errors in robot manipulation. I'll describe our progress, the opportunities we are exploring, and the problems that remain to be solved.
Kaijen Hsiao : Adorable Intelligence [Slides]
Join us to hear how we created Kuri, the world’s most adorable home robot, and how cuteness and machine learning algorithms come together in creating a robot that people are excited to bring into their homes. Kuri uses embedded, deep-learning-based algorithms for face, pet, and person detection, as well as for place recognition (for mapping and localization). Such algorithms are crucial for enabling her endearing interactions with people, as well as her ability to be the family videographer and entertainer. Cute behaviors also enable Kuri to subtly and smoothly gather necessary data for training and inference, providing a significant benefit in improving core functionality for adorable home robots.
12:30 - 1:30pm
Lunch
1:30-1:50pm
Deep Learning Breakthrough
Leading scientists present current breakthroughs in deep learning, ranging from multimodal learning to continuous prediction (generative models).
Lukasz Kaiser : One Model to Learn It All [Slides]
Deep learning yields great results across many fields, from speech recognition and image classification to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results spanning multiple domains. This single model is trained concurrently on ImageNet, multiple translation tasks, image captioning, a speech recognition corpus, and an English parsing task. We achieved state-of-the-art performance while training much more quickly and generating long, coherent pieces, even on the scale of full Wikipedia articles. Our new architectures improve the ability to generate both text and images.
Video Understanding
A panel of industry experts and leading scientists discuss the newest development in computer vision, particularly video analysis.
Rahul Sukthankar : Large-Scale Video Understanding: YouTube and Beyond [Slides]
This talk will present some recent advances in video understanding at Google. It will cover the technology behind progress in applications such as large-scale video annotation for YouTube, video summarization and Motion Stills, as well as our research in weakly-supervised learning, domain adaptation from YouTube to Google Photos and action recognition. I will also give my perspective on promising directions for future research in video.
Manohar Paluri : Understanding Video
Video is becoming ubiquitous on the web. From capture and creation to consumption, a lot of amazing things are happening. For those of us working in AI, this poses a new challenge and a great opportunity. If we can make machines understand video the way humans do, we can unlock a vast set of applications. But to get there we need to solve many challenging problems, some of which the academic community is already tackling: datasets, action recognition, multi-modal understanding, temporal aggregation, modeling appearance and motion, compression, etc. I would like to discuss these directions and also talk about longer-term directions such as self-serve content understanding, large label embedding, video summarization, and so on.
Xiaofeng Ren : The Quest for Video Understanding [Slides]
In this talk I will briefly discuss the ubiquitous needs of video and video understanding across Alibaba and the challenges that are being addressed and solved at iDST, Alibaba's AI R&D division. Examples include mobile shopping on Taobao, video search and recommendation on Youku and Tudou, and real-time systems for Cainiao Logistics and City Brain.
Industry talk
McKinsey report on the impact of AI.
James Manyika : Sizing up the promise of AI [Slides] [Video]
This presentation will draw on new findings from the McKinsey Global Institute's ongoing research on the economic and business impact of AI. It will explore four key questions for AI today: who is investing and where, who is adopting AI and how, where can AI improve corporate performance, and what do business leaders need to know tomorrow morning.
3:30-3:50pm
Coffee Break
Autonomous Driving
A panel of industry experts from companies that are actively developing autonomous cars come together to discuss their work.
Danny Shapiro : Accelerating the Race to AI Self-Driving Cars [Slides] [Video]
AI is transforming industries from consumer services to robotics. The transportation industry is next. As the industry moves from ADAS to the next generation of self-driving technology, breakthroughs in computing are changing how we interact with vehicles and enabling them to drive us. Deep learning is the game-changing technology behind all autonomous vehicle development. This session will showcase some of the latest deep learning systems, from the data center to the vehicle, being developed to create safe and secure self-driving vehicles.
Jeff Schneider : Self Driving Cars and AI
Self driving cars have become one of the hottest areas in tech development today and are poised to transform our transportation systems. They are also one of the most complex technology developments ever undertaken and are simply not possible without extensive machine learning. In this talk, I will give a brief history of autonomous vehicle progress, and describe how machine learning and artificial intelligence solve various parts of the problem. I will finish with observations on how self driving cars will disrupt and transform the transportation industry, our cities, and our lives.
Tony Han : Autonomous Driving in the AI Era - A Multi-sensor Fusion based Approach [Video]
In the era of AI, many deep learning based algorithms have been applied in autonomous driving. These algorithms help self-driving cars run more safely and intelligently. There are also three trends currently happening in the car industry. The rapid technological progress, together with these three trends, makes us believe that autonomous vehicles will reach the market earlier than we previously thought. I will also introduce recent exciting progress made at JingChi, an autonomous driving startup founded in April 2017. More specifically, I will discuss our core technologies, including perception, HD maps, prediction, planning, and simulation. We will start our trial operation in Anqing, a small Chinese city with a population of 3 million, in 2018.
Games
A panel of industry experts from companies that are actively developing game simulator and AI players come together to discuss their work.
Yuandong Tian : AI in Games: Achievements and Challenges [Slides] [Video]
Recently, substantial progress in AI has been made in applications that require advanced pattern recognition, including computer vision, speech recognition, and natural language processing. However, it remains an open problem whether AI will make the same level of progress in tasks that require sophisticated reasoning, planning, and decision making in complicated game environments similar to the real world. In this talk, I present state-of-the-art approaches to building such an AI, our recent contributions in terms of designing more effective algorithms and building extensive and fast general environments and platforms, as well as open issues and challenges.
Magnus Nordin : Deep Learning for Game Development [Slides] [Video]
The number of applications of deep neural networks has multiplied in the last couple of years. Neural nets have enabled significant breakthroughs in everything from computer vision, voice generation, and voice recognition to translation and self-driving cars. Neural nets will also be a powerful enabler for future game development. This presentation will give an overview of the potential of neural nets in game development, as well as provide an in-depth look at how we can use neural nets combined with reinforcement learning for new types of game AI.
Dinner Banquet
Keynote speech
Xuedong Huang : Lessons Learned in Advancing Conversational Assistants
Speech and language technologies benefited tremendously from the latest progress in machine learning and knowledge engineering. I will share major lessons learned in advancing modern conversational assistants from Cortana to Customer Support Services.

Banquet and Networking
The evening consists of sponsor pitches and a presentation from the keynote speaker. The banquet is a full-course sit-down dinner with wine, and a chance to interact with the speakers on a one-to-one basis. You will have the opportunity to meet and mingle with other guests, build connections, raise your company's profile, and form potential business and research partnerships.
Day 2: November 4, 2017
9:00-9:30am
Morning Keynote
Frank Chen : Startups and AI [Slides]
Isn't AI going to be dominated by the big companies like Google and Amazon and Microsoft and Baidu? What can startups do to thrive in this ecosystem? What are investors looking for when they meet AI-powered startups? Should startups with AI inside think about their go-to-market process any differently from other startups? Frank Chen from Andreessen Horowitz will tackle these and other AI startup questions in this session.
AI Startups in User Interface
Dekang Lin : Adding Conversation to GUIs [Slides]
Most AI assistants on mobile phones use a conversational user interface (CUI) that mimics a chat app and translates user requests into API calls to backend services. I will present the Conversational GUI (CGUI), which provides a thin layer of conversational interaction on top of the existing GUI of mobile apps by translating user requests into sequences of GUI actions, such as clicks and swipes, that the user would otherwise have to perform themselves. CGUI avoids rebuilding existing user experiences in a chat window. More importantly, it makes it possible for end users, instead of software engineers, to create new skills by providing pairs of natural language expressions and a demonstration of the corresponding GUI actions.
Omar Tawakol : The Rise Of Voice-Activated Assistants In The Workplace [Slides]
The market is already demonstrating strong value in the home for voice-activated AI, but the work environment has yet to catch up. Omar will explain why voice-activated AI is the most important development to come to the workplace. He will draw on his experience creating Eva, the first enterprise voice assistant focused on making meetings more actionable, and dive specifically into the challenges of ASR (Automatic Speech Recognition), NLP, and neural networks in creating these kinds of voice-activated assistants. He will share how his team has overcome these challenges.
10:30-11:00am
Break
AI Startups in E-Commerce
Ambarish Kenghe : Training Machines to Design Fashion
Fashion is partly an art, and that makes it difficult to understand and create through machines. Myntra is using modern AI techniques to crack this problem and is now able to design many fashion articles through machines with no designer intervention whatsoever. This is more than an experiment; we are selling these on our platform with great success. Myntra's Chief Product Officer will talk about how they have achieved this and what is next. This can have broad applicability for design in other fields as well. He will also touch on AI techniques Myntra is leveraging to create significant impact on other parts of its business.
Andy Pandharikar : Deep Product Learning
There are over 5 billion unique products sold worldwide, with over 30K new products introduced every month. We are using deep learning to understand product data and the associated consumer feedback in the form of natural language. We call it “Deep Product Learning ®”. When AI starts to understand which product features work well and which don’t, it can start influencing product design as well as marketing and merchandising. But it all starts with building reliable product matching and categorization systems. I will describe why these are non-trivial challenges and highlight other fundamental problems in this space that deep learning can finally solve. I will share our learnings from building an applied AI startup and end the talk with our vision for AI to disrupt commerce as an industry.
12:00-12:30pm
AI Startups in IoT
Zico Kolter : Deep Learning and the Digital Transformation [Slides]
We are in the midst of a digital transformation: a shift in virtually every industry based upon ubiquitous sensing (IoT), hugely scalable computing, vast amounts of data, and AI. At C3 IoT, we have developed a platform to enable and accelerate this digital transformation across multiple industry sectors. This talk will focus on the role of deep learning, which provides the "algorithmic glue" to unlock value in enterprise digital transformations. Using the C3 IoT Platform, data scientists can rapidly develop and deploy scalable deep learning services that run against all relevant enterprise data. I will highlight C3 IoT's work using deep learning to both detect fraud and forecast failures, which enables better performance and universal analysis of data.
12:30-1:30pm
Lunch
1:30-2:00pm
AI Startups in Video Understanding
Roland Memisevic : Common sense video understanding at TwentyBN [Slides]
Deep learning has evolved not linearly but through a series of step-functions: sudden, unexpected outbreaks of capability that fundamentally changed the envelope of what computers are able to do. At TwentyBN, we have created spatio-temporal video models and data infrastructure that have allowed us to grow a dataset of approximately one million labeled videos showing everyday common-sense scenes and situations - many of them extremely subtle. This allowed us to successfully train neural networks end-to-end on a wide range of action understanding tasks that neither hand-engineering nor neural networks appeared anywhere near solving just a few months ago. I will show how these recognition tasks now drive commercial value at TwentyBN, and how they drive our long-term AI agenda for learning common sense world knowledge through video.
2:00-2:30pm
AI Startups in HR
Ashutosh Garg : AI will take jobs away or will bring the right job to people
50% of unemployment in the US is because qualified people are not getting matched to the right jobs. Many diverse candidates get filtered out because of inherent bias in our interview processes. While most people are worried about AI taking over their jobs, AI can help people get the jobs they deserve. People's career trajectories form fascinating time series data. I will show how one can use AI on top of this data to predict what people will do next in their careers. Enterprises can then use these predictions to optimize their hiring processes.
2:30-3:00pm
Exits of AI Startups
Ilya Gelfenbeyn : Successful Exits - Lessons from API.AI [Slides]
Ilya, former Co-Founder and CEO of API.AI, will speak about his experience starting API.AI and its exit through acquisition by Google.
3:00-3:30pm
Break
Startup Demo Session
Demos from 10 startups; each startup has 5 minutes for its demo, followed by 5 minutes of comments from the VC panel.
Demo Startups:
  1. RedMarlin
    RedMarlin protects brands from online infringement and abuse, monitors the internet, and detects fake websites in real-time.
    Presenter: Abhishek Dubey, Co-founder & CEO; Shashi Prakash, Co-founder & Chief Scientist
  2. TrueShelf
    TrueShelf is an AI powered adaptive learning platform that adaptively generates an unlimited number of problems.
    Presenter: Dr. Shiva Kintali, Founder & CEO
  3. RFNav
    RFNav is developing an autonomous vehicle navigation system to allow driving in all weather conditions.
    Presenter: Jim Schoenduve, Director of Strategy
  4. AFanta
    AFanta develops cutting-edge video processing technologies to create video social experiences and accelerate VR adoption.
    Presenter: Yang Qin, CEO
  5. 3DLook
    3DLOOK uses computer vision, deep learning, and 3D matching algorithms to build SAIA, its first product for the retail industry, with human body measurement accuracy of up to 98%.
    Presenter: Vadim Rogovskiy, CEO & Co-founder
  6. intensivate
    Intensivate produces a 4U rackmount box that replaces 120 Intel-based servers. Intensivate is the server of choice for CPU-based AI applications.
    Presenter: Sean Halle, Founder and CEO
  7. Peritus
    Peritus is a virtual expert for AI-driven automation of support delivery and incident resolution for datacenters, telco networks and industrial operations.
    Presenter: Santhosh Srinivasan, Cofounder & VP, Engineering; Kamesh Raghavendra, Cofounder & VP, Product
  8. Cerebri
    CerebriAI looks at existing customer journey sequences as well as associated demographics to predict and produce desired sales outcomes.
    Presenter: Jean Belanger, CEO
  9. Decision Engines
    Decision Engines is an intelligent end-to-end business process automation platform that automates human touch points in business processes.
    Presenter: Sridhar Gunapu, Co-Founder
  10. Vizzario
    The Vizzario AI Platform combines vision science with AI to measure how humans view and interpret visual information, generating new methods to understand, optimize and personalize the end user experience.
    Presenter: Dr. Khizer Khaderi, CEO/Founder

VC Panelists:
  • Samir Kumar, Managing Director at Microsoft Ventures
  • Ashmeet Sidana, Managing Partner at Engineering Capital
  • Amit Garg, Principal at Samsung NEXT Ventures
Day 3 - November 5, 2017
8:30am-12:30pm
Training - Natural Language Processing
Instructor: S K Reddy
Natural Language Processing (NLP) has made tremendous progress in processing text in recent years. Question answering, topic modeling, summarization, sentiment analysis, spam email detection, auto-response to emails, and medical diagnosis are a few of the many problems being solved with NLP.
Objectives of this training session:
(i) Novices: Introduce NLP.
(ii) Intermediate: Deepen insight into NLP and its subtopics.
(iii) Experts: Sharpen skills in NLP subtopics such as summarization, question answering and language translation.
Agenda:
(1) ML and NLP fundamentals.
(2) Neural networks, RNN, LSTMs, GRUs.
(3) Question answering, topic modeling, sentiment analysis, summarization and translation.
(4) NLP frameworks. Intro to TensorFlow, Keras, Caffe.
(5) Word embeddings, word2vec, GloVe, language modeling.
(6) Hands-on exercise on an NLP problem: download a dataset and implement a model.
(7) Define next steps to continue NLP learning.
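For a flavor of the word-embedding material in item (5): trained embeddings place related words close together, typically measured by cosine similarity. A minimal sketch below uses made-up 4-dimensional vectors purely for illustration (real word2vec or GloVe vectors are learned from large corpora and have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings with illustrative (not trained) values.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.7, 0.7, 0.1, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.8]),
}

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

With real pretrained vectors the same comparison is one line via a library such as gensim, which the session's hands-on exercise could use.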
1:30pm-5:30pm
Training - Deep Reinforcement Learning
Deep Reinforcement Learning by Facebook Team
Instructor: Yuandong Tian
A deep dive into deep reinforcement learning, with hands-on experience applying deep RL to games, hosted by the Facebook team.
Agenda:
(1) The basics of reinforcement learning.
(2) Q-learning
(3) Introduction to deep reinforcement learning
(4) Policy gradients.
(5) Actor-critic algorithms.
(6) Introduction to ELF, an extensive, lightweight and flexible platform for deep RL and designing AI agents (http://github.com/facebookresearch/ELF).
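As a taste of item (2), tabular Q-learning updates a table of action values toward the reward plus the discounted best value of the next state. The sketch below runs it on a hypothetical 5-state chain environment (the environment, hyperparameters and episode count are illustrative assumptions, not from the training materials):

```python
import random

# Hypothetical chain environment: states 0..4, actions 0 (left) / 1 (right).
# Reward 1.0 only on reaching the goal state 4; episodes start at state 0.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap from the best next-state value.
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# Greedy policy per non-goal state after training.
print([0 if q[0] > q[1] else 1 for q in Q[:GOAL]])
```

Deep RL, as covered in items (3)-(5), replaces the Q table with a neural network so the same idea scales to large state spaces such as game screens.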
Sponsors
Diamond
Platinum
Gold
Knowledge Partner
Media Partner
Be Our Sponsor
NOV 3-5, 2017
5001 Great America Pkwy,
Santa Clara, CA 95054
Register
Suggested Hotel
Hyatt Regency Santa Clara
5101 Great America Pkwy, Santa Clara, CA 95054
Contact Us
940 Stewart Dr Sunnyvale, CA 94085
info@aifrontiers.com