On November 11 at the AI Frontiers Conference at the San Jose Convention Center, California, Google Cloud Platform (GCP) will present a full-day training on image understanding with TensorFlow.
Udacity opened a three-month NLP Nanodegree program this year. The AI Frontiers Conference has teamed up with Udacity to present a shorter version of the program, delivered directly by Udacity instructors. The training touches on text processing, feature extraction, topic modeling, and NLP with deep learning.
I remember the first time I saw Photoshop more than 10 years ago: our designer was trying to modify a picture for our website to match the site's color, green. He clicked on the blue sweater of the woman in the picture (and then clicked on a color on the side), and magically the whole sweater turned green while the rest of the picture stayed the same. I was awed.
Over 90 percent of the world’s creative professionals use Photoshop, an Adobe product. Over 12 million people subscribe to Adobe’s Creative Cloud, the suite that includes Photoshop, Premiere Pro, After Effects, and more. Each day Adobe receives hundreds of millions of highly produced images and videos from all over the world. Leveraging such a massive volume of data with artificial intelligence can help Adobe better understand what its artist and designer customers really need.
AI has come to the game industry.
Last year, Electronic Arts established an R&D division called SEED. The team leverages AI to explore new technologies and creative opportunities for future games. Recently it showcased its latest work on real-time ray tracing and self-learning AI agents that can play Battlefield.
-by Junling Hu
One of the most exciting developments in AI in 2018 is AutoML, which automates the machine learning process. In January this year, Google released AutoML Vision. Then in July, Google launched AutoML for machine translation and natural language processing. Both packages have already been used in practical applications by companies such as Disney.
Google’s AutoML is based on Neural Architecture Search (NAS), invented at the end of 2016 (and presented at ICLR 2017) by Quoc Le and his colleagues at Google Brain. In this article I will review the historical context of AutoML and the essential ideas of NAS.
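To give a flavor of the core idea before the details, here is a toy sketch in Python with a made-up search space and a fake reward: a controller proposes architectures, each proposal is evaluated, and the resulting score is fed back to bias future proposals. The real NAS uses a recurrent-network controller trained with reinforcement learning on actual child-network accuracy; the crude preference update below is only a stand-in for that.

```python
# Toy sketch of the NAS loop: propose an architecture, score it, reinforce
# the choices that scored well. Search space, reward, and update rule are
# all illustrative assumptions, not Google's implementation.
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 6],
    "units": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(preferences):
    """Sample one architecture, favoring choices that earned high rewards."""
    arch = {}
    for key, options in SEARCH_SPACE.items():
        weights = [preferences[(key, o)] for o in options]
        arch[key] = random.choices(options, weights=weights)[0]
    return arch

def evaluate(arch):
    """Stand-in for training a child network and measuring validation accuracy."""
    score = 0.5 + 0.02 * arch["num_layers"] + 0.0005 * arch["units"]
    if arch["activation"] == "relu":
        score += 0.05
    return min(1.0, score + random.uniform(-0.02, 0.02))

# Start with uniform preferences and reinforce the choices that pay off.
prefs = {(k, o): 1.0 for k, opts in SEARCH_SPACE.items() for o in opts}
best = None
for step in range(50):
    arch = sample_architecture(prefs)
    reward = evaluate(arch)
    for key in SEARCH_SPACE:
        prefs[(key, arch[key])] += reward  # crude update in place of REINFORCE
    if best is None or reward > best[1]:
        best = (arch, reward)

print("Best architecture found:", best)
```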
Last month Sony re-launched its AIBO robot dog, which took Mario Munich on a trip down memory lane: it seems not that long ago that, in a research project he was involved in, he coded speech recognition and object detection into an AIBO. That was in the early 2000s, before Sony discontinued AIBO in 2006.
Dressed in a well-tailored suit and a white shirt, Kai-fu Lee walked into his Beijing-headquartered office on April 25, where his Chinese venture capital firm Sinovation Ventures announced a new $500 million fund to back early-stage and growing tech companies in both China and the U.S. With the new fund, Sinovation Ventures manages a total of $2 billion across six funds.
One day in December 2009, Sumit Gulwani met a businesswoman on a flight home from a seminar. After learning that Gulwani held a Ph.D. in computer science and was a Microsoft researcher, she asked him one question: “Is there a way to merge two columns in Excel, when one column has a first name and the other a last name, so that a single column has both first and last names?” Gulwani couldn’t offer an answer.
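For readers wondering what she was asking for, here is a minimal sketch of that column merge in Python with pandas; the column names and sample rows are illustrative assumptions, and in Excel itself the same result can be produced with a simple concatenation formula such as =A2 & " " & B2.

```python
# A minimal sketch of the merge the businesswoman described, using pandas.
# The column names ("First", "Last") and rows are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "First": ["Ada", "Alan", "Grace"],
    "Last": ["Lovelace", "Turing", "Hopper"],
})

# Concatenate the two columns into a new full-name column.
df["Full Name"] = df["First"] + " " + df["Last"]
print(df)
```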
On the stage stood a young man wearing a simple shirt. As he talked, slowly and clearly, you could feel the intensity and profound clarity. All the AI jargon became simple, woven together into one beautiful framework. People who know him would not be surprised, since he is the one behind some of that jargon. But it is a surprise to many that he also helps lead a billion-dollar nonprofit organization whose mission is preparing humanity for the unavoidable proliferation of AI. Ilya Sutskever, who co-founded OpenAI and still leads its research, wants to open up AI technology to everyone and ensure that it is safe for humanity.
Li Deng’s journey with AI has spanned more than three decades. After working as a corporate researcher and university professor and building his career in speech research, Deng took a plunge into the financial world: he joined the $30 billion hedge fund Citadel as Chief AI Officer in May 2017, leaving behind his job as Chief Scientist of AI at Microsoft, where he headed the company’s AI school and founded the Deep Learning Technology Center. Also left behind was his affiliate professorship of more than 17 years at the University of Washington.
Facebook has quietly grown into a video platform. It has the largest collection of user-generated video content outside YouTube. Each day, people view Facebook videos 8 billion times and spend 100 million hours watching them (with 20 percent of videos live-streamed). Facebook users can upload videos, stream live video on their pages, and have video chats with their friends. Recently, Facebook launched a new feature that lets people video chat in Facebook Groups.
Pieter Abbeel is on a quest. He wants to make a robot that can cook, clean up the room, make the bed, take out the trash, and fold the laundry.
The UC Berkeley professor has been on this quest since he was a graduate student in Andrew Ng’s group at Stanford University back in 2002. At that time, there was no robot for him to play with, so he started with self-driving cars and helicopters.
Personal drones capture our imagination. Imagine you have something up there looking out for you. It takes in the vast landscape, the tops of buildings, and sweeping views of the ocean, and captures them for you. In the past five years, the consumer drone market has quietly grown. In 2017, over 2.8 million consumer drones were sold worldwide. The next fact may surprise you (if you are not a drone hobbyist): more than 70 percent of them are made by a single company, DJI.
Slim, quiet, and wearing a pair of thick glasses, Quoc Le does not strike you as someone who is leading a revolution in the AI field.
In 2011, Le co-founded Google Brain together with his Ph.D. advisor Andrew Ng, Google Fellow Jeff Dean, and Google researcher Greg Corrado. The goal was to explore deep learning in the context of Google’s gigantic data. Before that, Le had done pioneering work at Stanford on unsupervised deep learning.
Over the past few months, Arm has scored a couple of important acquisitions: it acquired Stream Technologies, a pioneer in machine-to-machine communications, to enable connectivity management for every device, and it spent $600 million to acquire the California-based data analytics firm Treasure Data to expand its IoT ecosystem.
Arm is betting its future on the Internet of Things and artificial intelligence. The UK semiconductor vendor envisions connecting a trillion IoT devices by 2035 and deriving real business value from IoT data.
Language understanding has so far been the privilege of humans. That is why studying natural language processing (NLP) promises huge potential for approaching the holy grail of artificial general intelligence (AGI). Many researchers dive into the field of NLP: machine translation, question answering, reading comprehension, natural conversation, and more.
Shining a spotlight on the latest research progress in language understanding, the Association for Computational Linguistics (ACL) conference this year honored “Know What You Don’t Know: Unanswerable Questions for SQuAD” as its best short paper. SQuAD, which stands for Stanford Question Answering Dataset, is widely regarded as the leading reading comprehension benchmark. It has spawned some of the latest models achieving human-level accuracy on the task of question answering.
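To make the paper’s contribution concrete, here is an illustrative pair of records roughly following SQuAD 2.0’s public JSON format (the passage and questions below are made up): answerable questions come with an answer span in the context, while the newly introduced unanswerable questions are flagged and carry no answer.

```python
# Illustrative records in roughly the SQuAD 2.0 style: one answerable question
# with an answer span, and one plausible but unanswerable question.
context = ("The Amazon rainforest covers much of the Amazon basin of South America. "
           "The majority of the forest is contained within Brazil.")

examples = [
    {
        "question": "Which country contains most of the Amazon rainforest?",
        "is_impossible": False,
        "answers": [{"text": "Brazil", "answer_start": context.find("Brazil")}],
    },
    {
        # The paper's key idea: questions that look relevant but cannot be
        # answered from the given context.
        "question": "In which year was the Amazon rainforest planted?",
        "is_impossible": True,
        "answers": [],
    },
]

for ex in examples:
    print(ex["question"], "->", ex["answers"] or "no answer in the context")
```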
-by Junling Hu
A few years ago I attended a talk by Foxconn’s CTO. When he mentioned that Foxconn was the third-largest robot manufacturer at that time, I was surprised. “In fact,” he added, “we have already built a lights-out factory.” He showed a video clip: in a factory, mobile robots moved around, robotic arms worked on components, and conveyor belts rolled smoothly. There was not a single person there. I was deeply amazed. Fast forward to today: Foxconn operates six lights-out factories and has more than 50,000 robots in its facilities.
Why would a manufacturer like Foxconn, well known for its vast campuses of workers (more than 1 million), pursue automation? There are many reasons for such a move.
Deep learning has allowed technologists to employ AI in numerous business and consumer applications, but researchers are working feverishly to master other techniques that will broaden AI’s reach. Check out the video below from McKinsey.com.
AI researchers largely agree that fears about machines infused with artificial intelligence becoming autonomous and overpowering humans are overblown. Check out the video below from McKinsey.com.
In the race of AI personal assistants, Microsoft and Amazon are integrating Alexa and Cortana to keep their lead over Google Home. Most recently, on September 25 Apple announced that it had dropped Bing in favor of Google search for Siri.
To learn how this game will play out, join us for a dialogue between Amazon Alexa and Google Home. On Nov 3 at the AI Frontiers Conference (aifrontiers.com), Ruhi Sarikaya, Director of Amazon Alexa, will join a panel with Dilek Hakkani-Tur, who is part of the brains behind Google Home.