What we are trying to achieve with AI analysis at Kisaco Research
July 8th, 2020
Defining what we mean by AI
Before we begin the blog topic, I believe it is vital to state how I define artificial intelligence (AI), because everyone has their own ideas on the subject and they may not match mine. I use the term AI broadly, as a label for the space, without prejudice to its state of progress in achieving its goals. I define the AI community’s current state of progress as the age of “machine intelligence”, by which I mean short of narrow AI (by maybe a decade or two) and a long way from general AI. Of course, this raises the question of how to define narrow and general AI. There are no standard definitions, but for me narrow AI’s most defining characteristic is rapid learning from a few examples, plus some other milestones, such as learned skills memory (our current AI models, once trained on a task, have limited transference to other tasks), and perhaps Judea Pearl’s advocacy of AI that understands cause and effect (Pearl is well known in the community for groundbreaking work on Bayesian networks and, more recently, on causality). General AI (also called artificial general intelligence) is more easily defined as achieving parity with human intelligence: easy to define but, given our lack of knowledge as to how human intelligence works, a mountain to climb. And can we achieve general AI before we understand human intelligence? While you ponder that, and if you enjoy the topic of definitions, take a look at Jose Hernandez-Orallo’s book The Measure of All Minds (CUP, 2017).
Today it is all about deep learning
Lest the above makes you wonder whether I believe we have made any progress, the answer is a resounding yes. In 1988 I started taking an interest in neural networks, a branch of AI that really took off with the invention of backpropagation around 1985-86, independently by several researchers. Backpropagation made it possible to train hidden neuron layers, sandwiched between the input nodes and output neurons, and gave neural networks the power to solve a host of tasks, such as pattern recognition and prediction. Note: machine learning (ML) is a branch of AI; neural networks are a branch of ML; and deep learning is a branch of neural networks.
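For readers who have never seen it in the small, here is a minimal sketch of the idea in Python/NumPy: a single hidden layer trained by propagating the output error backwards through the chain rule. The toy XOR task, the network size, the learning rate, and the iteration count are all illustrative choices of mine, not any particular researcher’s recipe.

```python
import numpy as np

# Toy task: XOR, the classic problem a network with no hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # illustrative learning rate
for epoch in range(5000):
    # Forward pass through the hidden layer to the output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error deltas for a squared-error loss,
    # pushed back through the sigmoid derivatives
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # the outputs should approach [0, 1, 1, 0]
```

The point of the sketch is the backward pass: without those hidden-layer deltas there is no principled way to assign credit to the sandwiched weights, which is exactly what held the field back before the mid-1980s.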
Around 1990-95 a group of British neural network researchers formed a society and launched a journal published by Springer, Neural Computing and Applications; the founding editor was Howard James and I was on the original editorial board (there is a new crop running the journal today). In 1994 my book, Neural Network Time Series Forecasting of Financial Markets, was published by J Wiley. Alas, with the excitement there was also a lot of hype, and when expectations of “intelligence” were found wanting the topic began to lose research funding: the second AI winter had set in. Part of the problem was a lack of computing power to run large networks, and limited data.
Around 2010 the second AI winter ended with the birth of deep learning: a re-branding of neural networks, but also a neural networks 2.0 that added a host of new and effective empirical rules for training large networks. The field was transformed by the availability of general-purpose computing on graphics processing units (GPGPU), the first AI hardware accelerators; Nvidia can claim credit for that. GPGPU provided supercomputing capabilities to any researcher in AI, and with the availability of big data, thanks to the Internet, our current era of deep learning took hold. There are too many useful, practical real-world examples of deep learning applications to list here, on top of tremendous advances in less practical but equally impressive feats such as beating world Go champions (the advances in methodology of course have practical value). In conclusion, deep learning is here to stay, as part of the science and engineering toolbox.
The new era of AI chips
The success of Nvidia and its high-end GPUs (we no longer talk about GPGPUs, but any Nvidia GPU with CUDA cores is one) led many chip designers to ask whether a dedicated AI architecture might be better than a chip based on graphics processing. The result is over $10.5b of investment funding and the launch of around 80 startups wanting a slice of the AI accelerator market, not to mention 34 established vendors playing in this market, worth some $10b in annual spend today and growing.
Several respected commentators tracking the semiconductor field also cover AI chips. What I noticed was that no one was comparing AI chips side by side in a holistic manner, going beyond the performance specifications already well covered by independent AI benchmarks such as MLPerf (see the blog on that topic).
With the Kisaco Leadership Chart (KLC) we created a new 3D analyst chart comparing vendor products side by side: technical features on the x-axis, vendor execution in the market on the y-axis, and market impact as the plotted circle size. Depending on their position in the chart, we rank the vendors as Leaders, Contenders, Innovators, and Emerging Players.
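To picture the mechanics, a KLC-style chart is essentially a bubble chart: two scored axes plus a third dimension encoded in marker size. A minimal matplotlib sketch follows; the vendor names and scores are entirely hypothetical and are not Kisaco’s actual assessments.

```python
import matplotlib.pyplot as plt

# Hypothetical scores purely for illustration:
# name: (technical features, market execution, market impact)
vendors = {
    "Vendor A": (8.5, 8.0, 300),
    "Vendor B": (7.0, 6.5, 150),
    "Vendor C": (5.5, 7.5, 90),
    "Vendor D": (4.0, 4.5, 40),
}

fig, ax = plt.subplots()
for name, (x, y, impact) in vendors.items():
    ax.scatter(x, y, s=impact, alpha=0.5)  # circle size encodes market impact
    ax.annotate(name, (x, y))

ax.set_xlabel("Technical features")
ax.set_ylabel("Vendor execution in the market")
ax.set_title("KLC-style comparison (illustrative data)")
plt.show()
```

In the real KLC the two axis scores are composites built from many weighted criteria; the sketch only shows how the three dimensions land on one chart.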
The first KLC we produced is also the first analyst comparison chart on the market for AI chips. The next blog discusses the challenging task of comparing AI chips holistically, covering both the products and the vendors behind them.
Appendix
Further reading
MLPerf, MLCommons, and improving benchmarking, Meta Analysis: the KR Analysis blog, July 2020.
Kisaco Leadership Chart on AI Hardware Accelerators 2020-21 (part 1): Technology and Market Landscapes, KR301, July 2020.
Kisaco Leadership Chart on AI Hardware Accelerators 2020-21 (part 2): Data Centers and HPC, KR302, July 2020.
Kisaco Leadership Chart on AI Hardware Accelerators 2020-21 (part 3): Edge and Automotive, KR303, July 2020.
Kisaco Leadership Chart on ML Lifecycle Management Platforms 2020-21, July 2020.
Author
Michael Azoff, Chief Analyst
Copyright notice and disclaimer
The contents of this product are protected by international copyright laws, database rights and other intellectual property rights. The owner of these rights is Kisaco Research Ltd., our affiliates or other third-party licensors. All product and company names and logos contained within or appearing on this product are the trademarks, service marks or trading names of their respective owners, including Kisaco Research Ltd. This product may not be copied, reproduced, distributed or transmitted in any form or by any means without the prior permission of Kisaco Research Ltd.
Whilst reasonable efforts have been made to ensure that the information and content of this product was correct as at the date of first publication, neither Kisaco Research Ltd. nor any person engaged or employed by Kisaco Research Ltd. accepts any liability for any errors, omissions or other inaccuracies. Readers should independently verify any facts and figures as no liability can be accepted in this regard - readers assume full responsibility and risk accordingly for their use of such information and content.
Any views and/or opinions expressed in this product by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Kisaco Research Ltd.