At one time I was the master of the technology universe. Holding the title of Managing Director of Technology Research and Chief Market Strategist, I had an impressive level of knowledge.
For example, I knew everything about the difference between dial-up Internet speed and DSL. I understood that DSL speed depended on the user's proximity to the equipment at the central office. The puzzling question of the day was how important Bluetooth was going to be.
No matter how much effort goes into keeping up with technology, it changes too fast. You could work at it full time and still not know all there is. Fortunately, I am not alone; this is a common lament pretty much everywhere.
A case in point is semiconductor giant Intel. Recently a major Wall Street firm lowered its investment rating on the stock from average to underperform. That is as close as you can get to a sell recommendation. Intel has been a huge investment payday over the past 30 years, so what is the reasoning behind the change?
It all has to do with deep neural networks. As a key part of Artificial Intelligence, deep neural networks require the type of huge computing power that makes Donald Trump's version of huge look modest.
Opinion has it that Intel rival Nvidia has the edge in emerging parallel workloads like deep neural networks. Wow, now that is a real tectonic shift in the tech universe.
I must admit in all honesty that I can’t tell the difference between a deep neural network and a shallow one, but I plan to take the next rainy weekend and brush up on my reading.
The Tensor Processing Unit: This Is Bigger Than Huge
Deep within Google, with the help of a retired UC Berkeley professor, an effort is far along to create a newer, faster chip. By itself, this is hardly novel. Everybody in Silicon Valley has run with that idea in the past.
The development team was motivated by a $6-$10 billion incentive. Based on Google’s conservative estimates of usage of its machine learning technology, running the workload on either Intel or Nvidia chips would force Google to double the capacity of its data centers. The cost of that would truly be a bigger than huge investment.
In the world of computer chips, everyone makes claims of higher speeds and lower power consumption, so it is important to be skeptical when such claims are offered.
Faster Than A Speeding Bullet
Ok, so here are the claims: the TPU runs 15-30 times faster than comparable chips from Intel or Nvidia while delivering 30-80 times better performance per watt. That is just nasty fast. If true, this could cause problems for both Intel and Nvidia.
Instead of investing in servers, Google parent Alphabet may be tempted to build its own semiconductor foundries; not exactly a small cost item itself. The other, and more efficient, approach would be to follow the Qualcomm model: license the technology and let others spend the big bucks.
Remember, Google is not the only company chasing the gold in AI. Virtually every company will have its own development interest, and the FAANG companies are only part of the list.
Under the old standard, Moore’s Law, chip speeds double every 18 months. That law sounds about as antiquated as the dial-up modem when set against the prospect of a possible 30-fold increase in the not-too-distant future.
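To put that gap in perspective, here is a quick back-of-the-envelope calculation (a sketch, assuming the 18-month doubling cadence cited above) of how long Moore's Law would need to deliver a 30-fold speedup:

```python
import math

DOUBLING_MONTHS = 18   # Moore's Law cadence cited above
TARGET_SPEEDUP = 30    # low end of the claimed TPU advantage

# Number of doublings needed to reach a 30x gain: 2**n = 30
doublings = math.log2(TARGET_SPEEDUP)
months = doublings * DOUBLING_MONTHS

print(f"{doublings:.2f} doublings, roughly {months / 12:.1f} years")
```

In other words, a 30-fold jump arriving in a single chip generation compresses roughly seven years of Moore's Law progress into one product.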
No question that AI is here and huge. Who knows, some day AI may even be able to find a way to reset a password without needing to be told the name of my third-grade teacher. Now that is technology doing work for the good of mankind.