British startup Graphcore has developed an AI chip for computers that attempts to mimic the neurons and synapses of the human brain, so that it can “ponder” questions rather than simply crunch through data. Up until now, said Graphcore co-founder and chief executive Nigel Toon, GPUs and CPUs have excelled at precision, using vast amounts of energy to achieve small steps. Toon and Graphcore co-founder and CTO Simon Knowles call their less precise chips “intelligence processing units” (IPUs), which excel at aggregating approximate data points.
Bloomberg reports that, “there are various theories on why human intelligence forms this way, but for machine learning systems, which need to process huge and amorphous information structures known as ‘graphs’, building a chip that specializes in connecting nodelike data points may prove key in the evolution of AI.”
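To make that idea concrete, here is a minimal sketch (in Python, and not Graphcore software) of the kind of node-like data the quote describes: a small graph stored as adjacency lists, with each node’s value updated by aggregating its neighbors. The graph, the values, and the simple averaging rule are illustrative assumptions, not details from the article.

```python
# Minimal sketch (not Graphcore code): a tiny "graph" of node-like data
# points, where each node's value is refreshed by aggregating its neighbors.
# This shows the general pattern of connecting node-like data, not an IPU API.

graph = {            # adjacency list: node -> neighbors
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
values = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.1}

def aggregate(node):
    """Average a node's value with its neighbors' values (approximate, not exact)."""
    neighbors = graph[node]
    total = values[node] + sum(values[n] for n in neighbors)
    return total / (len(neighbors) + 1)

updated = {node: aggregate(node) for node in graph}
print(updated)
```

Every node can run this kind of neighbor aggregation independently, which is why a chip built around many small cores working on connected data points suits the workload.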
“We wanted to build a very high-performance computer that manipulates numbers very imprecisely,” explained Knowles. Toon said that “for decades, we’ve been telling machines what to do, step by step, but we’re not doing that anymore.” “This is like going back to the 1970s, when microprocessors were first coming out,” continued Toon. “We need to break out our wide lapels. We’re reinventing Intel.”
ARM Holdings co-founder/investor Hermann Hauser is a fan of Graphcore’s approach. “This has only happened three times in the history of computers,” he said. “CPUs in the 1970s, GPUs in the 1990s. Graphcore is the third. Their chip is one of the great new architectures of the world.”
Rather than focusing on the viability of Moore’s law, Toon and Knowles focus on Dennard scaling (also known as MOSFET scaling), “which stated that as transistor density improved, power demands would stay constant.” But the principle is no longer true: “adding more transistors to chips now means the chips tend to get hotter and more energy-hungry.” They believe that the heat problem will “stop phones and laptops from getting much faster in the years ahead unless circuits can be radically redesigned for efficiency.”
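The arithmetic behind that claim can be sketched in a few lines. Assuming the textbook model in which dynamic power scales with capacitance times voltage squared times frequency (an assumed model, not a figure from the article), ideal Dennard scaling keeps power density flat across a process shrink, while a stalled supply voltage makes it climb:

```python
# Back-of-the-envelope illustration of Dennard scaling (assumed simple model,
# not Graphcore's numbers): dynamic power per transistor ~ C * V^2 * f.
# Under ideal Dennard scaling, capacitance, voltage and transistor area all
# shrink together, so power density stays roughly constant. Once voltage stops
# scaling, packing in more transistors makes the chip hotter.

def power_density(cap, volts, freq, area):
    """Dynamic power density ~ C * V^2 * f per unit area."""
    return cap * volts**2 * freq / area

s = 0.7  # one process-node shrink: linear dimensions scale by ~0.7

baseline = power_density(cap=1.0, volts=1.0, freq=1.0, area=1.0)

# Ideal Dennard scaling: C, V and area all scale down, frequency scales up
# by 1/s -- power density is unchanged.
dennard = power_density(cap=s, volts=s, freq=1/s, area=s**2)

# Post-Dennard reality (voltage stuck near its floor): power density rises.
post_dennard = power_density(cap=s, volts=1.0, freq=1/s, area=s**2)

print(baseline, dennard, post_dennard)  # 1.0, 1.0, ~2.04
```

In this toy model, one shrink without voltage scaling roughly doubles power density, which is the heat problem Toon and Knowles describe.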
Graphcore did just that, “settling on a design with 1,216 processor cores” that can divide up the chip’s energy budget. Each chip “runs at 120 watts … so about 0.8 of a volt and about 150 amps,” according to Toon. The resulting IPU can “now recognize more than 10,000 images per second.”
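Those figures hang together with simple arithmetic: electrical power is voltage times current, and dividing the quoted throughput by the core count gives a rough per-core rate. The snippet below only checks the article’s numbers; it is not a benchmark.

```python
# Simple arithmetic check of the figures quoted above (illustration only):
# power = voltage * current, so ~0.8 V at ~150 A is ~120 W.
volts, amps = 0.8, 150
print(volts * amps)               # 120.0 watts

# Spreading the quoted 10,000+ images/second over 1,216 cores gives a rough
# per-core rate -- just dividing the article's numbers.
images_per_second, cores = 10_000, 1216
print(images_per_second / cores)  # ~8.2 images/second per core
```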
“We don’t tell the machine what to do; we just describe how it should learn and give it lots of examples and data — it doesn’t actually have to be supervised,” Knowles said. “The machines are finding out what to do for themselves.”
Graphcore helps its corporate customers build computers around the chips, offering server blueprints and free software tools. A large Graphcore installation “includes about 5 million processor cores and is capable of running almost 30 million programs at once.”
The rush to create specialized AI chips is competitive. Google launched “a class of microprocessors designed for machine learning,” and Tesla has applied for patents for its AI chips. Nvidia is also modifying its GPU chips to be less precise and more efficient. “Everyone else is sort of knocking at Nvidia’s door,” said Gartner researcher Alan Priestley. “Graphcore has a good position, but it’s still a very small competitor compared to Nvidia’s market presence.”