Baidu researchers hope processor vendors and data centre operators will contribute to expanding the benchmark and running their chips on it.
The researchers behind Baidu's DeepBench hope that processor vendors beyond Intel and Nvidia, along with data centre operators, will contribute to expanding the benchmark and running their chips on it.
"I'd be personally interested in results on AMD GPUs and on ASICs from start-ups with custom hardware, for whom it may be difficult to run full models — this might be an easier way for them to get their capabilities out there," Diamos said, noting the lab currently uses systems with eight Nvidia TitanX processors to run its speech recognition models.
DeepBench tests low-level hardware libraries, not the higher-level AI frameworks that data centre operators create, such as Baidu's PaddlePaddle and Google's TensorFlow.
"At the framework level, there's a huge amount of difference in the models, and different models for different apps, but below the frameworks they use a few common operations," said Sharan Narang, a software engineer in Baidu's AI lab. "The hope is we can find the core common operations that are more actionable for hardware makers," he said.
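One of the common operations beneath the frameworks is dense matrix multiplication (GEMM), which underpins most neural-network layers. The sketch below, an illustrative assumption rather than DeepBench's actual harness, shows the general shape of such a micro-benchmark: time the raw operation and report throughput.

```python
import time
import numpy as np

def time_gemm(m, n, k, repeats=10):
    """Time an m x k by k x n matrix multiply and return GFLOP/s.

    Illustrative sketch only; DeepBench's real kernels call vendor
    libraries (e.g. cuDNN, MKL) rather than NumPy.
    """
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b  # warm-up pass so first-call overhead isn't measured
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    flops = 2.0 * m * n * k  # one multiply and one add per inner-loop step
    return flops / elapsed / 1e9

print(f"{time_gemm(1024, 1024, 1024):.1f} GFLOP/s")
```

Measuring such kernels in isolation lets hardware makers compare devices directly, without porting an entire model to each platform.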
The researchers are considering whether they need a separate benchmark for inference, the distinct job of using trained models to find patterns in new data. Today Baidu and Microsoft use FPGAs on servers to accelerate that less compute-intensive work.
Whether competing data centre operators collaborate on DeepBench remains to be seen.
To date, Google has made its TensorFlow framework open source and created an ASIC to accelerate inferencing jobs on its servers. For its part, Facebook released an open source server using multiple Nvidia GPUs for training neural networks.
A handful of start-ups are working on processors optimised for training neural networks, including Wave Computing, which presents its work at an event this week, and Nervana, which Intel recently acquired.
DeepBench is available online along with first results from Intel and Nvidia processors running it.
This article was first published on EE Times.