Intel has internally benchmarked its high-performance chips, but the MLPerf tools will provide an objective, third-party basis for testing.

Tech Companies Partner with Universities on Benchmarking

With the release of MLPerf, AI developers in universities and Silicon Valley will be able to discover which computer systems perform best on machine learning tasks, such as translation, according to a story in HPCwire, which covers high-performance computing.

Google, Baidu, Intel, AMD, and researchers at Harvard and Stanford universities collaborated on the performance tool, which the companies hope will speed AI adoption. The areas it benchmarks include image recognition, object detection, speech recognition, translation, recommendation, sentiment analysis, and reinforcement learning for predicting moves in video games.

“AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software,” said AI pioneer Andrew Ng.

It may not sound like earthshaking news, but every advance in benchmarking machine learning systems and neural networks helps developers aim higher. The developers of MLPerf want to:

◼︎ Accelerate progress in ML via fair and useful measurement
◼︎ Enable fair comparison of competing systems while encouraging innovation that improves the state of the art in ML
◼︎ Keep benchmarking effort affordable so all can participate
◼︎ Serve both the commercial and research communities
◼︎ Enforce replicability to ensure reliable results
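To make these goals concrete, the sketch below shows one common way ML benchmarks report results: measuring the wall-clock time a system takes to train a model to a target quality. This is only an illustration in plain Python with a toy linear model, not MLPerf's actual harness; the function name, target metric, and training task are all assumptions for the example.

```python
import random
import time

def benchmark_time_to_quality(target_mse=0.01, lr=0.05, seed=0, max_steps=100_000):
    """Train a toy linear model (true relation y = 2x + 1) with SGD and
    report the steps and wall-clock time taken to reach a target loss,
    i.e. a 'time to quality' style measurement. Hypothetical example,
    not the MLPerf reference implementation."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    start = time.perf_counter()
    for step in range(max_steps):
        # Draw one training example from the toy task
        x = rng.uniform(-1.0, 1.0)
        y_true = 2.0 * x + 1.0
        err = (w * x + b) - y_true
        # SGD update on squared error
        w -= lr * 2.0 * err * x
        b -= lr * 2.0 * err
        # Periodically evaluate on a small fixed grid of points
        if step % 100 == 0:
            pts = (-1.0, -0.5, 0.0, 0.5, 1.0)
            mse = sum((w * v + b - (2.0 * v + 1.0)) ** 2 for v in pts) / len(pts)
            if mse < target_mse:
                return step, time.perf_counter() - start
    return max_steps, time.perf_counter() - start

steps, seconds = benchmark_time_to_quality()
print(f"reached target quality in {steps} steps ({seconds:.4f}s)")
```

Because every run trains to the same quality target, two systems can be compared fairly on elapsed time alone, which is the spirit of the "fair and useful measurement" goal above.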

Read more at HPCwire.