ETV Bharat

Facebook AI model beats Google, runs 5 times faster on GPUs

A team from Facebook AI Research (FAIR) has developed a novel low-dimensional design space called 'RegNet' that outperforms established models, including Google's, and runs up to five times faster on GPUs.

Published : Apr 12, 2020, 3:41 PM IST

RegNet produces simple, fast and versatile networks. In experiments, it outperformed Google's SOTA EfficientNet models, the researchers said in a paper titled 'Designing Network Design Spaces', published on the pre-print repository arXiv.

The researchers aimed for "interpretability and to discover general design principles that describe networks that are simple, work well, and generalize across settings".

The Facebook AI team conducted controlled comparisons with EfficientNet, using no training-time enhancements and the same training setup for both.

Introduced in 2019, Google's EfficientNet uses a combination of NAS and model scaling rules and represents the current SOTA. With comparable training settings and FLOPs, RegNet models outperformed EfficientNet models while being up to 5× faster on GPUs.

Rather than designing and developing individual networks, the team focused on designing network design spaces that comprise huge, possibly infinite populations of model architectures.

Design space quality is analyzed using the error empirical distribution function (EDF), which gives the fraction of sampled models whose error falls below a given threshold.
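The idea behind an error EDF can be sketched in a few lines: sample a population of models from a design space, record each model's error, and compute the fraction of models below each error threshold. The following is an illustrative sketch only — the `error_edf` helper and the normally distributed errors are invented for this example, not taken from the paper.

```python
import numpy as np

def error_edf(errors, thresholds):
    """Fraction of sampled models whose error falls below each threshold."""
    errors = np.asarray(errors)
    return np.array([(errors < t).mean() for t in thresholds])

# Made-up errors for two hypothetical design spaces of 500 sampled models each.
rng = np.random.default_rng(0)
space_a = rng.normal(loc=0.30, scale=0.03, size=500)  # lower-error population
space_b = rng.normal(loc=0.35, scale=0.03, size=500)

thresholds = np.linspace(0.2, 0.5, 7)
edf_a = error_edf(space_a, thresholds)
edf_b = error_edf(space_b, thresholds)
# A design space whose EDF rises earlier contains a larger
# fraction of good models, so its curve dominates the other's.
```

Comparing whole EDF curves, rather than single best models, is what lets the researchers rank design spaces as populations.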

Analyzing the RegNet design space also yielded unexpected insights into network design.

The researchers noticed, for example, that the depth of the best models is stable across compute regimes, with an optimal depth of 20 blocks (60 layers).

While modern mobile networks commonly employ inverted bottlenecks, the researchers found that using inverted bottlenecks degrades performance; the best models use neither a bottleneck nor an inverted bottleneck, according to the paper.

The Facebook AI research team also recently developed a tool that tricks facial recognition systems into wrongly identifying a person in a video.

The "de-identification" system, which also works in live videos, uses machine learning to change key facial features of a subject in a video.

FAIR is advancing the state-of-the-art in artificial intelligence through fundamental and applied research in open collaboration with the community.

The social networking giant created the Facebook AI Research (FAIR) group in 2014 to advance the state of the art of AI through open research for the benefit of all.

Since then, FAIR has grown into an international research organization with labs in Menlo Park, New York, Paris, Montreal, Tel Aviv, Seattle, Pittsburgh, and London.

(IANS)

Copyright © 2024 Ushodaya Enterprises Pvt. Ltd., All Rights Reserved.