It takes a lot of energy for machines to learn – here's why AI is so power-hungry

This month, Google forced out a prominent AI ethics researcher after she voiced frustration with the company for making her withdraw a research paper. The paper pointed out the risks of language-processing artificial intelligence, the type used in Google Search and other text analysis products.

Among the risks is the large carbon footprint of developing this kind of AI technology. By some estimates, training an AI model generates as much carbon as building and driving five cars over their lifetimes.
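To put that comparison in rough numbers: the widely cited 2019 estimate behind claims like this (by Strubell and colleagues; an assumption on my part about which study is meant) put one large training run at roughly 626,000 pounds of CO2, against about 126,000 pounds for an average American car over its lifetime including fuel, so 626,000 / 126,000 ≈ 5 cars.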

I am a researcher who studies and develops AI models, and I am all too familiar with the skyrocketing energy and financial costs of AI research. Why have AI models become so power hungry, and how are they different from traditional data center computation?

Today’s training is inefficient

Traditional data processing jobs done in data centers include video streaming, email and social media. AI is more computationally intensive because it needs to read through lots of data until it learns to understand it – that is, until it is trained.

This training is very inefficient compared with how people learn. Modern AI uses artificial neural networks, which are mathematical computations that mimic neurons in the human brain. The strength of the connection between each neuron and its neighbors is a parameter of the network called a weight. To learn how to understand language, the network starts with random weights and adjusts them until its output agrees with the correct answer.
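To make that weight-adjustment loop concrete, here is a minimal sketch in Python using NumPy. The toy task (learning XOR), the network size, the learning rate and all names here are illustrative assumptions of mine, not the setup used by Google's language models, which run the same kind of loop over billions of weights.

```python
import numpy as np

# A minimal sketch of the training loop described above.
# The task, sizes and names are illustrative, not any production setup.

rng = np.random.default_rng(0)

# Tiny dataset: inputs and the "correct answers" the network must match.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Start with random weights, as the article describes.
W1 = rng.normal(size=(2, 8))   # input -> hidden connection strengths
W2 = rng.normal(size=(8, 1))   # hidden -> output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error between the output and the correct answer.
    err = out - y

    # Backward pass: nudge each weight slightly to reduce the error.
    grad_out = err * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(out.round(2))  # should approach [0, 1, 1, 0]
```

Each pass through the loop reads the whole dataset and updates every weight. A modern language model repeats this over billions of weights and far larger datasets, which is where the energy cost comes from.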
