Unchecked Energy Consumption – Using AI to Improve AI
The artificial intelligence (AI) market is growing rapidly and is projected to reach $1.8 trillion by the end of this decade.
While AI mania started gaining traction in 2021, it was only last year that the technology had its real breakthrough. Everything related to AI, from new solutions to adoption, skyrocketed, pushing investment in generative AI through the roof.
Once generative AI systems like ChatGPT demonstrated new capabilities, everyone wanted a piece of this rapidly growing pie, with most of the private investment concentrated in the US.
Underpinning these popular tools are foundation models such as GPT-4 in the case of OpenAI’s ChatGPT. These big, multipurpose models require massive datasets and vast resources to train, but they serve as a starting point for developing machine learning (ML) models that power novel applications more quickly and cost-effectively.
Tech giant Google has released several foundation models: Imagen, Muse, and Parti for text-to-image generation; MedLM for the healthcare industry; Codey for coding; and Chirp, its universal speech model.
These models consume unprecedented amounts of compute and memory, much of it to store and retrieve the real-world data they operate on. For instance, GPT-3 was trained on about 500 billion words and uses 175 billion parameters. The result has been soaring energy demand for AI.
Over the past couple of years, the environmental impact of AI has been widely reported. Late last year, a peer-reviewed analysis tried to quantify this demand.
After examining the enormous energy costs of cryptocurrency mining, Alex de Vries, a data scientist at the Netherlands’ central bank and a Ph.D. candidate at Vrije Universiteit Amsterdam, has turned his attention to the latest tech trend: AI adoption. According to his latest assessment, NVIDIA will be shipping 1.5 million AI server units per year by 2027. Running at full capacity, these servers would consume at least 85.4 terawatt-hours (TWh) of electricity annually.
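That headline number is consistent with simple back-of-envelope arithmetic. Below is a minimal sketch, assuming each server draws roughly 6.5 kW (the rated power of an NVIDIA DGX A100-class machine; the per-server figure is our illustrative assumption, not necessarily de Vries's exact methodology) and runs around the clock:

```python
# Back-of-envelope check of the 85.4 TWh figure (illustrative assumptions).
servers = 1_500_000          # projected annual shipments by 2027
kw_per_server = 6.5          # assumed draw of a DGX A100-class server
hours_per_year = 24 * 365    # running at full capacity, 8,760 hours

kwh = servers * kw_per_server * hours_per_year
print(f"{kwh / 1e9:.1f} TWh per year")  # -> 85.4 TWh per year
```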
De Vries estimates that AI could potentially become a bigger energy drain than Proof-of-Work (PoW) networks like Bitcoin. However, these are just estimates; experts note that such figures are incomplete and contingent on assumptions about hardware and usage.
Last December, Sasha Luccioni of AI firm Hugging Face and her colleagues at the firm and Carnegie Mellon University also ran tests on 88 different models. Running each task 1,000 times, they found that most tasks use relatively little energy: text generation averaged 0.047 kWh per 1,000 inferences. The figures were much larger for image-generation models, which averaged 2.907 kWh per 1,000 inferences. For context, they noted that an average smartphone uses 0.012 kWh for a full charge.
Meanwhile, a recent paper estimated that training a large language model consumes about 1,300 megawatt-hours (MWh) of electricity, equivalent to the annual electricity consumption of 130 US homes.
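These figures become easier to compare when reduced to common units. A quick sanity check of the arithmetic (ours, using the numbers cited above):

```python
# Putting the reported figures side by side (our arithmetic).
text_kwh_per_1000 = 0.047    # text generation, per 1,000 inferences
image_kwh_per_1000 = 2.907   # image generation, per 1,000 inferences
phone_charge_kwh = 0.012     # one full smartphone charge

print(image_kwh_per_1000 / text_kwh_per_1000)        # images ~62x text
print(image_kwh_per_1000 / 1000 / phone_charge_kwh)  # ~0.24 charges/image

# 1,300 MWh across 130 homes implies 10 MWh (10,000 kWh) per home per
# year, close to the average annual electricity use of a US household.
print(1_300 * 1_000 / 130)   # -> 10,000.0 kWh per home
```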
The International Energy Agency also noted in a report earlier this year that demand from AI and crypto will considerably increase data centers’ electricity usage in the near future, from 460 terawatt-hours in 2022 to between 620 and 1,050 TWh in 2026.
This has caught the attention of regulators, who are now warning about the rising cost of AI. According to Massachusetts Senator Edward Markey (D):
“The development of the next generation of A.I. tools cannot come at the expense of the health of our planet.”
This came after he, along with other senators and representatives, introduced a bill requiring the federal government to assess AI’s environmental footprint and develop a standardized system for reporting future impacts. In Europe, the recently passed AI Act requires providers of powerful foundation models to report their resource use, energy consumption, and other impacts.
Amidst all this, the International Organization for Standardization will issue criteria later this year for measuring material use, water consumption, and energy efficiency for “sustainable A.I.”
Making AI More Efficient
To be viable at a vast scale, AI models must become more energy efficient and capable of running on energy-constrained devices that use significantly less power than data centers.
These data centers require huge amounts of power to keep their computers running, and that power is still predominantly sourced from fossil fuels, causing significant CO2e emissions. To tackle this, researchers and organizations have been working on making AI more efficient.
One prominent firm making significant progress on this problem is the London-based code optimization specialist TurinTech, which uses a blend of deep learning and evolutionary algorithms. Its system continuously adapts an existing model based on new information instead of regenerating it from scratch.
According to Harvey Lewis of Ernst and Young UK, evolutionary or genetic algorithms and Bayesian statistical methods can make deep learning more efficient, and specialist hardware can reduce its cost.
Another suggested method is connecting data-driven AI with other scientific or human inputs about the application’s domain. Pushkar P. Apte, director of strategic initiatives at CITRIS, and Costas J. Spanos, director of CITRIS, wrote about four ways to achieve this (a minimal sketch of the first approach follows the list):
- Synergizing AI with scientific laws.
- Augmenting data with expert human insights.
- Employing tools to explain how AI makes decisions.
- Using other models to predict behavior.
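The first of these approaches is often implemented as a physics-informed loss: the model is penalized not only for mismatching the data but also for violating a known scientific law, which lets it learn from fewer samples and less compute. A minimal sketch of the idea (our illustration, not the authors’ code), using constant gravitational acceleration as the embedded law:

```python
import numpy as np

def physics_informed_loss(t, y_pred, y_true, weight=0.1, g=9.81):
    """Blend ordinary data error with a penalty for violating physics."""
    data_loss = np.mean((y_pred - y_true) ** 2)
    # For free fall, the second derivative of height should equal -g;
    # a second finite difference approximates that acceleration.
    dt = t[1] - t[0]
    accel = np.diff(y_pred, n=2) / dt ** 2
    physics_loss = np.mean((accel + g) ** 2)
    return data_loss + weight * physics_loss
```

Because the physics term supplies information the model would otherwise have to learn from data, it can reduce both dataset size and training compute.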
Most recently, startup EnCharge made an AI breakthrough that could dramatically cut the energy these models consume when performing predictions. Using its DARPA funding, the company reduced memory traffic by doing some of the work in analog in-memory circuitry, which can perform matrix multiply-accumulate operations in parallel at low energy, instead of relying on traditional transistor logic.
“That’s how you solve the data movement problem.”
– Naveen Verma, CEO of EnCharge AI and a professor at the Department of Electrical Engineering at Princeton
He added that rather than communicating individual bits, the chip communicates a reduced result: the accumulated sum of many parallel multiplications.
EnCharge AI has achieved 150 trillion operations per second per watt. However, analog computing is extremely difficult to get right, and previous attempts haven’t been fruitful.
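Conceptually, the operation these chips move into analog circuitry is the multiply-accumulate at the heart of all matrix math. The sketch below shows, in ordinary NumPy, what gets computed; the point of the analog approach is that the multiplies happen in place in memory and only the accumulated sum is moved:

```python
import numpy as np

# The multiply-accumulate that analog in-memory designs target.
weights = np.random.randn(1024)      # stored in the memory array itself
activations = np.random.randn(1024)  # applied across the array at once

# A digital chip shuttles every operand between memory and compute units;
# an analog crossbar computes all 1,024 products in parallel and reads
# out only this one accumulated value.
accumulated = np.dot(weights, activations)
```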
Meanwhile, research last year by Raghavendra Selvan, a tenure-track assistant professor at the University of Copenhagen’s (UCPH) Department of Computer Science, explored different ways to lower the carbon footprint of ML. At the micro level, algorithms can be made faster and more efficient by reducing the number of bits used in computations and by eliminating redundant computations.
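Reducing the number of bits is typically done through quantization. A minimal sketch, assuming symmetric 8-bit quantization (our example, not Selvan’s code), shows how 32-bit float weights map to 8-bit integers with only a small rounding error:

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8, cutting memory (and energy) per value."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # rounding error stays small
```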
He also suggested assessing whether all the stored data is actually needed. At the macro level, since many computations are not time-critical, looking at when and where they are run means training can be scheduled for non-peak hours, reducing both the cost of training sessions and their carbon footprint.
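This macro-level idea amounts to carbon-aware scheduling: given a forecast of the grid’s hourly carbon intensity, a non-urgent training job is simply shifted into the cleanest window. A toy sketch (ours; production systems would pull live grid data):

```python
# Toy carbon-aware scheduler: find the contiguous window with the lowest
# total carbon intensity (gCO2e/kWh) for a job of a given length.
def best_window(forecast, job_hours):
    best_start, best_cost = 0, float("inf")
    for start in range(len(forecast) - job_hours + 1):
        cost = sum(forecast[start:start + job_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

forecast = [480, 450, 300, 210, 190, 230, 350, 500]  # hypothetical hours
print(best_window(forecast, job_hours=3))  # -> 3, the overnight dip
```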
Using AI to Make AI Better
Now, Selvan has created a benchmark for designing AI models that consume far less energy without affecting their performance. Doing so, however, requires treating energy consumption and carbon footprint as standard criteria when designing and training these models.
For this, the researchers studied 429,000 models of one AI subtype: convolutional neural networks. These networks, used for language translation, face recognition, object detection, and medical image analysis, are collectively estimated to require as much as 263,000 kWh of energy simply to train.
To draw a parallel, 263,000 kWh is about as much energy as an average Danish citizen consumes over more than four decades, and a single computer would take about a century to complete all of this training.
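That comparison checks out against a ballpark per-capita figure. Assuming roughly 6,000 kWh of electricity per Danish resident per year (our assumption, in line with commonly cited estimates):

```python
# Sanity check of the "four decades" comparison (our arithmetic).
training_kwh = 263_000
kwh_per_dane_per_year = 6_000  # assumed ballpark per-capita figure
print(training_kwh / kwh_per_dane_per_year)  # -> ~44 years
```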
This colossal energy usage has the industry working to make AI climate-friendly; however, energy-efficient AI models have yet to become a reality. According to Selvan, who’s looking into possibilities for reducing the carbon footprint of AI:
“Today, developers are narrowly focused on building AI models that are effective in terms of the accuracy of their results.”
He compared this to judging a car as good simply because it gets you to your destination quickly, without considering its fuel usage. He further added:
“As a result, AI models are often inefficient in terms of energy consumption.”
His new study, done with CS student Pedram Bakhtiarifard, aims to change this by demonstrating that a great deal of CO2e emissions can be avoided while keeping the precision of an AI model intact.
To achieve this, the UCPH researchers noted, a model must be energy-efficient right from the beginning. That means considering climate costs both when designing a model and during the energy-intensive process of training it. Selvan said that this way, the carbon footprint can be reduced in “each phase of the model’s ‘life cycle,’” which includes both the model’s training and deployment.
So, the researchers calculated the energy it takes to train hundreds of thousands of these AI models. Interestingly, the UCPH researchers didn’t actually train the models; they estimated the figures using another AI model, saving about 99% of the energy the exercise would otherwise have consumed.
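Standing in a cheap predictive model for expensive real measurements is a standard surrogate-modeling trick. A minimal sketch of the idea (our illustration with synthetic data, not the study’s actual method): measure the training energy of a small sample of architectures, fit a regressor on their features, and predict the rest:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical architecture features, e.g. depth, width, FLOPs (scaled).
features = rng.uniform(size=(1000, 3))
true_kwh = 50 * features @ np.array([1.0, 2.0, 4.0]) + rng.normal(0, 1, 1000)

measured = rng.choice(1000, size=20, replace=False)  # only 20 real runs
surrogate = LinearRegression().fit(features[measured], true_kwh[measured])

estimated_kwh = surrogate.predict(features)  # the other 980 cost ~nothing
```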
Now, based on their calculations, the team has presented a benchmark collection of AI models that perform at about the same level but use less energy to complete a task.
According to the study, adjusting models or switching to other types of models can save as much as 80% of the energy used during the training and application stages, while performance drops by a mere 1% or even less. And these numbers, per the researchers, are actually conservative.
“Consider our results as a recipe book for the AI professionals. The recipes don’t just describe the performance of different algorithms, but how energy efficient they are.”
– Bakhtiarifard
He further stated that by simply exchanging one ingredient in a model’s design for another, practitioners “can often achieve the same result.” This means they don’t need to train each candidate model first; they can choose one based on both performance and energy consumption.
Several models are typically trained before the most suitable one for a task is found, making AI development “extremely energy-intensive.” Bakhtiarifard therefore said “it would be more climate-friendly to choose the right model from the outset,” and, on top of that, one that doesn’t consume significant power during the training phase.
In areas like self-driving cars and medicine, model precision is critical for safety, so performance can’t be compromised there. But this, the researchers noted, shouldn’t dissuade us from pursuing high energy efficiency in other domains.
The study, according to them, shows that a better trade-off can be found when energy efficiency becomes a standard criterion in AI model development, as it already is in many other sectors. According to Selvan:
“AI has amazing potential. But if we are to ensure sustainable and responsible AI development, we need a more holistic approach that not only has model performance in mind, but also climate impact.”
The benchmark, named EC-NAS, is open-source and can be used by other scientists and companies to advance research in neural architecture search (NAS). The study says that multi-objective optimization algorithms can strike a balance between energy usage and accuracy. “With its diverse metrics, EC-NAS invites further research into developing energy-efficient and environmentally sustainable models,” the study stated.
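The multi-objective balance the study describes can be made concrete as a Pareto front over accuracy and energy: discard any model for which some alternative is at least as good on both axes and strictly better on one. A short sketch (our illustration of the idea, not EC-NAS code):

```python
def pareto_front(models):
    """models: list of (name, accuracy, energy_kwh) tuples."""
    front = []
    for name, acc, kwh in models:
        dominated = any(a >= acc and e <= kwh and (a > acc or e < kwh)
                        for _, a, e in models)
        if not dominated:
            front.append((name, acc, kwh))
    return front

candidates = [("A", 0.91, 120), ("B", 0.90, 60), ("C", 0.89, 65), ("D", 0.91, 90)]
print(pareto_front(candidates))  # -> [('B', 0.90, 60), ('D', 0.91, 90)]
```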
AI-based Energy Solutions
Now, let’s take a look at companies that are applying AI to the energy sector and actively working on AI-based energy solutions:
#1. GE Renewable Energy
The company utilizes AI/ML technology developed in-house to accurately predict and streamline costs across its wind turbine logistics process. This year, GE released Proficy to help manufacturers pursue sustainability while maximizing profitability, and it deployed the AI-powered CERius to boost reporting accuracy.
Earlier this month, General Electric completed its split into three separate companies focused on aviation, energy, and healthcare, each now trading on the NYSE as a separate entity. Its energy wing, now called GE Vernova (GEV), has a market cap of $36 billion, with shares trading at $131.75. In 2023, the company secured its biggest order yet, supplying 2.4 GW of wind capacity to the SunZia project in the US. Goldman Sachs projects the company will reach an EBITDA of $4 billion by 2026.
#2. Schneider Electric
The $34.2 billion France-based company leverages AI to improve efficiency and productivity and to address the challenge of climate change. Schneider Electric’s AI usage revolves around data visualization and engineering, optimization and simulation, and dependability modeling.
The company recorded €36 billion in revenue for fiscal year 2023, an increase of 13%. Schneider Electric also reported net income of €4 billion and free cash flow of €4.6 billion.
Conclusion
AI is the tech revolution of this decade. Given that AI integration has been shown to cut costs and increase revenue for companies while delivering efficiency gains for workers, it’s clearly more than just a buzzword. AI systems are now outperforming humans on a range of tasks, though we remain better at complex cognitive work.
However, it comes with its own set of risks in terms of privacy, algorithmic biases, and, as we discussed above, the negative environmental impact. A global survey on attitudes towards AI also shows that people are nervous about this new technology, though the majority see it changing their daily lives in the coming years. The younger generation is more optimistic about AI.
As AI continues to become a big part of our lives, governments, scientists, and companies are coming together to address its risks. Regulators have already started targeting the industry, with over 30 countries having passed at least one AI-related law over the past seven years. As more technological advancement occurs, we’ll see AI become more efficient and transform our world.