IEEE SSIT Lecture: Green AI: Measuring the Carbon Intensity of AI in Cloud Instances
November 10, 2022 @ 7:00 pm - 8:00 pm
The IEEE Philadelphia SSIT Chapter and the IEEE Philadelphia Section are cooperating to organise this SSIT Lecture as a joint webinar.
IEEE and SSIT members, as well as non-members, are invited to register and participate. IEEE members should include their IEEE membership number when registering.
This joint meeting will take place online. Registered participants will be provided with the link prior to the event.
The meeting will begin at 7:00 pm (EST) on 10 November 2022.
The computational cost of deep learning research has been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. In this talk I’ll discuss the implications of this dramatic increase, from the concentration of power in a small number of labs to recent scholarship calling for better estimates of the greenhouse gas impact of AI. AI practitioners today do not have easy or reliable access to measurements of their emissions, which precludes the development of actionable tactics to reduce them. I will introduce a framework for measuring software carbon intensity using location-based, time-specific marginal emissions data per unit of energy, and present measurements of emissions from training a variety of models on Microsoft Azure’s cloud platform. Our results show that training a single model can emit as much carbon as the average US home does in a year. I will also discuss how the application an AI is trained for can play an even larger role in climate change than the emissions from training it.
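The core idea of the framework described above — weighting each interval of energy use by the grid's marginal emissions factor for that location and time — can be sketched in a few lines. This is a minimal illustration, not the speaker's actual methodology; the function name, interval data, and emissions-factor values are all hypothetical.

```python
def carbon_emissions_g(energy_kwh_per_interval, marginal_g_co2_per_kwh):
    """Estimate total emissions (grams of CO2) by summing per-interval
    energy use (kWh) weighted by the grid's time-specific marginal
    emissions factor (gCO2/kWh) for that interval."""
    if len(energy_kwh_per_interval) != len(marginal_g_co2_per_kwh):
        raise ValueError("need one emissions factor per energy interval")
    return sum(e * f for e, f in
               zip(energy_kwh_per_interval, marginal_g_co2_per_kwh))

# Hypothetical example: a training job drawing 2.0 kWh in each of three
# hour-long intervals, on a grid whose marginal carbon intensity varies
# over time (the values below are illustrative, not real grid data).
energy = [2.0, 2.0, 2.0]           # kWh consumed per interval
factors = [450.0, 380.0, 500.0]    # gCO2 per kWh at each interval
print(carbon_emissions_g(energy, factors))  # 2*(450+380+500) = 2660.0 g
```

Because the emissions factor varies by time and location, the same job can have very different carbon intensity depending on when and where it runs — which is what makes per-interval, location-specific accounting more actionable than a single average figure.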
Jesse Dodge is a Research Scientist at the Allen Institute for AI, on the AllenNLP team, working on natural language processing and machine learning. He is interested in the science of AI, and he works on reproducibility and efficiency in AI research. His research has highlighted the growing computational cost of AI systems, including the environmental impact of AI and inequality in the research community. He has worked extensively on improving transparency in AI research, including open sourcing and documenting datasets, data governance, and measuring bias in data. He has also worked on developing efficient methods, including model compression and improving the efficiency of training large language models. His PhD is from the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. His research has won awards including a Best Student Paper award at NAACL 2015 and a ten-year Test of Time award at ACL 2022, and is regularly covered by the press, including outlets such as The New York Times, Nature, MIT Tech Review, and Wired.