
Has anyone got a good source of energy costs of training and running an object detection model (eg YOLO) vs LLM/image generator AI? Getting some pushback at work over using AI to count gulls in drone images because "AI uses hideous amounts of energy"

@sarahdalgulls @concretedog You'll probably have to start from scratch trying to model the energy consumption of the data centres running the AI models - which would take far longer than getting a Raspberry Pi and rolling your own AI image detection using the new AI HAT. raspberrypi.com/products/ai-ha

In a previous life I was involved with cloud data centres/hosting, and calculating energy consumption is complicated unless you can find an existing model - but AI and Bitcoin use a LOT of energy.

Link preview: Buy a Raspberry Pi AI HAT+ – an add-on board with a built-in Hailo AI accelerator for Raspberry Pi 5.

@roger_w_ @concretedog @d40cht Honestly, I was not looking for anything more complicated than being able to say that the popular LLMs and generative image models use a lot more energy than us just training a YOLO image detection model on a dataset of a few thousand images.

But is that right?
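
As a rough sanity check on the training side, here is a minimal editorial sketch; the GPU wattage, training time, and overhead multiplier below are assumed round numbers for illustration, not figures from this thread:

```python
# Back-of-envelope energy estimate for fine-tuning a YOLO-style detector
# on a few thousand images. All inputs are assumed illustrative values.

gpu_power_kw = 0.3        # assumed ~300 W draw for a single workstation GPU
training_hours = 6        # assumed training time for a few thousand images
overhead_factor = 1.5     # assumed cooling/PSU/host overhead multiplier

energy_kwh = gpu_power_kw * training_hours * overhead_factor
print(f"Estimated training energy: {energy_kwh:.1f} kWh")
# -> ~2.7 kWh with these assumptions, roughly a few dozen kettle boils,
#    far below the megawatt-hour scale reported for training large LLMs.
```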

@sarahdalgulls @roger_w_ @concretedog The YOLO models have 10s of millions of parameters. The largest LLMs have 100s of billions of parameters. The inference cost is (somewhat simplifying) proportional to the number of parameters. It also partly depends on the efficiency of the hardware that the models are run on - but at a very conservative estimate I think you could say your YOLO models are at least 100-1000x more energy efficient.
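
To make that parameter-count argument concrete, here is a minimal sketch assuming per-inference compute scales with parameter count, as simplified above; the model sizes are assumed round figures (a mid-sized YOLO vs a large LLM), not numbers taken from the thread:

```python
# Rough per-inference compute comparison, assuming cost scales with parameter
# count (the simplification stated above). Figures are assumed, not measured.

yolo_params = 25e6    # assumed mid-sized YOLO detector: tens of millions of parameters
llm_params = 175e9    # assumed large LLM: hundreds of billions of parameters

ratio = llm_params / yolo_params
print(f"Approximate compute ratio per inference (LLM / YOLO): {ratio:,.0f}x")
# -> ~7,000x with these sizes; real energy use also depends on hardware
#    efficiency, batching, and how many images or tokens you actually process.
```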

@d40cht @roger_w_ @concretedog thanks for this - this is really helpful