Has anyone got a good source of energy costs of training and running an object detection model (eg YOLO) vs LLM/image generator AI? Getting some pushback at work over using AI to count gulls in drone images because "AI uses hideous amounts of energy"
@sarahdalgulls @concretedog You'll probably have to start from scratch trying to model the energy consumption of the data centres running the AI models - which would take far longer than getting a Raspberry Pi and rolling your own AI image detection using the new AI hat. https://www.raspberrypi.com/products/ai-hat/
In a previous life I was involved with cloud data centres/hosting, and calculating energy consumption is complicated unless you can find an existing model - but AI and Bitcoin use a LOT of energy.
@roger_w_ @concretedog @d40cht honestly, was not looking for anything more complicated than being able to say that the popular LLMs and generative image creation use a lot more energy than us just training a YOLO image detection model on a dataset of a few thousand images.
But is that right?
@roger_w_ @concretedog @d40cht for the counting I have been using a QGIS plugin with an ONNX model I created in YOLO https://plugins.qgis.org/plugins/deepness/
@sarahdalgulls @roger_w_ @concretedog The YOLO models have 10s of millions of parameters. The largest LLMs have 100s of billions of parameters. The inference cost is (somewhat simplifying) proportional to the number of parameters. It also partly depends on the efficiency of the hardware that the models are run on - but at a very conservative estimate I think you could say your YOLO models are at least 100-1000x more energy efficient.
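@d40cht's parameter-count argument can be put in rough numbers. A minimal sketch; the specific parameter counts below are illustrative assumptions (a mid-size YOLO variant and a large LLM), not measurements, and it uses the thread's simplification that per-inference cost scales with parameter count:

```python
# Back-of-envelope per-inference cost ratio, assuming cost is roughly
# proportional to parameter count (a simplification, as noted in the thread).

yolo_params = 25e6    # assumption: a mid-size YOLO model, ~25M parameters
llm_params = 200e9    # assumption: a large LLM, ~200B parameters

ratio = llm_params / yolo_params
print(f"Per-inference cost ratio (LLM / YOLO): ~{ratio:,.0f}x")  # ~8,000x
```

Hardware efficiency, batching, and quantisation all shift the real numbers, which is why the estimate above is hedged down to "at least 100-1000x".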
@d40cht @roger_w_ @concretedog thanks for this - this is really helpful