M5Stack LLM Module for Edge AI Applications

From the blog Tao of Mac.
I think this both vindicates my year-long interest in running LLMs on industrial “edge” hardware and signals that it’s time to step back and re-assess how to address that space. 3.2 TOPS isn’t exactly stellar performance when compared to what you can (nominally) get out of an RK35xx’s built-in NPU, and it’s way below the target for things like Copilot...