While the majority of GenAI investment/capex is focused on new datacenters, GPUs, and hardware, is it possible that the long-term future of LLM inference and training is actually on local hardware we already have? Two trends worth tracking:
1. Better local stacks. Our local desktops, laptops, and mobile phones hide a surprising amount of compute capacity, which is...