While this is a reasonable take, the Tensor chips are supposedly focused on AI (which would make sense given their push into the AI space for phone tools like spam filtering, photo/video editing, assistant, etc.) and this refresh builds upon AI stuff they rolled out to previous-gen phones. I doubt any of it is so CPU-intensive that whatever AI they've created in a few years won't also run on the older phone; it just might not be as snappy.
I have a different impression about their plans for backporting new AI features, but we will see.
My point is that AI-targeted hardware can potentially drive the next phase of smartphone evolution, which is currently slowing down.
Training AI models takes a lot of development on the software side, and is computationally intense on the hardware side. Loading a shitload of data into the process, and letting the training algorithms dig down on how to value each of billions or even trillions of parameters is going to take a lot of storage space, memory, and actual computation through ASICs dedicated to that task.
Using pre-trained models, though, is a far less computationally intensive task. Once the parameters are defined on that huge training set, the model can be applied by software that simply takes the parameters already fixed during training and runs them against new, much smaller inputs.
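As a toy illustration of that training-vs-inference asymmetry (a hypothetical sketch, not the actual Tensor-chip workloads): training even a tiny linear model means hundreds of passes over the whole dataset with gradient updates, while inference with the frozen weights is a single matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))           # training inputs (whole dataset)
true_w = rng.normal(size=(8,))           # the "right answer" we hope to learn
y = X @ true_w                           # training targets

# --- Training: iterative, touches the entire dataset every epoch ---
w = np.zeros(8)
for _ in range(500):                     # 500 full passes over the data
    grad = X.T @ (X @ w - y) / len(X)    # gradient of mean squared error
    w -= 0.1 * grad                      # gradient-descent update

# --- Inference: one cheap matmul with the already-trained parameters ---
x_new = rng.normal(size=(8,))
prediction = x_new @ w                   # no gradients, no dataset needed
```

The asymmetry only grows with scale: the training loop's cost is proportional to dataset size times epochs, while the inference line touches one input once.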
So I would expect the AI/ML chips in actual phones would continue to benefit from AI development, including models developed many chip generations later.
The thing is more complicated than that. Moreover, there is a wish/need to train or fine-tune models locally. That is not comparable to the initial training of ChatGPT-like models, but it still requires some power. Just today I read that some Pixel 8 video-improvement features will not be ported to the Pixel 7 because they need Tensor G3 power.
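On-device fine-tuning usually means keeping the large pre-trained part frozen and updating only a small head, so it costs far less than full training but still needs real compute. A hypothetical sketch (toy model and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
frozen = rng.normal(size=(64, 8))        # stands in for big pre-trained weights

def features(x):
    # Frozen feature extractor: never updated during fine-tuning
    return np.tanh(x @ frozen)

X = rng.normal(size=(200, 64))           # small on-device dataset
true_head = rng.normal(size=(8,))
y = features(X) @ true_head              # targets for the personalization task

# Fine-tuning: only the 8-parameter head is trained, yet it still
# takes an iterative loop over the local data.
head = np.zeros(8)
F = features(X)                          # frozen features, computed once
for _ in range(2000):
    head -= 0.05 * F.T @ (F @ head - y) / len(X)
```

The frozen/trainable split is why fine-tuning fits on a phone at all, and also why it still wants dedicated ML silicon: the loop is small but not free.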