Breakthroughs are so interesting, and they're the reason predicting the future of tech is so hard. Text embeddings and "Internet scale" training are likely the things that enabled this AI boom and the amazing initial results.
I think many people see AI (and other tech) moving linearly from the current point forward, but any software developer knows this is rarely the case. And no one can predict the next breakthrough.
The hype and confusion around ML/LLMs/AGI doesn't help. And because LLMs seem intelligent on the surface, people misunderstand their capabilities (much like politicians). They certainly have fantastic uses just as they are now, but a lot of people are overly optimistic (or pessimistic, depending on your point of view) about our new "AI overlords".
Personally, I find LLMs absolutely amazing at supporting me in my professional writing. I don't let them do my work, but they help me play around to find better ways to express things, like having a sparring writing partner.