Ah this is a different risk than I thought was being implied.
This is saying that if a doctor relies on AI to assess results, they lose the skill of finding those results on their own.
Honestly this could go either way. Maybe it’s bad, but if machine learning can outperform doctors, then it could just be a “you won’t be carrying a calculator around with you your whole life” type situation.
ETA: there’s a book, Noise: A Flaw in Human Judgment, that details how, whenever you rely on human judgement, you get a wide range of results for the same case, and generally this is bad. If machine learning is more consistent, the standard of care is likely to rise on average.
It already is happening:
https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
Machine learning is not the same thing as an LLM.
It’s not, but the linked paper I responded to doesn’t mention LLMs?
The thread is about ChatGPT, which is an LLM bot, hence the confusion?