If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?
You should look closer at who is making the claims that “AI” is an extinction threat to humanity. It isn’t the researchers working on ethics and safety (not to be confused with “AI safety” as part of “Alignment”); it’s the people building the models and the investors. Why would they build and invest in things that would kill us?
AI doomers try to: 1. make “AI”/LLMs appear far more powerful than they actually are, and 2. distract from the actual threats and issues with LLMs/“AI”, which are societal and ethical, about copyright, and about the fact that these are not trustworthy systems at all. Admitting to those makes it a really hard sell.
Well, I don’t know what their USP is in a world where OneDrive/Google Drive/iCloud exist. And their future plan is a focus on AI, so yeah, goodbye Dropbox is my guess.