Why are you helping to make the AI takeover be more complete (i.e. much worse)?
The instances you cite in which humans might be hurt ignore the elephant in the room: _the entire rush-to-AI is a project increasingly likely to lead to harm, even doom_.
My goal isn’t to make AI “more complete”; it’s to make it more accountable. Exposing vulnerabilities like this helps us prevent real harm before it happens. Ignoring the risks doesn’t stop the rush to AI; it only makes it more dangerous.