Unironically, this is a perfect example of why AI is being used to choose targets to murder in the Palestinian Genocide, in cases like DOGE attacking the functioning of the U.S. government, in US health insurers' claim denials, and in landlord software colluding to raise rents.
The economic function of AI is to abdicate responsibility for your actions so you can make a bit more money while hurting people, and until the public becomes crystal clear on that, we are in a wild amount of danger.
Just substitute in for Elon the vague idea of a company that will become a legal and ethical scapegoat for brutal choices by individual humans.
Which is why we need laws about human responsibility for decisions made by AI (or software in general).
I did an internship at a bank way back, and my role involved a lot of processing of spreadsheets from different departments. I automated a heckton of that with Visual Basic, which my boss was okay with, but I was dismayed to learn that I wasn't saving anyone's time except my own, because after the internship was finished, all of the automation stuff would have to be deleted. The reason was a rule (I think a company policy rather than a law) requiring that any code be in the custody of someone, for accountability purposes. "Accountability" in this case meant: "if we take unmaintained code for granted, then we may find an entire department's workflow crippled at some point in the future, with no one knowing how it's meant to work".
It’s quite a different thing than what you’re talking about, but in terms of the implementation, it doesn’t seem too far off.