
> Let's assume that AI is human level (it's not, but humour me)

OK

> Now something goes wrong. A customer isn't getting what they ordered. What do you do about it?

What do you do when something goes wrong with a human?

Do that.

End of story.



This is my point: you can't. You can't hold an AI accountable the way you can hold a human accountable, whether to a contract or to the law. The only ways to hold an AI accountable involve escalating to a human, at which point we're back where we are now without AI, requiring essentially the same software and business processes.


If it's human level, you can.

You retrain it, you replace it with a more suitable worker, whatever.

You've answered your own dilemma with the premise.


Also, AI (as promised) will deliver solutions so customized that a human will need a lot of context even to diagnose problems in them. This in itself makes it dependent on humans. You also have to factor in knowledge redundancy and the non-availability of your personnel, so you will always need to account for more people than AI promises.


You can reason with a human. You can’t with an AI.

And if you got rid of all your human tools because AI, how do you put a human back in the loop?


We assumed it was human level in this thought experiment. I don’t see why you wouldn’t be able to reason with a human level AGI


Are there any consequences for an AGI?




