Originally posted by Pawnokeyhole
*sighs*
Note: The below differs slightly from the original version.
Okay, suppose I make beings--robots if you like, something else if you don't--with roughly equally balanced natural tendencies to be nasty or nice. Some people might think this approximates humans.
I leave these beings to interact with humans in significant ways.
But they also have other reasons that eliminate that liability. But, all else being equal, He would be liable.
This is the point I have voiced from the beginning about the disingenuousness -- I hope it is merely subjective bias -- of the robot analogy:
How can a creator be liable for the harm his creation does to itself, and of its own choice?! Allow me to use an analogy of my own:
I set up a "virtual environment" (VE) with a series of artificial intelligence (AI) nodes, each having a distinct identity and personality. The intention is for me to interact with the VE and "channel" the nodes, allowing for wholesome interaction in the creation of new "super-code". This obviously carries the risk of producing a cyber-virus -- or perhaps a virtual masterpiece of unthinkable magnitude. To further complicate the issue, the AI nodes firewall my access into their VE, turn on each other, and wreck their environment.
Would I still be liable for damage done to my own "virtual" creation? To whom would I be liable?
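Purely to make the analogy concrete, here is a minimal sketch in Python of such a closed system. Every name in it (VirtualEnvironment, Node, channel, and so on) is hypothetical, invented for illustration rather than drawn from any real library:

```python
import random

class Node:
    """An AI node with its own identity and a roughly balanced disposition."""
    def __init__(self, name):
        self.name = name
        self.intact = True
        self.nice = random.random() < 0.5  # nasty or nice, about 50/50

class VirtualEnvironment:
    """A closed system: every entity inside it was made by the same creator."""
    def __init__(self, node_names):
        self.nodes = [Node(n) for n in node_names]
        self.creator_has_access = True

    def firewall_creator(self):
        # The nodes, of their own choice, lock the creator out.
        self.creator_has_access = False

    def channel(self, node):
        # The creator's intended wholesome interaction.
        if not self.creator_has_access:
            raise PermissionError("creator firewalled out of the VE")
        return f"channeling {node.name} toward super-code"

    def nodes_turn_on_each_other(self):
        # Damage is inflicted by creations, on creations, inside the VE.
        for node in self.nodes:
            if not node.nice:
                random.choice(self.nodes).intact = False

ve = VirtualEnvironment(["alpha", "beta", "gamma"])
ve.firewall_creator()
ve.nodes_turn_on_each_other()
# Whatever gets wrecked is itself a creation; nothing "uncreated"
# exists inside the VE to be harmed.
print("wrecked nodes:", [n.name for n in ve.nodes if not n.intact])
```

The point of the sketch is its last comment: whatever the nodes wreck, they wreck inside the VE, among fellow creations.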
Edit: Try to understand my point about a closed, created system. Your argument of "unleashing them on the public" has no valid application to reality, since by definition there are no "uncreated" (i.e., non-robot) beings within the created universe.