@Duchess64
AI is obviously potentially very dangerous and we should be taking the same steps to control or contain it that we do with nuclear weapons. (Actually, this is true of most modern technology, with the exception of a number of medical advances; it still astonishes me that the human response to the destruction of the Second World War wasn't near-universal Luddism!)
However, for the moment, let me defend humanity's honour by offering this wonderful article about the ineptitude of Google Translate - solid evidence of the inability of machines to think in any way comparable to the way human beings think:
https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/
@teinosuke said:
@Duchess64
AI is obviously potentially very dangerous and we should be taking the same steps to control or contain it
While I was joking earlier, this is a pretty incredible achievement. I wonder if humans will ever be able to create AI that can truly be considered sentient.
@vivify said:
While I was joking earlier, this is a pretty incredible achievement. I wonder if humans will ever be able to create AI that can truly be considered sentient.
The odd thing is, we might do, but I don't think we could ever know for certain. I only "know" that other people are sentient on the basis that I project my own sentience onto them. If a machine was sufficiently sophisticated, how could we tell the difference between its being truly sentient and its merely imitating sentience?
The post that was quoted here has been removed.
Artificial Intelligence will never top the human capacity for creativity. It will remain forever mired in its logical foundations.
It will never feel. It will never comprehend empathy.
It may eventually mimic our organic abilities, but it will fall short when it tries to solve problems from an abstract perspective.