Originally posted by googlefudge:
> Never attribute to altruism that which can be adequately explained by incompetence?
> http://arstechnica.co.uk/science/2016/01/humans-arent-as-cooperative-as-we-thought-but-make-up-for-it-via-stupidity/

The article falsely says that people cannot cooperate with or feel altruistic towards a computer.
Originally posted by googlefudge:
> Never attribute to altruism that which can be adequately explained by incompetence?
> http://arstechnica.co.uk/science/2016/01/humans-arent-as-cooperative-as-we-thought-but-make-up-for-it-via-stupidity/

The difficulty with these games is that what they actually test (assuming the participants understand the system) is trust. In their game one puts money into a pot, which is then increased and shared between all the players. The risk is that the other players put in no money and the return is less than what one put in. Whether one is altruistic or selfish, the best scenario is everyone putting all their money in at every cycle, so the question is whether one trusts the other participants to know that and cooperate or not. What a player needs to do, then, is find a way of signalling their willingness to cooperate without exposing themselves to too much risk.

The problem with their game is probably that there is no penalty for not taking a risk. One needs a tax, so that if everyone fails to cooperate then all the players lose money.
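To make the payoff structure concrete, here is a minimal simulation sketch in Python. The four players, the endowment of 10, the multiplier of 1.6, the flat tax of 2, and the helper name play_round are illustrative assumptions, not parameters or code from the study.

# Minimal public goods game sketch: each player keeps whatever they do not
# contribute, the pot is multiplied and split equally, and an optional flat
# tax is charged every round regardless of what anyone contributed.
def play_round(contributions, endowment=10.0, multiplier=1.6, tax=0.0):
    """Return each player's payoff for one round of a public goods game."""
    n = len(contributions)
    pot = sum(contributions) * multiplier   # pot grows before it is shared
    share = pot / n                         # split equally among all players
    return [endowment - c + share - tax for c in contributions]

if __name__ == "__main__":
    # Everyone contributes fully: the best collective outcome.
    print("all in:      ", play_round([10, 10, 10, 10]))
    # One free rider among contributors: the free rider does best individually,
    # which is exactly the trust problem described above.
    print("free rider:  ", play_round([0, 10, 10, 10]))
    # Nobody contributes, no tax: everyone simply keeps the endowment.
    print("all out:     ", play_round([0, 0, 0, 0]))
    # Nobody contributes, flat tax: universal non-cooperation now loses money.
    print("all out, tax:", play_round([0, 0, 0, 0], tax=2.0))

With these illustrative numbers, full contribution pays every player 16, a lone free rider collects 22 while the remaining contributors get 12, and universal non-cooperation returns the plain endowment of 10 unless the tax turns it into a loss of 2 per round.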
I'm wary of drawing conclusions about human nature from these games: how one sets up the game has too big an influence on the strategy the players adopt. What is more, it's a game and the players will treat it as such; in "real life" they might behave quite differently. So I think it is quite easy for the design to beg the question. For example, by observing chess players on this site one could come to the conclusion that humans are warlike and ruthless, but often quite stupid and lose all their pieces...