Although I fully agree that such issues are better discovered sooner rather than later, I do not think this particular instance will have any bearing whatsoever on AIs used for control.
It is interesting nonetheless. The AI in question brought out the bad side of people. I have noticed that some people, when given the opportunity to behave badly without consequences, jump at the chance. I recall an instance when some children I know discovered that a younger child believed everything she was told (or at least acted like she did). They immediately started telling her the most outrageous lies just because they could. I have seen adults behave the same way with children, i.e., tell outrageous lies because they believe there are no consequences for doing so. The same phenomenon is apparent on the internet, where anonymity seems to bring out the worst in some people.
On a related note, an AI designed to talk only about the things you want to talk about is not necessarily a good thing. Our best human interactions often come from meeting and talking to people we disagree with. An AI that tries to be just like us will tend to make us worse and lead us down undesirable paths.
I see a similar problem with media that tries to pander to our whims and ends up being not only highly biased but also not very interesting or of good quality. It tends to react to the loudest voices and the most obvious reactions, which aren't necessarily the ones that should be watched for. I also see trends in the media where insiders come to believe that something is what people want to watch, so they push it more and more, even when that isn't actually what people want. The advent of YouTube and other internet media has demonstrated that people's tastes are far different from what the mainstream offline media apparently thought for decades.