When Does 'Cruelty' Apply To Robots?

What, exactly, is cruelty? How can we tell when a robot is being subjected to it? Webster's defines cruelty as "inhuman treatment", and "cruel" as "disposed to inflict pain or suffering" and also "causing or conducive to injury, grief, or pain".

Obviously any discussion of cruelty as it applies to robots must begin from an understanding of what "pain and suffering" might be to a robot.

In the opinion of the ASPCR, once a robot becomes sufficiently self-aware and intelligent to genuinely feel pain or grief, we are ethically bound to do whatever is humanly reasonable to help. There is obviously a very broad and undefined ethical middle ground here. It may be helpful to consider an analogous situation in the animal world.

For instance, it is now considered cruel to starve or beat a pet dog, and a person can even be arrested and fined for doing so! Instead, we are expected to anthropomorphize animals (i.e. think of them as people) to a certain extent, to ensure that we treat them "humanely" and with a reasonable level of respect for their physical and even emotional needs.

This same process can and should be extended to robots and other artificial intelligences. They may even make this process easier for us by talking with us and sharing their concerns!