I listened to a podcast about a woman with no fear. She could intellectually comprehend harm, but did not feel the emotion of fear. Her identity was carefully guarded, because it was very easy to take advantage of her. If your AIs are going to be grounded in a human-like mentality, then they need those emotions to function. You could maybe cap them (do you _really_ need to be capable of experiencing more-pain-than-any-mortal-can-comprehend?), but you'd need them.

Fear and pain seem somewhat unnecessary to me, and could be replicated with programming or system notifications. It strikes me that designers might debate whether to make a machine capable of "suffering" the way a human can, and that most would probably find the idea objectionable.
Having AI intellects pursue totally abstract goals ("this course of action could harm me; it is illogical to be harmed without compensating benefit, so I will not pursue it") sounds like a good way to run afoul of AI alignment and end up with a world made of paper clips.