Powered by generative artificial intelligence (AI), AI companions are designed to establish long-term relationships and friendships with human users, enabling human–AI social-oriented communication. However, this raises questions about the underlying reasons for, and the contexts in which, such communication occurs. While social exchange theory explains social-oriented communication between humans in terms of the exchange of benefits and costs, these exchanges may differ when the counterpart is a non-human entity such as an AI companion. Using a qualitative approach based on semi-structured interviews, this study identifies the benefits and costs of human–AI social-oriented communication and the contextual patterns in which it takes place. The results reveal unique benefits and costs, challenge some assumptions of social exchange theory, and uncover distinct contextual patterns. By contextualizing social exchange theory, the findings contribute to the literature on AI companions and human–AI communication and have practical implications, particularly marketing implications for companies that want to offer AI companion services.