Beyond Toxic Positivity: Interpersonal Affect Regulation in LLM-Based Dialogue Agents using Discourse Politeness Theory

Rina Sakagami, Emmanuel Ayedoun, Masataka Tokumaru
Kansai University


Abstract

In affective science, effective interpersonal emotion regulation requires behavioral inhibition when responding to severe emotional disclosures, temporarily suppressing intimacy to validate distress. However, while current Large Language Models (LLMs) excel at immediate sentiment recognition, long-term companion agents built upon them often adjust their conversational style based primarily on accumulated interaction time (psychological distance). This architectural overreliance on chronological intimacy causes systems to ignore the fluctuating emotional weight of specific topics, producing "toxic positivity": exaggerated optimism that invalidates negative affect and damages psychological safety. We propose a computational framework grounded in Discourse Politeness Theory that dynamically regulates interpersonal affect by selecting a conversational strategy from two variables: Psychological Distance and Affective Weight of the Topic. When users disclose heavy emotional burdens, the system executes behavioral inhibition by suppressing Positive Politeness Strategies (intimacy, cheerfulness) and engaging Negative Politeness Strategies (hedging, validation). In an 8-week longitudinal simulation evaluated by 18 third-party observers, our affective-regulation framework showed consistent advantages over a distance-only baseline: it was rated as more natural, more empathetic, and more conducive to psychological safety. All empathy-seeking participants preferred the proposed model. These exploratory findings suggest that computational interpersonal emotion regulation requires context-aware behavioral inhibition, not uniform friendliness.
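The two-variable regulation described above can be illustrated with a minimal sketch. All names, thresholds, and the specific decision rule below are hypothetical illustrations, not the paper's actual formulation of Psychological Distance and Affective Weight:

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; the paper's actual
# operationalization of Affective Weight is not reproduced here.
HEAVY_TOPIC_THRESHOLD = 0.7

@dataclass
class Strategy:
    name: str
    hedging: bool       # Negative politeness: soften claims, validate distress
    cheerfulness: bool  # Positive politeness: intimacy, upbeat tone

def select_strategy(distance: float, weight: float) -> Strategy:
    """Pick a politeness strategy from Psychological Distance
    (0 = close, 1 = distant) and Affective Weight of the topic
    (0 = light, 1 = heavy)."""
    if weight >= HEAVY_TOPIC_THRESHOLD:
        # Behavioral inhibition: suppress positive politeness even for
        # close users; engage negative politeness (hedging, validation).
        return Strategy("negative_politeness", hedging=True, cheerfulness=False)
    if distance < 0.5:
        # Light topic with a familiar user: positive politeness is safe.
        return Strategy("positive_politeness", hedging=False, cheerfulness=True)
    # Distant user, light topic: stay formally polite.
    return Strategy("neutral_formal", hedging=True, cheerfulness=False)

# A close user (low distance) disclosing a heavy burden still triggers
# inhibition rather than cheerfulness:
print(select_strategy(distance=0.1, weight=0.9).name)  # negative_politeness
```

The key property the sketch captures is that topic weight overrides accumulated intimacy: a low distance score alone never licenses a cheerful response to a heavy disclosure.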