Affective computing, the development of systems that recognize, interpret, and simulate human emotions, has advanced rapidly through deep learning and multimodal fusion techniques. Yet this technological progress has significantly outpaced foundational understanding of what emotions are, how they function socially, and what it means for machines to simulate them. This position paper argues that current affective systems are critically flawed on three counts: they rely on correlational patterns rather than established psychological theories of affect, they are trained on biased data that systematically fails to generalize across demographic groups, and they operate within inadequate ethical and regulatory frameworks that cannot protect emotional privacy or prevent harm. Drawing on empirical work in affective science, computational fairness research, and philosophical accounts of emotional expression, we argue for a ``theory-first'' approach that integrates psychological models, mandates rigorous fairness auditing across intersectional demographics, treats affective data as a protected category requiring heightened safeguards, and recognizes fundamental limits on what emotion recognition systems can achieve. Without such grounding, affective computing risks systematically encoding bias, enabling emotional manipulation, and eroding the authentic human connections that define meaningful social experience.