Abstract
The AI ethics of statistical fairness is an error: the approach should be abandoned and the accumulated academic work deleted. The argument proceeds by identifying four recurring mistakes within statistical fairness. The first conflates fairness with equality, which confines thinking to similars being treated similarly. The second and third derive from a perspectival ethical view that functions by negating others and their viewpoints. The fourth constrains fairness to operate within predefined social groups, instead of allowing unconstrained fairness to define group composition afterward. From the nature of these misconceptions, the larger argument follows. Because the errors are integral to how statistical fairness works, attempting to resolve the difficulties only deepens them. Consequently, the errors cannot be corrected without undermining the larger project, and statistical fairness collapses from within. While the collapse ends a failure in ethics, it also opens distinct possibilities for fairness, data, and algorithms. Briefly indicating some of these directions is a secondary aim of the paper, one that aligns with what fairness has consistently meant and done since Aristotle.