  1. Prefaces, Knowledge, and Questions. Frank Hong - 2023 - Ergo: An Open Access Journal of Philosophy 10.
    The Preface Paradox is often discussed for its implications for rational belief. Much less discussed is a variant of the Preface Paradox for knowledge. In this paper, I argue that the most plausible closure-friendly resolution to the Preface Paradox for knowledge is to say that in any given context, we do not know much. I call this view "Socraticism". I argue that Socraticism is the most plausible view on two counts: (1) this view is compatible with the claim that most of (...)
  2. Paradoxes of Infinite Aggregation. Frank Hong & Jeffrey Sanford Russell - forthcoming - Noûs.
    There are infinitely many ways the world might be, and there may well be infinitely many people in it. These facts raise moral paradoxes. We explore a conflict between two highly attractive principles: a Pareto principle that says that what is better for everyone is better overall, and a statewise dominance principle that says that what is sure to turn out better is better on balance. We refine and generalize this paradox, showing that the problem is faced by many theories (...)
  3. Moral Facts Do Not Supervene on Non-Moral Qualitative Facts. Frank Hong - 2024 - Erkenntnis: 1-11.
    It is very natural to think that if two people, x and y, are qualitatively identical and have committed qualitatively identical actions, then it cannot be the case that one has committed something wrong whereas the other did not. That is to say, if x and y differ in their moral status, then it must be because x and y are qualitatively different, and not simply because x is identical to x and not identical to y. In this fictional dialogue (...)
  4. Group Prioritarianism: Why AI Should Not Replace Humanity. Frank Hong - 2024 - Philosophical Studies: 1-19.
    If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always (...)