  • Automatically classifying case texts and predicting outcomes. Kevin D. Ashley & Stefanie Brüninghaus - 2009 - Artificial Intelligence and Law 17 (2):125-165.
    Work on a computer program called SMILE + IBP (SMart Index Learner Plus Issue-Based Prediction) bridges case-based reasoning and extracting information from texts. The program addresses a technologically challenging task that is also very relevant from a legal viewpoint: to extract information from textual descriptions of the facts of decided cases and apply that information to predict the outcomes of new cases. The program attempts to automatically classify textual descriptions of the facts of legal problems in terms of Factors, a (...)
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - International Data Privacy Law 7 (2):76-99.
    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the (...)
  • Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • Wrappers for feature subset selection. Ron Kohavi & George H. John - 1997 - Artificial Intelligence 97 (1-2):273-324.
  • Data-centric and logic-based models for automated legal problem solving. L. Karl Branting - 2017 - Artificial Intelligence and Law 25 (1):5-27.
    Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by (...)
  • A Survey of Methods for Explaining Black Box Models. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti & Dino Pedreschi - 2019 - ACM Computing Surveys 51 (5):1-42.