  1. Bringing order into the realm of Transformer-based language models for artificial intelligence and law. Candida M. Greco & Andrea Tagarelli - forthcoming - Artificial Intelligence and Law: 1-148.
    Transformer-based language models (TLMs) have been widely recognized as a cutting-edge technology for developing successful deep-learning-based solutions to problems and applications that require natural language processing and understanding. As in other textual domains, TLMs have pushed the state of the art of AI approaches for many tasks of interest in the legal domain. Although the first Transformer model was proposed only about six years ago, this technology has progressed at an unprecedented rate, whereby BERT and (...)
  2. A novel MRC framework for evidence extracts in judgment documents. Yulin Zhou, Lijuan Liu, Yanping Chen, Ruizhang Huang, Yongbin Qin & Chuan Lin - 2024 - Artificial Intelligence and Law 32 (1): 147-163.
    Evidence provides important proof in support of judicial trials. Automatically extracting evidence from judgment documents can be used to assess trial quality and support the "Intelligent Court". Current evidence extraction depends primarily on sequence labelling models. Despite their success, these models can assign only a single label to a token, which makes it difficult to recognize nested evidence entities in judgment documents, where a token may belong to several pieces of evidence at the same time. In this paper, we present a novel evidence extraction architecture (...)