  1. Self-training improves few-shot learning in legal artificial intelligence tasks. Yulin Zhou, Yongbin Qin, Ruizhang Huang, Yanping Chen, Chuan Lin & Yuan Zhou - forthcoming - Artificial Intelligence and Law: 1-17.
    Labeling costs in legal artificial intelligence tasks are expensive, so training a robust model at low cost is a challenge. In this paper, we propose LAIAugment, an approach that aims to enhance few-shot learning capability in legal artificial intelligence tasks. Specifically, we first use self-training to label unlabelled data and thereby enhance the feature learning capability of the model. Moreover, we also search for datasets that are similar to the training (...)
  2. Bringing order into the realm of Transformer-based language models for artificial intelligence and law. Candida M. Greco & Andrea Tagarelli - 2024 - Artificial Intelligence and Law 32 (4): 863-1010.
    Transformer-based language models (TLMs) have been widely recognized as a cutting-edge technology for developing successful deep-learning-based solutions to problems and applications that require natural language processing and understanding. As in other textual domains, TLMs have pushed the state of the art of AI approaches for many tasks of interest in the legal domain. Despite the first Transformer model being proposed about six years ago, this technology has progressed rapidly and at an unprecedented rate, whereby BERT and (...)