  1. Captioning Deep Learning Based Encoder-Decoder through Long Short-Term Memory (LSTM). Grimsby Chelsea - forthcoming - International Journal of Scientific Innovation.
    This work demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words that forms a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions are shown to demonstrate model generality over (...)
  2. Deep Learning Based Video Captioning through Encoder-Decoder Based Long Short-Term Memory (LSTM). Grimsby Chelsea - forthcoming - International Journal of Advanced Computer Science and Applications: 1-6.
  3. Deep Learning Based Video Captioning through Encoder-Decoder Based Long Short-Term Memory (LSTM). Grimsby Chelsea - forthcoming - International Journal of Advanced Computer Science and Applications.
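All of the records above evaluate caption correctness with 2-gram BLEU scores. As background, the following is a minimal, self-contained sketch of a cumulative 2-gram BLEU computation (clipped n-gram precision combined by geometric mean, with the standard brevity penalty). The example sentences are hypothetical, and this is not the authors' implementation; production toolkits such as NLTK additionally support smoothing and multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(reference, candidate):
    """Cumulative 2-gram BLEU for one reference/candidate pair.

    Geometric mean of clipped 1-gram and 2-gram precisions,
    scaled by the standard brevity penalty for short candidates.
    """
    precisions = []
    for n in (1, 2):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # Clipped counts: a candidate n-gram only counts as many
        # times as it appears in the reference.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: no penalty if the candidate is at least as
    # long as the reference, exponential decay otherwise.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

# Hypothetical reference caption and model output.
ref = "a man is playing a guitar".split()
hyp = "a man plays a guitar".split()
score = bleu2(ref, hyp)
```

A perfect match scores 1.0, a candidate sharing no unigrams or bigrams with the reference scores 0.0, and partial overlaps fall in between, which is why BLEU variants are a common automatic proxy for caption quality across dataset splits.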