Multimodal Analysis of User-Generated Multimedia Content

This book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn how multimodal analysis of UGC can augment semantics and sentics understanding, and it helps in addressing several multimedia analytics problems.

Main Author: Shah, Rajiv
Other Authors: Zimmermann, Roger; SpringerLink (Online service)
Format: eBook
Language: English
Published: Cham : Springer International Publishing : Imprint: Springer, 2017.
Physical Description: 1 online resource (xxii, 263 pages, 63 illustrations, 42 illustrations in color).
Series: Socio-affective computing; 6.
Table of Contents:
  • Dedication; Foreword; Preface; Acknowledgements; Contents; About the Authors; Abbreviations; Chapter 1: Introduction; 1.1 Background and Motivation; 1.2 Overview; 1.2.1 Event Understanding; 1.2.2 Tag Recommendation and Ranking; 1.2.3 Soundtrack Recommendation for UGVs; 1.2.4 Automatic Lecture Video Segmentation; 1.2.5 Adaptive News Video Uploading; 1.3 Contributions; 1.3.1 Event Understanding; 1.3.2 Tag Recommendation and Ranking; 1.3.3 Soundtrack Recommendation for UGVs; 1.3.4 Automatic Lecture Video Segmentation; 1.3.5 Adaptive News Video Uploading; 1.4 Knowledge Bases and APIs.
  • 1.4.1 FourSquare; 1.4.2 Semantics Parser; 1.4.3 SenticNet; 1.4.4 WordNet; 1.4.5 Stanford POS Tagger; 1.4.6 Wikipedia; 1.5 Roadmap; References; Chapter 2: Literature Review; 2.1 Event Understanding; 2.2 Tag Recommendation and Ranking; 2.3 Soundtrack Recommendation for UGVs; 2.4 Lecture Video Segmentation; 2.5 Adaptive News Video Uploading; References; Chapter 3: Event Understanding; 3.1 Introduction; 3.2 System Overview; 3.2.1 EventBuilder; 3.2.2 EventSensor; 3.3 Evaluation; 3.3.1 EventBuilder; 3.3.2 EventSensor; 3.4 Summary; References; Chapter 4: Tag Recommendation and Ranking.
  • 4.1 Introduction; 4.1.1 Tag Recommendation; 4.1.2 Tag Ranking; 4.2 System Overview; 4.2.1 Tag Recommendation; 4.2.2 Tag Ranking; 4.3 Evaluation; 4.3.1 Tag Recommendation; 4.3.2 Tag Ranking; 4.4 Summary; References; Chapter 5: Soundtrack Recommendation for UGVs; 5.1 Introduction; 5.2 Music Video Generation; 5.2.1 Scene Moods Prediction Models; 5.2.1.1 Geo and Visual Features; 5.2.1.2 Scene Moods Classification Model; 5.2.1.3 Scene Moods Recognition; 5.2.2 Music Retrieval Techniques; 5.2.2.1 Heuristic Method for Soundtrack Retrieval; 5.2.2.2 Post-Filtering with User Preferences.
  • 5.2.3 Automatic Music Video Generation Model; 5.3 Evaluation; 5.3.1 Dataset and Experimental Settings; 5.3.1.1 Emotion Tag Space; 5.3.1.2 GeoVid Dataset; 5.3.1.3 Soundtrack Dataset; 5.3.1.4 Evaluation Dataset; 5.3.2 Experimental Results; 5.3.2.1 Scene Moods Prediction Accuracy; 5.3.2.2 Soundtrack Selection Accuracy; 5.3.3 User Study; 5.4 Summary; References; Chapter 6: Lecture Video Segmentation; 6.1 Introduction; 6.2 Lecture Video Segmentation; 6.2.1 Prediction of Video Transition Cues Using Supervised Learning; 6.2.2 Computation of Text Transition Cues Using N-Gram Based Language Model.
  • 6.2.2.1 Preparation; 6.2.2.2 Title/Sub-Title Text Extraction; 6.2.2.3 Transition Time Recommendation from SRT File; 6.2.3 Computation of SRT Segment Boundaries Using a Linguistic-Based Approach; 6.2.4 Computation of Wikipedia Segment Boundaries; 6.2.5 Transition File Generation; 6.3 Evaluation; 6.3.1 Dataset and Experimental Settings; 6.3.2 Results from the ATLAS System; 6.3.3 Results from the TRACE System; 6.4 Summary; References; Chapter 7: Adaptive News Video Uploading; 7.1 Introduction; 7.2 Adaptive News Video Uploading; 7.2.1 NEWSMAN Scheduling Algorithm; 7.2.2 Rate-Distortion (R-D) Model.