Multi-output learning (MoL) aims to predict multiple outputs for a single input, where the output values span diverse data types, such as binary, nominal, ordinal, and real-valued variables. Such learning tasks arise in a wide variety of real-world applications, including document classification, computer emulation, sensor network analysis, concept-based information retrieval, human action/causal induction, video analysis, image annotation/retrieval, gene function prediction, and brain science. Owing to this breadth of applications, multi-output learning has also been widely explored in the machine learning community, in settings such as multi-label/multi-class classification, multi-target regression, hierarchical classification with class taxonomies, label sequence learning, sequence alignment learning, and supervised grammar learning.
The theoretical properties of existing approaches to multi-output data are still not well understood, which motivates researchers to develop novel methodologies and theories that deepen our understanding of multi-output learning tasks. Moreover, emerging trends toward ultrahigh input and output dimensionality and complexly structured outputs pose formidable challenges for multi-output learning. It is therefore imperative to develop practical mechanisms and efficient optimization algorithms for large-scale applications. Deep learning has gained great popularity in recent years and has been applied to multi-label and multi-class classification problems; however, it remains non-trivial for practitioners to design deep neural networks that are appropriate for the broader family of multi-output learning tasks.
This workshop aims to showcase state-of-the-art scientific work along this direction. We welcome original submissions with significant novel results on modelling, algorithms, theory, and real-world applications in the field of multi-output learning.
Topics of Interest
Topics of interest include, but are not limited to:
Novel deep learning methods for multi-output learning tasks.
Novel modelling approaches for multi-output learning from new perspectives.
Statistical theory for multi-output learning.
Large-scale optimization algorithms for multi-output learning.
Sparse representation learning for large-scale multi-output learning.
Active learning for multi-output data.
Online learning for multi-output data.
Metric learning for multi-output data.
Multi-output learning with noisy data.
Multi-output learning with imbalanced data.
Workshop submissions and camera-ready versions will be handled by Microsoft CMT. Submit via https://cmt3.research.microsoft.com/IJCAIMOL2019.
Papers should be formatted according to the IJCAI formatting instructions for the Conference Track. Submissions of 2 pages will be considered for poster presentation, while submissions of at least 4 pages will be considered for oral presentation.
IJCAI-MoL is a non-archival venue and there will be no published proceedings; accepted papers will be posted on the workshop website. Authors may submit their work to other conferences and journals both in parallel with and after IJCAI-MoL'19. We also welcome submissions to IJCAI-MoL that are currently under review at other conferences and workshops.
At least one author from each accepted paper must register for the workshop. Please see the IJCAI 2019 Website for information about accommodation and registration.
Date: August 12, 2019
Venue: Room Sicily 2406 at The Venetian Macao Resort Hotel
08:45-09:20  Keynote Talk 1
  Title: Syntactically-Meaningful and Transferable Recursive Neural Networks for Fine-grained Sentiment Analysis
  Speaker: Sinno Jialin Pan (Nanyang Technological University)

09:20-09:40  Paper Talk 1
  Title: Using Evolutionary Multi-label Classification to Predict Malfunctions in Virtual Storage Systems Monitoring
  Authors: Yu Bai, Michael Bain

09:40-10:15  Keynote Talk 2
  Speaker: Vladimir Pavlovic (Rutgers University)

10:35-10:55  Paper Talk 2
  Title: Which Ones Are Speaking? Integrating Speaker Information for Multi-talker Speech Separation
  Author: Jing Shi

10:55-11:30  Keynote Talk 3
  Title: Extreme Classification
  Speaker: Manik Varma (Microsoft Research India)

11:30-11:50  Paper Talk 3
  Title: Sequence Learning to Estimate Time of Travel and Stop
  Authors: Jessie Sun, Shujuan Sun

11:50-12:30  Panel Discussion & Concluding Remarks
  Host: Chen Gong
  Guests: Manik Varma, Vladimir Pavlovic, Ivor W. Tsang
Submission Deadline: 05:00 PM (Pacific Time), May 16th, 2019
Acceptance Notification: June 5th, 2019
Camera-ready: August 1st, 2019
Chen Gong, Nanjing University of Science and Technology, China.
Weiwei Liu, University of New South Wales, Australia.
Xiaobo Shen, Nanjing University of Science and Technology, China.
Joey Tianyi Zhou, IHPC, A*STAR, Singapore.
Yew-Soon Ong, Nanyang Technological University, Singapore.
Ivor W. Tsang, University of Technology Sydney, Australia.