dc.description.abstract |
To investigate the efficacy of information-sharing schemes in decentralized multi-agent systems, we first focus on a multi-agent framework for multi-task learning and then move to information sharing with experience-based agent modeling. Multi-task learning for convolutional neural networks (CNNs) has been shown to improve the performance of multiple related tasks. Traditional multi-task CNN models consider a small number of tasks selected under an assumption of high task commonality. Such an assumption leads to a rigid model structure that cannot accommodate a large number of tasks. Due to their rigid structure and qualitative task-grouping method, a new task that was not considered before cannot benefit from these multi-task learning models. To counter these problems, we propose Multi-Agent Multi-task Learning for Convolutional Neural Networks (MAMT), which learns task commonality and circumvents qualitative task grouping. MAMT treats each task as a separate agent, allowing task-wise incremental training. By treating each task as an agent, MAMT learns a cross-task connection for every layer in the deep network. MAMT's flexible structure can accommodate new tasks and is suitable for a large number of tasks. Our approach applies to both heterogeneous and homogeneous tasks. We validate our model on benchmark datasets, on which it outperforms traditional multi-task and single-task CNN models. We also demonstrate the efficacy of MAMT on unbalanced and small datasets. |
en_US |