Research
Our research has been generously supported by ARO, NSF, AFRL, IARPA, BlueHalo, and Salesforce.
2023
Savadikar, Chinmay; Dai, Michelle; Wu, Tianfu
Learning to Grow Artificial Hippocampi in Vision Transformers for Resilient Lifelong Learning Online
2023, visited: 14.03.2023.
@online{artihippo,
title = {Learning to Grow Artificial Hippocampi in Vision Transformers for Resilient Lifelong Learning},
author = {Chinmay Savadikar and Michelle Dai and Tianfu Wu},
url = {https://arxiv.org/pdf/2303.08250.pdf},
year = {2023},
date = {2023-03-14},
urldate = {2023-03-14},
abstract = {Lifelong learning without catastrophic forgetting (i.e., resiliency) possessed by human intelligence is entangled with sophisticated memory mechanisms in the brain, especially the long-term memory (LM) maintained by Hippocampi. To a certain extent, Transformers have emerged as the counterpart “Brain” of Artificial Intelligence (AI), and yet leave the LM component under-explored for lifelong learning settings. This paper presents a method of learning to grow Artificial Hippocampi (ArtiHippo) in Vision Transformers (ViTs) for resilient lifelong learning. With a comprehensive ablation study, the final linear projection layer in the multi-head self-attention (MHSA) block is selected in realizing and growing ArtiHippo. ArtiHippo is represented by a mixture of experts (MoEs). Each expert component is an on-site variant of the linear projection layer, which is maintained via neural architecture search (NAS) with the search space defined by four basic growing operations \textendash skip, reuse, adapt, and new in lifelong learning. The LM of a task consists of two parts: the dedicated expert components (as model parameters) at different layers of a ViT learned via NAS, and the mean class-tokens (as stored latent vectors for measuring task similarity) associated with the expert components. For a new task, a hierarchical task-similarity-oriented exploration-exploitation sampling based NAS is proposed to learn the expert components. The task similarity is measured based on the normalized cosine similarity between the mean class-token of the new task and those of old tasks. The proposed method is complementary to prompt-based lifelong learning with ViTs. In experiments, the proposed method is tested on the challenging Visual Domain Decathlon (VDD) benchmark and the recently proposed 5-Dataset benchmark. It obtains consistently better performance than the prior art with sensible ArtiHippo learned continually.},
howpublished = {arXiv preprint},
keywords = {},
pubstate = {published},
tppubtype = {online}
}
Lifelong learning without catastrophic forgetting (i.e., resiliency) possessed by human intelligence is entangled with sophisticated memory mechanisms in the brain, especially the long-term memory (LM) maintained by Hippocampi. To a certain extent, Transformers have emerged as the counterpart “Brain” of Artificial Intelligence (AI), and yet leave the LM component under-explored for lifelong learning settings. This paper presents a method of learning to grow Artificial Hippocampi (ArtiHippo) in Vision Transformers (ViTs) for resilient lifelong learning. With a comprehensive ablation study, the final linear projection layer in the multi-head self-attention (MHSA) block is selected in realizing and growing ArtiHippo. ArtiHippo is represented by a mixture of experts (MoEs). Each expert component is an on-site variant of the linear projection layer, which is maintained via neural architecture search (NAS) with the search space defined by four basic growing operations – skip, reuse, adapt, and new in lifelong learning. The LM of a task consists of two parts: the dedicated expert components (as model parameters) at different layers of a ViT learned via NAS, and the mean class-tokens (as stored latent vectors for measuring task similarity) associated with the expert components. For a new task, a hierarchical task-similarity-oriented exploration-exploitation sampling based NAS is proposed to learn the expert components. The task similarity is measured based on the normalized cosine similarity between the mean class-token of the new task and those of old tasks. The proposed method is complementary to prompt-based lifelong learning with ViTs. In experiments, the proposed method is tested on the challenging Visual Domain Decathlon (VDD) benchmark and the recently proposed 5-Dataset benchmark. It obtains consistently better performance than the prior art with sensible ArtiHippo learned continually.
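The abstract specifies two concrete ingredients that can be sketched in code: task similarity measured as the normalized cosine similarity between the mean class-token of the new task and those of old tasks, and exploration-exploitation sampling over the four growing operations (skip, reuse, adapt, new). Below is a minimal PyTorch sketch of those ingredients only; the function names, thresholds, and exploration probability are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

# Hypothetical sketch of two ingredients from the abstract; names, thresholds,
# and the exploration probability are assumptions, not the authors' code.

OPS = ["skip", "reuse", "adapt", "new"]  # the four growing operations

def task_similarity(new_task_token: torch.Tensor,
                    old_task_tokens: torch.Tensor) -> torch.Tensor:
    """Normalized cosine similarity between the mean class-token of the
    new task (shape D) and the stored mean class-tokens of old tasks (T x D)."""
    sims = F.cosine_similarity(new_task_token.unsqueeze(0), old_task_tokens, dim=-1)
    return F.softmax(sims, dim=0)  # normalize across the old tasks

def sample_growing_op(best_similarity: float, explore_prob: float = 0.2) -> str:
    """Exploration-exploitation choice of a growing operation for one expert.
    High similarity to a previous task favors reuse/adapt; low similarity
    favors growing a new expert. The thresholds are illustrative only."""
    if torch.rand(1).item() < explore_prob:                 # exploration: uniform choice
        return OPS[torch.randint(len(OPS), (1,)).item()]
    if best_similarity > 0.8:                               # exploitation heuristics
        return "reuse"
    if best_similarity > 0.5:
        return "adapt"
    return "new"

# Example: decide how to grow one MHSA projection expert for a new task.
old_tokens = torch.randn(3, 768)        # mean class-tokens of 3 old tasks
new_token = torch.randn(768)            # mean class-token of the new task
weights = task_similarity(new_token, old_tokens)
print(sample_growing_op(weights.max().item()))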
2019
Li, Xilai; Zhou, Yingbo; Wu, Tianfu; Socher, Richard; Xiong, Caiming
Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting Proceedings Article
In: International Conference on Machine Learning (ICML), 2019.
@inproceedings{Learn2grow,
title = {Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting},
author = {Xilai Li and Yingbo Zhou and Tianfu Wu and Richard Socher and Caiming Xiong},
url = {https://arxiv.org/abs/1904.00310
https://news.ncsu.edu/2019/05/ai-continual-learning-framework/
https://www.army.mil/article/222090/army_funded_research_boosts_memory_of_ai_systems
https://news.science360.gov/archives/20190517
https://techxplore.com/news/2019-05-framework-artificial-intelligence.html
https://www.wraltechwire.com/2019/05/15/researchers-create-framework-to-help-artificial-intelligence-systems-be-less-forgetful/},
year = {2019},
date = {2019-06-11},
booktitle = {International Conference on Machine Learning (ICML)},
abstract = {Addressing catastrophic forgetting is one of the key challenges in continual learning where machine learning systems are trained with sequential or streaming tasks. Despite recent remarkable progress in state-of-the-art deep learning, deep neural networks (DNNs) are still plagued with the catastrophic forgetting problem. This paper presents a conceptually simple yet general and effective framework for handling catastrophic forgetting in continual learning with DNNs. The proposed method consists of two components: a neural structure optimization component and a parameter learning and/or fine-tuning component. By separating the explicit neural structure learning and the parameter estimation, not only is the proposed method capable of evolving neural structures in an intuitively meaningful way, but also shows strong capabilities of alleviating catastrophic forgetting in experiments. Furthermore, the proposed method outperforms all other baselines on the permuted MNIST dataset, the split CIFAR100 dataset and the Visual Domain Decathlon dataset in continual learning setting.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Addressing catastrophic forgetting is one of the key challenges in continual learning where machine learning systems are trained with sequential or streaming tasks. Despite recent remarkable progress in state-of-the-art deep learning, deep neural networks (DNNs) are still plagued with the catastrophic forgetting problem. This paper presents a conceptually simple yet general and effective framework for handling catastrophic forgetting in continual learning with DNNs. The proposed method consists of two components: a neural structure optimization component and a parameter learning and/or fine-tuning component. By separating the explicit neural structure learning and the parameter estimation, not only is the proposed method capable of evolving neural structures in an intuitively meaningful way, but also shows strong capabilities of alleviating catastrophic forgetting in experiments. Furthermore, the proposed method outperforms all other baselines on the permuted MNIST dataset, the split CIFAR100 dataset and the Visual Domain Decathlon dataset in the continual learning setting.
- https://arxiv.org/abs/1904.00310
- https://news.ncsu.edu/2019/05/ai-continual-learning-framework/
- https://www.army.mil/article/222090/army_funded_research_boosts_memory_of_ai_systems
- https://news.science360.gov/archives/20190517
- https://techxplore.com/news/2019-05-framework-artificial-intelligence.html
- https://www.wraltechwire.com/2019/05/15/researchers-create-framework-to-help-artificial-intelligence-systems-be-less-forgetful/
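The Learn to Grow abstract above describes two separated components: explicit neural structure optimization and ordinary parameter learning/fine-tuning. Below is a minimal PyTorch sketch of that separation, relaxing a per-layer structure choice (illustrated here with reuse, adapt, and new candidates) into softmax-weighted architecture parameters optimized separately from the layer weights; the class and variable names and the optimizer split are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

# Hypothetical sketch: continuous relaxation of a per-layer structure choice,
# kept separate from ordinary weight training. Names are illustrative only.

class StructureChoice(nn.Module):
    """Mixes three candidate realizations of one layer for a new task."""

    def __init__(self, old_layer: nn.Module, dim: int):
        super().__init__()
        self.reuse = old_layer                        # frozen layer from previous tasks
        for p in self.reuse.parameters():
            p.requires_grad_(False)
        self.adapt = nn.Linear(dim, dim)              # small task-specific adapter on top of reuse
        self.new = nn.Linear(dim, dim)                # fresh layer trained from scratch
        self.alpha = nn.Parameter(torch.zeros(3))     # structure (architecture) parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.alpha, dim=0)          # continuous relaxation of the discrete choice
        return w[0] * self.reuse(x) + w[1] * self.adapt(self.reuse(x)) + w[2] * self.new(x)

# Separation of the two components: structure parameters and layer weights
# get their own optimizers (e.g., updated on alternating batches).
layer = StructureChoice(nn.Linear(64, 64), dim=64)
structure_opt = torch.optim.Adam([layer.alpha], lr=3e-4)
weight_opt = torch.optim.SGD(
    [p for n, p in layer.named_parameters() if n != "alpha" and p.requires_grad],
    lr=1e-2,
)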