libo-huang/Awesome-Incremental-Generative-Learning

Awesome Incremental / Continual / Lifelong Generative Learning

📌 Outline

📕 Paper

  2024 | 2023 | 2022 | 2021 | 2020 | 2019 | 2018 | 2017 | Pre-2017

πŸ‘ Contribute


📕 Paper

2024

  • (AAAI 2024) eTag: Class-Incremental Learning via Embedding Distillation and Task-Oriented Generation [paper] [code]
  • (CVPR 2024) SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection [paper]
  • (CVPR 2024) Generative Multi-modal Models are Good Class Incremental Learners [paper] [code]
  • (CVPR 2024) Online Task-Free Continual Generative and Discriminative Learning via Dynamic Cluster Memory [paper] [code]
  • (ICML 2024) COPAL: Continual Pruning in Large Language Generative Models [paper]
  • (ACMMM 2024) Generating Prompts in Latent Space for Rehearsal-free Continual Learning [paper] [code]
  • (TMLR 2024) Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA [paper]
  • (ICLR Tiny Papers 2024) KFC: Knowledge Reconstruction and Feedback Consolidation Enable Efficient and Effective Continual Generative Learning [paper] [code]
  • (arXiv 2024) Diffusion Model Meets Non-Exemplar Class-Incremental Learning and Beyond [paper]
  • (arXiv 2024) DiffClass: Diffusion-Based Class Incremental Learning [paper]
  • (arXiv 2024) CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning [paper]

2023

  • (ICCV 2023) LFS-GAN: Lifelong Few-Shot Image Generation [paper] [code]
  • (ICCV 2023) Generating Instance-level Prompts for Rehearsal-free Continual Learning [paper] [code]
  • (ICCV 2023) When Prompt-based Incremental Learning Does Not Meet Strong Pretraining [paper] [code]
  • (ICCV 2023) What does a platypus look like? Generating customized prompts for zero-shot image classification [paper] [code]
  • (ICML 2023) Poisoning Generative Replay in Continual Learning to Promote Forgetting [paper] [code]
  • (ICML 2023) DDGR: Continual Learning with Deep Diffusion-based Generative Replay [paper] [code]
  • (NeurIPS 2023) Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models [paper] [code]
  • (ICLR 2023) Better Generative Replay for Continual Federated Learning [paper] [code]
  • (Neural Networks 2023) Generative negative replay for continual learning [paper] [code]

2022

  • (ECCV 2022) Generative Negative Text Replay for Continual Vision-Language Pretraining [paper]
  • (ACL 2022) Continual Sequence Generation with Adaptive Compositional Modules [paper] [code]
  • (Journal of Imaging 2022) Unified probabilistic deep continual learning through generative replay and open set recognition [paper] [code]
  • (CoLLAs 2022) Continual Learning with Foundation Models: An Empirical Study of Latent Replay [paper] [code]
  • (ACMMM 2022) Semantics-Driven Generative Replay for Few-Shot Class Incremental Learning [paper]
  • (CVPR 2022) Learning to imagine: Diversify memory for incremental learning using unlabeled data [paper]

2021

  • (CVPR 2021) Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation [paper]
  • (CVPR 2021) Efficient Feature Transformations for Discriminative and Generative Continual Learning [paper] [code]
  • (ICCV 2021) Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [paper] [code]
  • (NeurIPS 2021) CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks [paper] [code]
  • (NeurIPS 2021) Generative vs. Discriminative: Rethinking The Meta-Continual Learning [paper] [code]
  • (IJCNN 2021) Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [paper]

2020

  • (CVPR 2020) Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion [paper] [code]
  • (ECCV 2020) Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation [paper]
  • (NeurIPS 2020) GAN memory with no forgetting [paper] [code]
  • (Nature Communications 2020) Brain-inspired replay for continual learning with artificial neural networks [paper] [code]
  • (Neurocomputing 2020) Lifelong generative modeling [paper] [code]
  • (IJCNN 2020) Catastrophic forgetting and mode collapse in GANs [paper] [code]
  • (PhD Thesis 2020) Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [paper]
  • (ICLR-W 2020) Brain-like replay for continual learning with artificial neural networks [paper]

2019

  • (CVPR 2019) Learning to remember: A synaptic plasticity driven framework for continual learning [paper] [code]
  • (IJCAI 2019) Closed-loop Memory GAN for Continual Learning [paper]
  • (ICCV 2019) Lifelong GAN: Continual learning for conditional image generation [paper]
  • (IJCAI 2019) Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay [paper]
  • (NeurIPS 2019) Continual Unsupervised Representation Learning [paper] [code]
  • (IJCNN 2019) Generative Models from the perspective of Continual Learning [paper] [code]
  • (ICANN 2019) Marginal replay vs conditional replay for continual learning [paper]

2018

  • (ICLR 2018) Variational Continual Learning [paper] [code]
  • (ICLR 2018) Memorization precedes generation: Learning unsupervised GANs with memory networks [paper] [code]
  • (NeurIPS 2018) Memory Replay GANs: Learning to Generate New Categories without Forgetting [paper] [code]
  • (BMVC 2018) Exemplar-Supported Generative Reproduction for Class Incremental Learning [paper] [code]
  • (NeurIPS-W 2018) Improving and Understanding Variational Continual Learning [paper] [code]
  • (NeurIPS-W 2018) Continual Classification Learning Using Generative Models [paper]
  • (NeurIPS-W 2018) Self-Supervised GAN to Counter Forgetting [paper]
  • (arXiv 2018) Generative replay with feedback connections as a general strategy for continual learning [paper] [code]

2017

  • (NeurIPS 2017) Continual Learning with Deep Generative Replay [paper]
  • (arXiv 2017) Continual Learning in Generative Adversarial Nets [paper]

Pre-2017

  • (Connection Science 1995) Catastrophic Forgetting, Rehearsal and Pseudorehearsal [paper]

πŸ‘ Contribute [chinese version]

1. Fork the Repository: Click on the Fork button in the top-right corner to create a copy of the repository in your GitHub account.

2. Create a New Branch: In your forked repository, create a new branch (e.g., "libo") by using the branch selector button near the top-left (usually labeled master or main).

3. Make Your Changes: Switch to your new branch using the same selector. Then, click the Edit file button at the top right and make your changes. Add entries in the following format:

- (**journal/conference_name year**) paper_name [[paper](online_paper_link)] [[code](online_code_link)]

4. Commit Changes: Save your changes by clicking the Commit changes button in the upper-right corner. Enter a commit message (e.g., "add 1 cvpr'24 paper") and an extended description if necessary, then confirm your changes by clicking the Commit changes button again at the bottom right.

5. Create a Pull Request: Go back to your forked repository and click Compare & pull request. Alternatively, select your branch from the branch selector and click Open pull request from the Contribute drop-down menu. Fill out the title and description for your pull request, and click Create pull request to submit it.
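As a concrete illustration of the entry format from step 3, the eTag paper already listed in the 2024 section would be written like this (with `online_paper_link` and `online_code_link` replaced by the actual URLs):

```markdown
- (**AAAI 2024**) eTag: Class-Incremental Learning via Embedding Distillation and Task-Oriented Generation [[paper](online_paper_link)] [[code](online_code_link)]
```

If a paper has no public code, simply omit the `[[code](...)]` part.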
