
| Configuration | Avg Acc | Drop |
|----------------------------------------------------|---------|-----------|
| Full Auto-Seed VL2 | 82.2 | — |
| w/o consistency loss \( \mathcal{L}_{\text{consist}} \) | 75.4 | -6.8 |
| w/o gradient-conditioned generation (random seeds) | 68.9 | -13.3 |
| w/o meta-update of \( G_\phi \) | 74.1 | -8.1 |
| w/o seed pruning (full memory) | 82.0 | -0.2 (ns) |




Auto-Seed VL2 outperforms all baselines, including ER-VLM with 10× more memory, and beats generative replay by over 13 points on average. The BLEU-4 score on C→F is particularly striking, indicating that generated seeds capture caption semantics well.

6.2 Ablation Study

Removing components from Auto-Seed VL2 on C→R:

During continual learning, the model is trained sequentially on each task. After learning \( \mathcal{T}_t \), the model should perform well on all seen tasks \( \mathcal{T}_{1:t} \) without access to previous data. We allow a small episodic memory \( M \) (size \( K \)) that stores generated seeds, not real examples.
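The bounded seed memory described above can be sketched as a simple buffer. This is a minimal illustrative sketch, not the paper's implementation: the class name `SeedMemory`, the `(task_id, seed)` record format, and the FIFO eviction policy are all assumptions (the paper's actual seed-pruning criterion is not specified here).

```python
import random


class SeedMemory:
    """Episodic memory of fixed size K that stores generated seeds,
    never real training examples. Hypothetical sketch; the eviction
    policy below is plain FIFO, chosen only for illustration."""

    def __init__(self, capacity):
        self.capacity = capacity          # the memory budget K
        self.buffer = []                  # list of (task_id, seed) pairs

    def add(self, task_id, seed):
        """Store one generated seed for a task, evicting if over budget."""
        self.buffer.append((task_id, seed))
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)            # evict the oldest seed (FIFO)

    def sample(self, n):
        """Draw up to n stored seeds uniformly for replay."""
        return random.sample(self.buffer, min(n, len(self.buffer)))
```

During training on task \( \mathcal{T}_t \), replayed seeds drawn with `sample` would be mixed into each batch so the model never touches real data from earlier tasks.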
