Agenda:
- Introduction and Motivation for Studying Parameter-Efficient Learning
- Background: Large-Scale Pre-trained and Foundation Models
  - Definition and theory of parameter-efficient learning
  - Basics of pre-trained model representation error analysis
- Editing Models with Task Arithmetic (see the sketch after this agenda)
  - Advanced settings of task vectors
  - Multimodal weight merging
    - BERT + HuBERT for ASR
    - ViT + AST for acoustic modeling
- In-Context Learning
  - Frozen model adaptation through long context windows
- New Approaches to Neural Model Reprogramming
  - Reprogramming for medical images and DNA with 1B+ LLMs
- Prompting Large Language Models
  - Connection between prompting and parameter-efficient learning
  - Prompting large language models for reasoning
    - ReAct, Plan-and-Solve, and Tree-of-Thought prompting
  - Faithfulness and robustness of LLM reasoning
  - Using LLMs for tool use
  - Automatic evaluation using large language models by prompting
    - LLM evaluation and G-Eval
- Parameter-Efficient Learning for Speech Processing
  - Adapting text large language models for speech processing
    - Adapting a text LLM (e.g., LLaMA) for spoken language modeling
  - Prompting and instruction tuning on speech pre-trained models
    - Semantic and acoustic tokens for speech language models
    - Prompting and instruction tuning for various speech processing tasks
- Conclusion and Open Questions
  - Lessons learned: a signal processor wandering in the land of large-scale models
  - Available resources and code for research in parameter-efficient learning
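Since the agenda only names the techniques, here is a minimal sketch of the weight arithmetic behind "Editing Models with Task Arithmetic" (task vectors in the sense of Ilharco et al., ICLR 2023). All names and the toy float weights below are hypothetical stand-ins rather than the tutorial's actual checkpoints; with real models the same element-wise operations would run over state-dict tensors.

def task_vector(pretrained, finetuned):
    # A task vector is the parameter-space delta: tau = theta_finetuned - theta_pretrained.
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, lam=0.5):
    # Edit the base model: theta_edited = theta_pre + lam * sum_t tau_t.
    edited = dict(pretrained)
    for tau in vectors:
        for k, v in tau.items():
            edited[k] = edited[k] + lam * v
    return edited

# Toy scalar "weights" (hypothetical values) to keep the sketch self-contained.
pretrained  = {"w": 1.0, "b": 0.0}
finetuned_a = {"w": 1.4, "b": 0.2}   # e.g., fine-tuned on task A
finetuned_b = {"w": 0.8, "b": -0.1}  # e.g., fine-tuned on task B

tau_a = task_vector(pretrained, finetuned_a)
tau_b = task_vector(pretrained, finetuned_b)

# Adding task vectors merges skills; negating one (adding -tau) suppresses a task.
merged = apply_task_vectors(pretrained, [tau_a, tau_b], lam=0.5)
print(merged)  # {'w': 1.1, 'b': 0.05}

Because a task vector is just a delta in parameter space, the same scaled addition (with a tuned coefficient) is the basic operation behind the multimodal weight-merging sessions as well, e.g. combining text and speech encoders in the BERT + HuBERT portion of the agenda.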