Towards Using Large Language Models and Deep Reinforcement Learning for Inertial Fusion Energy. Vadim Elisseev, Max Esposito, et al. NeurIPS 2024.
AIHWKIT-Lightning: A Scalable HW-Aware Training Toolkit for Analog In-Memory Computing. Julian Büchel, William Simon, et al. NeurIPS 2024.
SELF-BART: A Transformer-based Molecular Representation Model using SELFIES. Indra Priyadarsini S, Seiji Takeda, et al. NeurIPS 2024.
On the role of noise in factorizers for disentangling distributed representations. Kumudu Geethan Karunaratne, Michael Hersche, et al. NeurIPS 2024.
Guaranteeing Conservation Laws with Projection in Physics-Informed Neural Networks. Anthony Baez, Wang Zhang, et al. NeurIPS 2024.
Regress, Don't Guess – A Regression-like Loss on Number Tokens for Language Models. Jonas Zausinger, Lars Pennig, et al. NeurIPS 2024.
Predicting LLM Inference Latency: A Roofline-Driven ML Method. Saki Imai, Rina Nakazawa, et al. NeurIPS 2024.
Thought of Search: Planning with Language Models Through The Lens of Efficiency. Michael Katz, Harsha Kokel, et al. NeurIPS 2024.
Enhancing Reasoning to Adapt Large Language Models for Domain-Specific Applications. Bo Wen, Xin Zhang. NeurIPS 2024.