Classical machine-learning auto-tuners for OS control struggle with semantic gaps, brittle rewards, and unsafe exploration. We introduce an online, LLM-driven agent that emulates expert reasoning for continuous OS optimization. When tuning the Linux Completely Fair Scheduler's hyperparameters, the agent outperforms Bayesian optimization by 5% in single-parameter tuning and 7.1% in two-parameter co-tuning, and a human expert by 2.98% overall, while converging faster and adapting more quickly to workload changes. When application counters are unavailable, system-level proxies such as instructions per cycle (IPC) preserved tail latency in our setup. Building on these results, we propose adopting the Model Context Protocol (MCP) for tool and resource discovery, invocation, and logging, and layering transactional apply/commit/revert, host-mediated approval gates, and policy controls into the OS-tuning server and host to ensure safe, auditable operation. Our results and reference design suggest a practical path toward safe, self-adapting OS control.
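To make the transactional apply/commit/revert idea concrete, here is a minimal sketch in Python of how such a transaction might wrap a single scheduler knob. The sysctl path, the `transactional_apply` helper, and the `measured_ipc_ok` guardrail are illustrative assumptions for this sketch, not details taken from the work above; writing the knob also requires root privileges.

```python
# Minimal sketch of a transactional apply/commit/revert wrapper around one
# tuning knob. The knob path below is an assumed (legacy) CFS sysctl location.
from contextlib import contextmanager
from pathlib import Path

KNOB = Path("/proc/sys/kernel/sched_latency_ns")  # assumed knob location


def measured_ipc_ok() -> bool:
    # Placeholder guardrail: a real agent would sample a system-level proxy
    # such as IPC (e.g., via perf) and compare it against a baseline.
    return True


@contextmanager
def transactional_apply(new_value: str):
    """Apply a candidate value; revert it unless the caller commits."""
    old_value = KNOB.read_text().strip()
    KNOB.write_text(new_value)          # apply
    txn = {"committed": False}
    try:
        yield txn
    finally:
        if not txn["committed"]:
            KNOB.write_text(old_value)  # revert on error or missing commit

# Usage: apply a candidate value, validate it against the proxy metric,
# and commit only if the guardrail check passes.
with transactional_apply("12000000") as txn:
    if measured_ipc_ok():
        txn["committed"] = True         # keep the new value
```

An approval gate would slot in naturally before the commit step: the host mediates whether the agent's proposed value may persist, and anything unapproved is rolled back by the same revert path.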
Zaid Qureshi, Vikram Sharma Mailthody, et al.
ASPLOS 2023
Aditya Bhosale, Laxmikant Kale, et al.
FlexScience 2025
Takumi Yanagawa, Chris Butler
KubeCon Japan 2025
Apoorve Mohan, Ming-Hung Chen, et al.
SC 2025