Unsolved Problems in ML Safety
by Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt
2021
Abstract
Machine learning (ML) systems are rapidly increasing in size, are acquiring
new capabilities, and are increasingly deployed in high-stakes settings. As
with other powerful technologies, safety for ML should be a leading research
priority. In response to emerging safety challenges in ML, such as those
introduced by recent large-scale models, we provide a new roadmap for ML Safety
and refine the technical problems that the field needs to address. We present
four problems ready for research, namely withstanding hazards ("Robustness"),
identifying hazards ("Monitoring"), steering ML systems ("Alignment"), and
reducing risks to how ML systems are handled ("External Safety"). Throughout,
we clarify each problem's motivation and provide concrete research directions.
Archived Files and Locations
application/pdf, 1.9 MB — arxiv.org (repository); web.archive.org (webarchive)
arXiv: 2109.13916v1