Unsolved Problems in ML Safety

by Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt

Released as an article.

2021  

Abstract

Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards ("Robustness"), identifying hazards ("Monitoring"), steering ML systems ("Alignment"), and reducing risks arising from how ML systems are handled ("External Safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.

Archived Files and Locations

application/pdf  1.9 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2021-09-28
Version: v1
Language: en
arXiv: 2109.13916v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record