Today's "machine-learning" systems, trained on data, are so effective that we've invited them to see and hear for us--and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. The Alignment Problem offers a reckoning with humanity's biases and blind spots, our own unstated assumptions and often contradictory goals. It takes a hard look not only at our technology but at our culture--and finds a story by turns harrowing and hopeful.
Brian Christian, an accomplished technology writer, offers a nuanced and captivating exploration of this white-hot topic, giving us along the way a survey of the state of machine learning and of the challenges it faces ... As Mr. Christian reminds us, we must attend to 'the things that are not easily quantified or do not easily admit themselves into our models.' He adds that the 'ineffable need not cede entirely to the explicit'--a timely reminder that even in our age of big data and deep learning, there will always be more things in heaven and earth than are dreamt of in our algorithms.
Christian (The Most Human Human), a writer and lecturer on technology-related issues, delivers a riveting and deeply complex look at artificial intelligence and the significant challenge of creating computer models that 'capture our norms and values.' ... Though it's tempting to assume a doom-and-gloom outlook while reading of these problems, Christian refreshingly insists that 'our ultimate conclusions need not be grim,' as a new subset of computer scientists 'focused explicitly on the ethics and safety of machine learning' is working to bridge the gap between human values and AI learning styles. Lay readers will find Christian's revealing study to be a helpful guide to an urgent problem in tech.