Monday 10 March 2014

A plain English guide to how natural language processing will transform computing

Buzz phrases such as “artificial intelligence,” “machine learning” and “natural language processing” are becoming increasingly commonplace within the tech industry. There is a lot of ambiguity around these phrases, so I’ll explain the substance behind the technologies and why I believe they’re transforming the way we live, work and play.

When I graduated from the University of Cambridge in 2007, I left with a compelling sense that the technology I'd been working with for the previous five years had the potential to change the world. I'd recently completed a Ph.D. on applying a new set of tools and techniques from the emerging field of Machine Learning (ML) to a range of tasks involving human languages, a field known as Natural Language Processing (NLP). If this sounds confusing, I'm not surprised; many of the concepts are inherently complex. Put simply, ML is about building software capable of learning how to perform tasks that are too complex to be solved via traditional programming techniques. For example, during my research I built programs that could recognize topics in news text, grade essays and filter spam email. When the tasks are language-focused, we call it NLP.
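To give a flavour of what this looks like in practice, here's a minimal sketch of a learned spam filter in Python, using the scikit-learn library. The training messages are invented for illustration, and a real filter would learn from many thousands of examples, but the shape of the approach is the same: show the program labelled examples and let it work out the rules for itself.

    # A toy spam filter that learns from labelled examples rather than
    # hand-written rules. The training messages are invented for
    # illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_messages = [
        "win a free prize now",               # spam
        "claim your free money today",        # spam
        "meeting moved to 3pm tomorrow",      # not spam
        "draft report attached for review",   # not spam
    ]
    train_labels = ["spam", "spam", "ham", "ham"]

    # Turn each message into a vector of word counts...
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(train_messages)

    # ...and let a Naive Bayes model learn which words signal spam.
    model = MultinomialNB()
    model.fit(X, train_labels)

    # The trained model can now classify messages it has never seen.
    test = vectorizer.transform(["free prize money"])
    print(model.predict(test))  # -> ['spam']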

This represents a fundamental shift in the way software engineers build complex systems. Historically, coding has been about distilling the expert knowledge of the programmer into a series of logical structures that cause the system to respond in predictable ways. For instance, accounting systems follow rules, encoded by software engineers, that automate the process of recording and managing accounts. However, many of the tasks we come up against in our information-saturated digital world require a level of sophistication that can’t be captured in a series of human-engineered logical rules. For instance, if I’m building a system to translate a sequence of text from one language into another, there’s no manageable set of rules I can encode that will solve that problem. However, if I create a framework that allows the software to learn from examples of previously translated sequences to make new translations, then the problem can be solved, at least in principle. In other words, the system distills the expertise it needs to complete the task from the data upon which it’s trained, rather than directly from the programmer, whose authorial role has now fundamentally changed. Evidently this new way of creating complex systems requires a lot of data, but happily the amount of available electronic data for training ML systems is growing at an irrepressible rate.
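To make the contrast with rule-based programming concrete, here's a deliberately crude sketch of translation by example in Python. Rather than encoding any grammar, the program simply counts which words co-occur across a few invented sentence pairs and guesses translations from those counts. Real statistical translation systems are vastly more sophisticated, but the principle of distilling expertise from training data is the same.

    # A toy word-for-word translator that learns from example pairs
    # rather than hand-coded rules. The sentence pairs are invented;
    # real systems use richer models and millions of examples.
    from collections import Counter, defaultdict

    parallel_examples = [
        ("the cat sleeps", "le chat dort"),
        ("the dog runs",   "le chien court"),
        ("a cat runs",     "un chat court"),
    ]

    # Count how often each source word appears alongside each target word.
    cooccur = defaultdict(Counter)
    for src, tgt in parallel_examples:
        for s in src.split():
            for t in tgt.split():
                cooccur[s][t] += 1

    def translate(sentence):
        # For each word, pick the target word it co-occurred with most.
        # Extremely crude: it only handles words seen in training, and
        # ambiguous words break it quickly.
        return " ".join(cooccur[w].most_common(1)[0][0]
                        for w in sentence.split())

    print(translate("the cat runs"))  # -> 'le chat court'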

It may be clear that such systems have potentially profound philosophical implications for their authors. They cause us to question commonly held definitions of understanding, intelligence and even free will. To take a simple example from my own experience, when building an ML system to grade essays, does it matter that the machine doesn’t “understand” the content of the essay in the same way a human being would? If you can demonstrate mathematically that the system is as reliable as an expert examiner, does it matter that the method by which it determines grades is based on subtle interactions between thousands of underlying “features”, without an overseeing sentient mind? What role does sentience actually play in the tasks most of us carry out on a daily basis anyway?
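To show what "underlying features" means in miniature, here's a hypothetical two-feature version of an essay grader: it represents each essay by its length and vocabulary diversity, then fits a linear model to some invented example grades. A genuine system would use thousands of features and far more data, but even this toy makes the point that grading can emerge from learned numerical patterns rather than sentient understanding.

    # A toy essay grader: represent each essay as numeric features and
    # fit a linear model to example grades. Essays and grades are
    # invented; real systems use thousands of features.
    from sklearn.linear_model import LinearRegression

    def features(essay):
        words = essay.split()
        return [
            len(words),                    # length in words
            len(set(words)) / len(words),  # vocabulary diversity
        ]

    train_essays = [
        "the the the cat cat sat sat sat",
        "the inquisitive cat observed the garden quietly",
        "a remarkably inquisitive cat observed the tranquil garden "
        "with evident delight",
    ]
    train_grades = [2.0, 6.0, 9.0]  # invented scores out of 10

    model = LinearRegression()
    model.fit([features(e) for e in train_essays], train_grades)

    # Same feature values as the second training essay, so the model
    # predicts a grade of about 6.
    new_essay = "the curious cat watched the quiet garden"
    print(model.predict([features(new_essay)]))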

Whatever the philosophical implications, software built around these new technologies is changing our lives, even if we don’t yet know it, and I believe this transformation heralds good news for us as consumers and citizens. These new systems will enable our personal devices to better adapt and anticipate what we need, right down to an individual level. The days of the generic tech experience are numbered. People will expect something completely tailored to them, from text-prediction algorithms that understand the context of what you’re writing to concierge systems that learn to preempt what you want to find, say or do next. In 20 years I believe we’ll be surrounded by invisible systems that mine a wealth of data about every aspect of our lives, constantly learning, adapting and enhancing our decision making, health and general wellbeing.
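As a taste of how even simple text prediction learns from context, here's a minimal bigram predictor: it counts which word most often follows each word in some invented training text, then suggests the most likely next word. Predictive keyboards use far longer contexts and enormous datasets, but the underlying idea is similar.

    # A toy next-word predictor: count which word most often follows
    # each word, then suggest the most frequent follower. Invented
    # training text; real systems model much longer contexts.
    from collections import Counter, defaultdict

    text = ("i am going to the office today "
            "i am going to be late "
            "i am happy to be here")

    followers = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        # Suggest the word that most often followed `word` in training.
        return followers[word].most_common(1)[0][0]

    print(predict_next("going"))  # -> 'to'
    print(predict_next("am"))     # -> 'going'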

There are downsides, of course. Data privacy and protection must be taken extremely seriously, and people are understandably wary of computers that can “think” and learn like humans. If algorithms start taking on the roles of teachers, personal assistants and others, does this distance us from each other? I believe we need to wrestle with these questions honestly and openly, and that the debate will ultimately lead us to a better understanding of what it means to be human in a technological world. Academic-sounding ideas like ML and NLP have clear implications for the tech industry and the way we live that extend far beyond our universities and research labs.
