The Viking village of Berk, a small town in the fictional world of the animated film How to Train Your Dragon, is frequently the target of attacks by—gasp—dragons!
The son of the Viking chief, a young boy named Hiccup, is seen by his father and others as unfit to fight against the onslaught of winged menaces who snatch away valuable livestock and destroy property in their town. To make up for his lack of physical strength and fighting prowess, Hiccup tinkers with gizmos and gadgets that augment his abilities—many of which fail spectacularly.
Yet one day, Hiccup’s fortunes take a turn. One of his inventions, a kind of net launcher, injures a dragon flying over the island, bringing it crashing to the ground. Hiccup tracks down the dragon in the forest. Finally, the time has come for him to join the ranks of Berk’s dragon slayers! But when Hiccup finds the dragon, downed and unable to fly from injuries sustained in the crash, he simply can’t bring himself to kill it. Instead, he uses his tinkering smarts to fashion a kind of prosthesis that allows the dragon to fly again, under Hiccup’s control.
Hiccup interacts with a number of other captive dragons, finding them all to be benevolent and trainable. In fact, the only reason they attack Berk is because one particularly ornery Red Death dragon threatens to gobble them up whole if they don’t. Ultimately, Hiccup’s “pet” dragons help turn the tide against the mean old beast at the center of the circle of violence. The death of the Red Death dragon ushers in a new epoch—one where dragons and the people of Berk can peacefully coexist.
It’s a cute movie, one that can teach us a valuable lesson about peaceful coexistence and about the danger of jumping to conclusions, particularly when those conclusions are based only on partial information. It’s a lesson we might do well to heed, particularly when it comes to “monsters” of our own creation. I’m talking about Artificial Intelligence (AI).
AI refers to intelligence demonstrated by machines, drawing on machine learning, robotics, and a number of other fields within computer science. So far, computers using artificial intelligence have been able to compose their own original pieces of music, create their own language, dominate strategy games, and autonomously operate motor vehicles. And yet, they’re not necessarily benevolent. They’ve also gone on racist tirades, and Elon Musk has called AI “humanity’s greatest existential threat” and likened developing it to “summoning the demon.”
But perhaps AI is neither friend nor foe, at least not inherently. What’s far more likely is that, like the dragons of Berk, AI becomes what we treat and train it to be. It has spewed hatred only when it has been taught hatred, and created art when it has been taught art. It stands to reason that, with the right training, AI can be a force for good.
As much as How to Train Your Dragon, in its title and substance, seems to be about training dragons, it’s also about training people—about getting people to see beyond their initial impressions, to overcome the fear of the unknown, and to work cooperatively with powers perhaps greater than our own. In terms of AI, a similar attitude is equally important, if not more so, particularly when it concerns retaining control of the great power AI represents—when it concerns our privacy, security protections, and ethical behavior.
To “learn,” so to speak, AI needs to be fed data. Once solely the domain of computer scientists, AI is being democratized by tools like Microsoft’s Cognitive Toolkit, which make it possible for regular people to develop their own artificially intelligent programs and applications with the power of a laptop.
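To make the point concrete, here is a toy sketch of that idea—not any real AI system or the Cognitive Toolkit, just an invented, bare-bones word-count classifier. The labels and example sentences are made up; what matters is that the program has no opinion of its own. Feed it different examples and the same code will draw different conclusions:

```python
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(model, text):
    """Pick the label whose training vocabulary best matches the text."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# The model knows nothing of "friendly" or "hostile" beyond its diet of examples.
examples = [
    ("what a lovely helpful friend", "friendly"),
    ("thank you kind and helpful", "friendly"),
    ("go away you awful menace", "hostile"),
    ("awful terrible menace attack", "hostile"),
]
model = train(examples)
print(classify(model, "a kind and lovely friend"))  # friendly
print(classify(model, "terrible awful attack"))     # hostile
```

Swap the labels in the training data and the verdicts swap with them—a crude illustration of why what we feed these systems matters as much as how we build them.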
While such democratization is no doubt exciting, it means training humans—ourselves—to use these powerful tools legally and responsibly. We need to set limitations, particularly around privacy. For instance, because of the vast amounts of data it takes to train AI systems, these programs will almost inevitably run up against the strict limitations that GDPR and other privacy regulations place on using personal data.
There is also a wide range of ethical concerns when it comes to utilizing AI, particularly as it relates to privacy, surveillance, and the potential for misuse and discrimination. For instance, a number of companies have begun using facial recognition technology to scan interviewees’ faces during job interviews. While proponents of the tech have praised it for allowing employers to interview a greater number of applicants, and to move beyond CV-related constraints in the hiring process, others have warned that it stands to reinforce existing bias and to weed out candidates who don’t fit a narrow definition of who would likely excel in a given position. In law enforcement, others have expressed concern over using AI to flag individuals as “criminals” before they actually commit a crime—an application that poses a serious potential for discrimination.
What seems most important—for privacy, for ethics, and for people and society—is training, both of ourselves and of our AI. One hopes that the AI institutes popping up at universities like MIT will tackle not only technology but also ethics and data protection, and that institutions will look to design thinking to operationalize AI ethics. The EU advocates a human-centric approach, one that
…strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights…all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations to come.
At the end of the day, AI has the potential to be an incredibly powerful tool. With the right training—of our tools and ourselves—it can even be a powerful tool for good.