
Data Ethics for Humans



Way back in 2014, when I first started my blog, I wrote about Privacy for Humans, a movement toward the human-centered use of technology. I expanded on this theme in my 2016 ebook, also called Privacy for Humans, which provides tools for mindfully cultivating privacy awareness, and those tools are just as applicable today as they were then.

Well, here we are in the future. It’s 2019, and we’ve seen the good, the bad, and the ugly in terms of privacy wins and fails. The bad and the ugly came in the form of high-profile data breaches and a general techlash: a backlash against ad tech, social media, and other tech companies that traffic in data. But there is also good. Data privacy and security are now more out in the open as topics for discussion, analysis, and scrutiny than perhaps they have ever been. And the attitude about privacy has become less “ignorance is bliss” and more “information is power.” Companies that may have treated privacy as something of an afterthought are having to reprioritize, and that’s a good thing for everyone.

What’s more, I’m noticing a shift beyond foundational privacy and security protections toward technology ethics, particularly the ethical use of data. The discussion goes beyond how users and companies can practice a mindful use of technology to ask what influence we might have over how technology builds ethics into its bones. And this isn’t a conversation that’s happening on the fringes. Whole conferences have sprung up around data ethics, and even some big companies, like Salesforce, have formed entire departments around the ethical and humane use of technology. But what is data ethics? And why should we care about it?

What is Data Ethics?

According to Brian Patrick Green, director of Technology Ethics at Santa Clara University’s Markkula Center for Applied Ethics, technology ethics is the application of ethical thinking to the practical concerns of technology. Swap out “technology ethics” for “data ethics,” and you’ll get our working definition of the same. Technology ethics, and by association data ethics, falls into two main categories.

First, these fields concern themselves with the ethics involved in developing new technology and new ways to use, manage, and otherwise interface with data. In other words, instead of asking whether we can do something, technology ethics asks whether we should: always, never, or depending on the context. For example, in a warming climate, is it ethical to develop tools that drain resources or exacerbate global warming? See, for example, recent reports on AI’s carbon footprint. And while automation in manufacturing might speed up production, is it ethical if it leaves fewer workers able to make a living?

And the answers aren’t always clear cut. Some technological innovations fall squarely in the gray area. For instance, the Tor browser allows individuals to browse the web anonymously and untraceably. While it’s a useful tool for protecting individual privacy, it has also allowed certain people to circumvent the law online. And what are the unintended consequences of facial recognition databases, especially when used by police or authoritarian regimes? Those are just two examples; plenty of technologies can be similarly right or wrong depending on context.

Second, technology ethics is interested in the ethical questions around the ways technology has made us powerful. We didn’t always have the power to edit our own genetic code. We didn’t always have the ability to post our private thoughts for the world to see. And we didn’t always have AI personal assistants. Now we do. Technology ethics asks what we, as individuals and organizations, are to do with that power. And because with great power comes great responsibility (thank you, Voltaire, or Spider-Man’s uncle, depending on whom you ask), technology ethics is more important than ever.

Ethics by Design

Today, tech moves faster than the legal system, meaning innovations and developments take place a lot faster than regulators can, well, regulate. But technology ethics may be able to fill some of the gaps not covered by existing laws, regulations, or best practices. Implementing Ethics by Design, that is, adopting ethical obligations in the development of new technologies, can be part of the solution. By building an ethical component into the development process, companies and individuals can wrestle with ethical conundrums in the abstract long before they become real-life ethical problems.

One organization, the Omidyar Network, a group of entrepreneurs, investors, innovators, and activists in the tech space, has gone so far as to develop what it calls the Ethical Operating System (Ethical OS). Ethical OS is “a practical framework designed to help makers of tech…anticipate risks or scenarios before they happen…Ethical OS…outlines emerging risks and scenarios…to help teams better future-proof their tech.” Other companies, like Microsoft, have implemented a set of guiding principles to make AI safer: fairness, reliability and safety, inclusiveness, accountability, transparency, and privacy. In the same spirit, as many as 40 countries have adopted comparable principles.

For companies, adopting a policy of Ethics by Design isn’t just the right thing to do; it’s also good for business. Acting ethically is in the best interest of customers: it builds trust and increases value. It can even become a selling point, as it recently has for Apple.

How, practically, can companies and privacy professionals leverage existing privacy and security programs, governance, and stakeholders to incorporate ethical principles? According to Ethical OS, these questions are a good place to start (a sketch of how a team might put them into practice follows the list):

* If today’s technology might someday be used in unexpected ways, how can you prepare?

* What risk categories should you pay special attention to now?

* And which design, team, or business model choices can actively safeguard users, communities, society, and organizations from future risk?
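
To make that concrete, here is one way a team might fold these questions into a pre-launch checklist. This is purely an illustrative sketch, not anything prescribed by Ethical OS itself; every name and structure below is hypothetical:

    from dataclasses import dataclass, field

    # Hypothetical pre-launch ethics review, loosely inspired by the
    # Ethical OS questions above. Names and structure are illustrative,
    # not part of any real framework or tool.

    @dataclass
    class EthicsQuestion:
        prompt: str
        owner: str = ""   # who on the team is accountable for the answer
        answer: str = ""  # recorded reasoning, not just yes/no

    @dataclass
    class EthicsReview:
        feature: str
        questions: list = field(default_factory=lambda: [
            EthicsQuestion("How might this be used in unexpected ways, and how do we prepare?"),
            EthicsQuestion("Which risk categories deserve special attention now?"),
            EthicsQuestion("Which design, team, or business-model choices safeguard users and society?"),
        ])

        def ready_to_ship(self) -> bool:
            # Gate the release: every question needs an owner and a recorded answer.
            return all(q.owner and q.answer for q in self.questions)

    review = EthicsReview(feature="face-matching search")
    review.questions[0].owner = "product lead"
    review.questions[0].answer = "Could enable stalking; restrict queries and log access."
    print(review.ready_to_ship())  # False until all three questions are answered

The point isn’t the code; it’s that the release gate forces someone to own an answer to each question before anything ships.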

It may sound obvious, but most technology isn’t inherently good or bad; that usually comes from how it’s implemented. One helpful exercise is “design fiction”: playing out the worst-case, Black Mirror-style dystopian scenario for how a technology might be used or abused. Asking the hard questions up front and building ethical decision-making into the process can certainly help nudge tech in the right direction.
