The Privacy Field Needs More Diversity

Before Kamala Harris became the first woman of color nominated for Vice President of the United States, she was one of the few Black, Asian-American female attorneys focused on privacy issues. In 2013, as California’s attorney general, she sponsored a law, which California enacted, requiring tech companies to disclose in their privacy policies whether they honor do-not-track requests and what personally identifiable information they collect from users. Two years later, in 2015, she was influential in banning “revenge porn,” the posting of explicit images or videos on the Internet without the depicted person’s permission. More recently, she and other senators urged the inclusion of a critical health care privacy bill in the federal COVID-19 relief package.

Harris’ current stature in the public eye highlights the continued need to work towards gender, racial, and ethnic diversity and representation in the legal and privacy professions. I’ve written previously about recognizing and advocating for women in the field and about opportunities to improve diversity in the cybersecurity profession.

We can learn from and build on work done by the Minority Corporate Counsel Association (MCCA) and the American Bar Association (ABA). In 2015, the MCCA founded the Black General Counsel 2025 Initiative to track and promote Black legal leaders in Fortune 1000 companies. Over the subsequent four years, the initiative helped raise the percentage of Black general counsel from 3.8% to 5.3%, in part through conscious efforts to develop the pipeline of Black legal talent, prepare junior Black attorneys for leadership roles, and raise awareness of corporate diversity issues. Two ABA efforts of note are the Coalition on Racial and Ethnic Justice (COREJ), which “examines issues stemming from the intersection of race and ethnicity with the legal system,” and the association’s broader racial justice advocacy, which supports “issues addressing bias, racism and prejudice in the justice system and society.”

The diversity tracking reported for the privacy field is largely around gender representation. Roughly 50% of privacy professionals are women, in direct contrast to the gender imbalances that persist in security and other tech fields. According to the 2017 Global Information Security Workforce Study: Women in Cybersecurity, only 11% of cybersecurity positions are held by women. Perhaps that is in part because the field has not been hospitable to women. The report also reveals widespread discrimination: 51% of women report discrimination in the cybersecurity workforce, and 87% report unconscious discrimination. Furthermore, 54% of women report unexplained delays or denials in career advancement, and 22% experience “tokenism” in their cybersecurity roles.

What makes these statistics about gender and race particularly pertinent is that privacy is a vigorously growing field in need of talent. It’s predicted that by 2021, the field of cybersecurity, which is allied with privacy work, will have 3.5 million unfilled jobs. In the next 20 years, job opportunities in privacy work promise to be plentiful and rewarding. Hiring on LinkedIn for jobs with titles such as “chief privacy officer,” “privacy officer,” or “data protection officer” increased 77% from 2016 to 2019, according to an analysis that LinkedIn conducted for Axios. Companies around the world are finding it challenging to recruit experienced privacy professionals with relevant technical, legal, and engineering skills. According to the International Association of Privacy Professionals (IAPP), the largest and most comprehensive global information privacy community and resource, “privacy is now a necessity of doing business.”

Why is it important that women, and especially people of color, be well represented in professional privacy positions? In addition to the general principle that greater diversity in the workplace expands the range of perspectives and experiences needed to solve complex problems and improve overall performance, two examples make the case especially vivid.


Facial recognition software


We now have the technology to identify a person from a digital image or video frame, usually by comparing individual facial features, including skin textures, against a large database. When surveillance monitors are installed in public places, the recording is done passively, without individual permission. Even worse, it has become fairly well established that false-positive rates for this kind of one-to-one matching are higher for Asians, African Americans, and Native Americans. Another way of using facial recognition software, one-to-many matching to determine whether an individual matches anyone in a database, is most likely to generate a false positive for African-American women.
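To make the mechanics concrete, here is a minimal sketch in Python of how one-to-one matching typically works: the system compares face embeddings and declares a match when their distance falls below a fixed threshold. Everything here (names, numbers, and the synthetic data) is hypothetical and deliberately exaggerated, but it shows how a single global threshold can produce very different false-positive rates when a model represents one demographic group less distinctly than another.

```python
# A minimal sketch (not any real system) of one-to-one face matching:
# accept a match when the distance between two face embeddings falls
# below a fixed threshold. All names, numbers, and data are hypothetical.
import numpy as np

THRESHOLD = 0.6  # a single global cutoff applied to every face pair

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray) -> bool:
    """Declare a match when the embeddings are close enough."""
    return np.linalg.norm(emb_a - emb_b) < THRESHOLD

def false_positive_rate(impostor_pairs) -> float:
    """Fraction of different-person pairs the system wrongly accepts."""
    hits = sum(is_same_person(a, b) for a, b in impostor_pairs)
    return hits / len(impostor_pairs)

# Exaggerated toy data: if a model embeds one group's faces in a much
# tighter cluster (e.g., because it saw less training data for that
# group), impostor pairs from that group sit closer together, and the
# same global threshold wrongly accepts far more of them.
rng = np.random.default_rng(0)
group_a = [(rng.normal(0, 1.0, 128), rng.normal(0, 1.0, 128)) for _ in range(1000)]
group_b = [(rng.normal(0, 0.03, 128), rng.normal(0, 0.03, 128)) for _ in range(1000)]
print("group A false-positive rate:", false_positive_rate(group_a))  # ~0.0
print("group B false-positive rate:", false_positive_rate(group_b))  # ~1.0
```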

Among other dangers, such false positives have all sorts of skewed ramifications when facial recognition software is used to identify a person who committed a crime. U.S. legislators have proposed several bills to limit or control its use. As Ed Markey, the Democratic senator from Massachusetts, has said:

“Facial recognition technology doesn’t just pose a grave threat to our privacy, it physically endangers Black Americans and other minority populations in our country.”

Bias in data analysis algorithms


Our lives and our society are increasingly shaped by automated analysis of vast amounts of data. Machines “learn” from data that humans create and use, often by scraping it from the Internet. For example, if enough data points associate the words “male” or “man” with the phrase “privacy professional,” then the algorithm will conclude that privacy professionals are male. As you might imagine, the negative implicit and explicit bias that many of us hold, often unwittingly, towards women and people of color “teaches” the algorithm to respond with similar bias. A diverse group of privacy professionals can play a role in reversing the kind of bias that leads to racial discrimination.
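As a simple illustration, here is a minimal sketch in Python, using a made-up toy corpus, of how such an association gets “learned”: the algorithm merely counts which gendered word co-occurs more often with “privacy professional” and adopts the majority association as if it were fact.

```python
# A minimal sketch with a made-up toy corpus: the "learning" here is
# nothing more than counting co-occurrences, so whatever skew exists in
# the text becomes the model's conclusion.
from collections import Counter

corpus = [
    "the male privacy professional reviewed the policy",
    "a male privacy professional led the audit",
    "the male privacy professional spoke at the conference",
    "a female privacy professional joined the team",
]

def gender_associations(documents, target="privacy professional"):
    """Count gendered words in sentences that mention the target phrase."""
    counts = Counter()
    for doc in documents:
        if target in doc:
            for word in ("male", "female"):
                if word in doc.split():
                    counts[word] += 1
    return counts

# Prints [('male', 3), ('female', 1)]: the skew in the data becomes the
# algorithm's "belief" that privacy professionals are male.
print(gender_associations(corpus).most_common())
```

Real systems use far more sophisticated statistics than this toy counter, but the underlying dynamic is the same: the model inherits whatever imbalance its training data carries.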

“We need to look beyond processing our data to render it ‘clean’ and ‘workable’; just because it is clean doesn’t mean that it is not biased and potentially harmful. We need to question the classification or segmentation in the databases that we mine. There are a lot of opportunities and educational resources on how to generate and apply datasets that have, to the best of our human abilities, the least amount of bias possible. We have to approach all things data with a microscopic lens of data privacy and a great depth of social awareness. Understanding the power and equity dynamics between the data subjects themselves and the data we are collecting and analyzing will make us better data professionals.”