You shouldn’t have to be an expert to understand what happens to your data.
And yet, data-driven technologies have expanded so quickly over the past 20 years that many – perhaps most – people struggle to understand what it means for them. What kinds of data they generate as they go through everyday life, who is collecting that information, and how that data is being used: to manipulate their viewpoints, to recommend or deny them opportunities, and to influence nearly every aspect of their lives in work, school, health care, insurance, finances, in their personal relationships, and even to shape the society – the culture, politics, public health, and civil discourse – that they participate in every day.
Growing concerns around data privacy: DNA testing, social media, remote work, and beyond
This January 28, nations throughout the world marked the sixteenth annual Data Privacy Day. The occasion was first commemorated by the Council of Europe in 2007 in response to concerns about the privacy-eroding impacts of automated data processing. In the sixteen years since, those concerns have, if anything, grown – and it’s become harder than ever for an average person to understand what’s happening to their data, and why they should care.
Part of the challenge is that, all too often, the ways personal data impacts individuals and society are hard for the average user to see. Few users appreciate all the risks of participating in modern online life; and even those who do often don’t know what practical steps they can take to strike a balance between privacy, convenience, entertainment, and utility that matches their personal comfort level and concerns.
Despite this murky sense that it’s hard to know exactly what’s going on underneath the algorithmic hood, we all intuitively sense that data-driven technologies are reshaping nearly every aspect of everyday life. It’s no wonder, then, that so many internet users sympathize with sentiments like that of a former tech CEO, who wrote:
“Privacy is dead, and social media holds the smoking gun.”
This widespread feeling that people don’t trust big tech with their privacy isn’t limited to particular platforms or technologies. According to a 2020 survey, three-quarters of adults believe they have little control over personal information collected about them; 86% of adults are very concerned about how free online services use their data; and nearly seven out of ten are very concerned about the security of online shopping and the privacy and security of devices like smartphones and tablets, fitness trackers, smart speakers, and other internet-enabled devices. And with good reason.
Not long after internet use became ubiquitous in commercial and social life for everyday people, examples of online harms began to mount.
A quick recap of the past twenty years helps put the challenge in context. In 2003, the human genome was mapped, paving the way for the inexpensive DNA testing that is now used for everything from mapping family trees and assessing ethnicity to making medical diagnoses and even matching dogs to the errant poop their owners have failed to clear from community-owned land – all raising privacy concerns along the way.
In 2004, Facebook was launched, evolving in less than twenty years to a platform used by more than three billion people worldwide and underpinning a tech business empire of social media and chat messaging functions that have reshaped small business advertising, the news industry, and content promotion, and – for users of WhatsApp – formed the backbone of communications in countries around the world. In 2007, the first iPhone was released, leading to a digital environment in which today more than 300 million users in the United States and 6 billion subscribers worldwide carry palm-sized, and staggeringly powerful, computers with them at nearly all times of day.
In the years since then, the number and variety of smart devices in our homes have expanded to include not just TVs, but video doorbells and security systems, smart toilets, stoves and thermostats, children’s toys and nanny-cams, and voice-activated digital assistants that serve up music, jokes, and weather reports, and listen attentively for our every command. Schools have been transformed with digital assistants and facial recognition cameras in classrooms and hallways, video-enabled proctoring software for remote test-taking, and – of course – through the pandemic, online classes on Zoom.
The workplace has been similarly transformed. Employers expect 24/7 email responsiveness, and can track their staff’s location and activity on work-issued smartphones; RFID-enabled employee ID badges track a worker’s location in the workplace in minute detail; video surveillance monitors against theft; and keystroke logging and webcams attempt to make sure that personnel working from home aren’t wasting company time. And information about us is being used on the job with an ever-expanding set of consequences.
Artificial intelligence algorithms trained on personal data continue to reveal disturbing civil rights implications in everything from job applicant screening to ride-share services to real estate transactions. Even our interpersonal relationships are changing, as “spouseware,” revenge porn, cyberstalking and sextortion, and cyberbullying allow deeply malicious use of technology to harass, threaten, and intimidate current and former spouses, partners, and love interests. Society as a whole is bearing the brunt of this: we needn’t look any further than the collective impact of the viral spread of election-related hoaxes, COVID disinformation, extremism, and conspiracy theories of all kinds.
All of these risks and harms are made possible by a few common threads: the rise of cheap data storage, the steady growth of computer processing capacity, and the seemingly-unlimited troves of personal information that are created, collected, processed, analyzed, and used by the myriad platforms, devices, systems, and apps that each of us interact with in our daily lives.
The (marginal) impact of data privacy regulations
In an early attempt to stave off some of the pitfalls of the internet, California passed the first data breach notification law in 2003, requiring holders of certain kinds of information – Social Security numbers, payment card details, and the like – to notify individuals if their protected information was accessed by someone unauthorized, or was otherwise compromised. At the same time, federal law was imposing data privacy and security obligations in particular contexts, such as the healthcare and financial services sectors.
By 2018, every major jurisdiction in the U.S. had a data breach notification law, and similar data breach obligations had been incorporated into the General Data Protection Regulation (GDPR), which applied directly throughout the European Union and set a benchmark that influenced privacy and data protection laws around the world. Jurisdictions such as California and the EU also enacted privacy laws requiring that more detailed information be provided when obtaining user consent for the collection and processing of certain kinds of information.
Online harms are hardly limited to data breaches, however, and the rise of new privacy laws has done little to assuage the concerns that people have about how their information will be used.
According to the Internet Crime Complaint Center (IC3), a unit of the Federal Bureau of Investigation, over 100,000 senior citizens in the U.S. fell victim to internet-enabled scams and crimes that resulted in over $1 billion in harm in 2020 alone.
Despite lawmakers’ and security professionals’ attempts to keep current, the range of online threats continued to grow. The period between 2015 and 2022 saw unprecedented growth in cybersecurity incidents like ransomware, business email compromise, and cyber-induced wire fraud schemes that don’t necessarily compromise personal information, but that wreak considerable havoc on the lives and fortunes of individuals and corporations. The use of stalkerware – typically deployed illegally to track and monitor the activities and locations of current or former intimate partners – has grown worldwide and presents an ever-growing challenge in domestic violence cases as well as in divorce and custody proceedings.
Some types of online harms have become so commonplace that we’ve almost become inured to them, with companies arguing in court that they shouldn’t be held liable for data breaches because breaches have become so common that no one can be sure whether any identity theft or other harm resulted from one breach rather than another.
And 2016 marked a watershed year in the expansion of online disinformation and malign influence campaigns, as Russian intelligence services were accused of using personal information and cyber means to attempt to interfere with democratic elections around the world, and their private sector collaborators were criminally charged. At the same time, social media giants like Facebook – now with some three billion users, well over a third of the global population – were accused of siphoning off information for sharing with political consulting firms like Cambridge Analytica, and of allowing conspiracy theories, extremism, and other harmful content to spread like viral wildfire.
As noted in legislative hearings in the United States, the United Kingdom, and elsewhere in the world, at best these platforms turned a blind eye to the spread of misinformation and disinformation online. At worst, they took cynical advantage of the opportunity to increase corporate profits through individually tailored advertising, causing company revenue to soar even while anti-vaxxers contributed to the public health crisis of a global pandemic by insisting that the baseless information they’d seen online – rumors that COVID vaccines would cause infertility, or cause magnets to stick to a person’s body, or serve as a vector for injecting 5G wireless signal nanochips – was true.
All of these harms depend on the collection of vast amounts of personal information from unsuspecting users of digital platforms, devices, and services. All of them also depend on unscrupulous actors using that information, and the detailed personal profiles that result, for microtargeted messaging to manipulate, influence, scam, cheat, or disadvantage individuals in some way – often in ways that aren’t apparent to the individual victim, or to society as a whole, until after the fact, after the harm has been done.
Against this backdrop of the personal, financial, and societal harm caused by malicious use of personal information, the privacy and protection of personal data has never been a more urgent or important task.
Communicating the consequences of digital citizenship
Although it’s true that technology is creating new pressure on personal boundaries nearly every day, we have an opportunity to keep the privacy glass half-full.
We live in a time when legislatures around the world are passing privacy and data protection laws, when judges are approving lawsuits addressing invasions of privacy through use of personal data, and when everyday people of all ages and around the world are taking increasingly proactive approaches to trying to understand how information about them is being generated, collected, and used.
Privacy isn’t dead yet. But there’s no denying that the contours of everyday privacy and data use are constantly changing in ways that many people don’t see. And much like effective cybersecurity requires approaches that deal with people, processes, and technology, any effort to mitigate individual and societal privacy risks requires more than just laws and regulations.
It also requires a concerted public awareness campaign to help ordinary people around the world better understand the risks and consequences of digital citizenship: what can go wrong with online interactions and data-intensive technologies; how to understand the privacy risks of various platforms, apps, devices, and services and make informed choices among them; and how to take sensible precautions – consistent with their own comfort level and a well-informed understanding of the risks – in how they choose to use, or opt out of, various categories of data-intensive living.
This is a perfect time for the government, the private sector, academia, and not-for-profit organizations to launch a concerted and holistic public education and awareness campaign. It could go a long way in mitigating the downsides of data-driven tech, making all of us better prepared to make privacy choices that we’re individually at ease with, and making society more resilient to the swirl of online bias, misinformation, cyber crime, and other ills that arise at the intersection of data privacy, security, and everyday life online.
While government agencies have produced a great deal of content, from cybersecurity warnings to brochures on identity theft, these materials aren’t always easily digestible, and individuals have to know they exist in order to seek them out.
A more proactive, and perhaps more effective, approach would be for educators, privacy advocates, government regulators, and others to create and deliver a modern version of the classic public service announcement: a series of holistic, entertaining, accessible, and comprehensive programs for raising awareness and educating the public at large about the privacy, data security, social, and cultural hazards of data-driven technologies. The hazards of online life are wide-ranging, and the topics addressed by digital literacy programs should be as well – helping people better understand how to protect themselves online, and how to avoid taking actions, even inadvertently, that could harm others online. To do this, the subjects covered should include, at a minimum, helping people understand:
- Data breaches, identity fraud, and identity theft, including:
  - Password management practices,
  - The role of social engineering in online scams, and
  - Other common scams for gathering personal identity and financial information;
- Core concepts relating to the business model of online platforms, such as:
  - “If you’re not paying for the product, you are the product”;
  - Platform revenue is directly correlated with time spent online;
  - “If it enrages, it engages”;
  - The way that features like automatic scroll are designed to keep eyeballs on screens; and more.
They should also include helping people understand:
- The ways in which personal information is used to support targeted advertising;
- How to assess the likely security of an online shopping platform or site;
- How to understand what privacy settings they have enabled in commonly used social media platforms, apps, and devices;
- How to assess the likely credibility of information online and identify – and avoid spreading – misinformation and disinformation;
- The roles of data brokers and data aggregation, online behavioral profiles, sharing and sale of personal information, and targeted advertising;
- The difference between first-party and third-party cookies, how each gathers information, and where that data is likely to go;
- How ad blockers, cookie settings, and other browser settings can give users an added measure of control for the privacy-related issues they care about;
- The considerable data collection that still happens in “private” or “incognito” mode on most browsers, and how to use browsers and search engines that offer greater degrees of privacy protection and anonymity;
- How to identify secure and insecure websites;
- The myriad ways location information is collected, used, and shared, including a high-level understanding of the steps that can be taken to limit that collection – as well as the mobile device location collection that individual users can’t control;
- The flaws, fallibility, and biases that are so often present in artificial intelligence algorithms;
- The risks associated with biometric information collection;
- The impact of DNA testing on one’s own privacy as well as that of relatives;
- The ways in which still images can be manipulated into deep fakes;
- The impact of cyberstalking;
- The ways in which digital citizenship ought to include respect and concern for the privacy rights of others and the well-being of society;
- The ways in which micro-targeting is used to influence political and social opinions;
- The types of cognitive bias and critical thinking habits that make all of us susceptible to scams, phishing attacks, and online disinformation; and more.
As important as the topics are, it’s perhaps equally important to consider how the information will be delivered, and to develop public education and awareness campaigns tailored to different audiences and delivered through different media.
For example, in the wake of the 2016 U.S. presidential election, researchers examining online behavior concluded that senior citizens were more likely to be duped by online fake news, with Americans over the age of 65 sharing nearly seven times as many articles from fake news domains as younger internet users.
There’s clearly an urgent need to reach out to older Americans via the means they’re most likely to trust, such as traditional television public service announcements (PSAs) that can be broadcast during local television news shows and during prime time network and cable television programming. Digital literacy campaigns are already a feature of many public libraries and often include content for seniors as well as other age groups; those efforts can be expanded with further funding and support. Educational efforts targeted towards senior citizens can also be produced as modules – like the beginner’s guide to mobile technologies produced for senior citizens in Belarus – and made available through senior centers, elder care and other long-term care facilities, and through faith communities.
In addition to helping raise the overall level of digital literacy, public education and awareness campaigns for all ages and demographics would benefit from addressing how to talk with people of differing views, and in different groups, about fake news, conspiracy theories, and other harmful online content.
Articles on “How to talk politics with your relatives at Thanksgiving” have become an annual tradition, with an increasing number of stories addressing “How to talk to your grandparents about fake news.” Part of any public awareness campaign should be practical tips and skills for talking with friends and loved ones who’ve fallen victim to online misinformation, or who have started believing in baseless conspiracy theories that are spreading online.
There’s also a need for more studies in this area, as researchers are still working to understand just how people are impacted by misinformation in particular contexts, such as in healthcare, and how best to counteract that inaccurate or deliberately false messaging.
For younger age groups, it’s easy to envision a Schoolhouse Rock for Digital Literacy or Online Life. Generations of American children grew up learning the basics of civics from short musical videos like “I’m Just a Bill,” grammar from cartoons like “Conjunction Junction,” and basic math functions from titles like “Figure Eight.” Their power lies in their tight focus, catchy tunes, memorable lyrics, and charming animation. A similar approach for a new era – a Schoolhouse Rock for digital citizenship – could make a tremendous contribution by making accessible and memorable information that, like government operations or grammatical structure, strikes many people as dry or tedious or hard to recall, until presented in ways that are not just digestible, but delightful.
This kind of content could air on TV, be played on streaming services during previews and ad breaks, circulate on YouTube, or appear in news feeds on social media – in other words, it could appear everywhere that online users, of all ages, are found.
And, of course, for students enrolled in K-12 and post-secondary education, digital citizenship and online literacy should be incorporated into the regular curriculum at every grade level. Efforts to help school children spot deep fakes and fake news have a track record of success in Finland, whose approach could be a model for similar programs elsewhere in the world.
These programs should also consider how to address digital literacy and disinformation differently across platforms. According to one study, many platforms serve as a petri dish for the global growth of online conspiracy theories, while one major platform (Twitter) helps reduce them. The ways that each major platform contributes to virality should inform how those platforms’ inherent features can be used to support digital literacy and to counter disinformation. And organizations – government entities and non-profit foundations – that provide funding for the arts should create grant opportunities for writers, musicians, playwrights and performance artists, graphic novelists, illustrators, and others to create content that can deliver digital literacy messaging in a whole range of venues, media, and contexts.
Despite the galloping pace of data-driven technologies, Data Privacy Day gives us an opportunity to reflect on practical steps that the global community can take to harness the best opportunities those technologies offer – convenience, entertainment, innovation, education – while making a concerted investment of time, energy, and resources in efforts like public awareness and education campaigns that can help ensure Data Privacy Day will still be meaningful sixteen years from now.