
Data Privacy during Conflict: Part II

Harmful uses of data during violent conflict and political unrest





High-quality data and information are vital to making good decisions. As such, it shouldn’t come as a surprise that data processing activities of all sorts are key to any military, strategic or political operation during a conflict.


As you can probably imagine, the scope of different types of data that come into use for conflict-related purposes is vast and, of course, includes many kinds that fall within the definition of ‘personal data’, i.e. data that relates to a real person and can be used to directly or indirectly identify them. Despite this, much of this personal data falls outside the scope of data privacy regulations and human rights laws because of how, why, and by whom it’s collected, or it may be exempt from such rules under national security or public health exceptions. Moreover, many countries haven’t yet passed comprehensive data privacy laws, and even though much of the human rights framework (including provisions related to the right to privacy) applies universally, it isn’t always enforced and safeguarded effectively, especially in countries going through violent conflict and political unrest.


As a result, (personal) data and data-driven technologies are used as sources of information, tools for decision-making, means of defence, and weapons of war. In many ways, however, these uses violate the right to data privacy, as well as other related digital rights. While data privacy is far from the only, or the most significant, right to be disregarded during a conflict, its violation holds alarming implications for affected people, both during and beyond the conflict at hand. This insight sheds some light on the detrimental uses of personal and sensitive data during conflicts.


Surveillance


An increase in the use of surveillance technology and data collection for surveillance purposes is common in conflict-affected countries. During political unrest and violent conflict, alarm levels are heightened on all sides. Where countries invoke martial law or a state of emergency, intrusive surveillance activities may, in fact, become officially legal for the time being. It makes sense that during a conflict all involved parties feel an increased need to keep an eye on what’s going on in their immediate surroundings, as well as on what enemy groups and other adversaries are up to. However, increased surveillance to detect suspicious activity almost inevitably involves collecting data on civilians and monitoring their behaviour and activities. These intrusive uses of data pose significant challenges to individuals’ data privacy.


For example, upon Russia’s invasion of Ukraine in February 2022 and the resulting armed conflict, the infamous surveillance company Clearview AI offered its services to the Ukrainian government. Clearview AI uses facial recognition algorithms, a form of artificial intelligence, to collect all publicly available photos on the internet and store them in its database. Access to this database can be bought, allowing clients, including law enforcement and military agencies, to run suspects or persons of interest through the system to identify matches with photos in the database. Clearview AI collects its image data indiscriminately and without obtaining individuals’ consent. In theory, this may sound like an appealing data source for surveillance purposes during a time of conflict, when time is of the essence and any advantage over the enemy is invaluable. At best, however, this technology will not only target enemy soldiers; it will also process data on civilians who are already vulnerable because of the war and whose rights shouldn’t be violated further. At worst, the database fuelling the system could be exploited or manipulated by (Russian) adversaries, resulting in the accidental targeting of civilians or friendly troops, or in flawed outputs from the system.
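To make the mechanics concrete, here is a schematic sketch, not Clearview’s actual code, of how matching against a scraped face database typically works: each photo is reduced to a numerical ‘embedding’, and a query face is compared against every stored embedding to surface likely identity matches. All names, dimensions and thresholds below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a scraped database: one 128-dimensional "face embedding" per photo.
database = rng.normal(size=(1000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def match(query, database, threshold=0.9):
    """Return database indices whose embeddings closely resemble the query face."""
    query = query / np.linalg.norm(query)
    scores = database @ query                    # cosine similarity with every stored photo
    hits = np.where(scores >= threshold)[0]
    return hits[np.argsort(scores[hits])[::-1]]  # best matches first

# A slightly noisy "photo" of the person stored at index 42 still matches their entry:
query = database[42] + rng.normal(scale=0.02, size=128)
print(match(query, database))                    # -> [42]
```

The point of the sketch is that matching is indiscriminate by design: every scraped face, civilian or combatant, sits in the same index and is searched on every query.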


Increased surveillance can also occur as a result of humanitarian organisations collecting vulnerable populations’ data to deliver vital aid. For example, displaced people and those relying on food aid in conflict-affected regions are often required to share their biometric and personal data with humanitarian organisations’ databases if they want to seek shelter and food. While this helps humanitarian workers avoid fraud, i.e. preventing people who aren’t in need from exploiting the aid deliveries or displacement camps, it causes multiple ethical problems. Besides not giving people a real choice in whether they want to give away their data when seeking help (which means that their consent isn’t freely given), the existence of such sets of highly sensitive data can make people’s lives even more precarious. For example, if such datasets are leaked, manipulated, or stolen by a conflict party or adversary, affected people could face persecution (as discussed below) or have their data altered in a way that affects their access to help and their human rights.


Information security breaches of humanitarian organisations like this happen more often than we may think; a recent, large-scale breach was discovered by the International Committee of the Red Cross in January 2022 (which we discuss in the previous part of this insight).


Censorship


While censorship and suppression of free speech pose great problems in many countries around the world, especially in authoritarian regimes, they commonly emerge or intensify during a conflict. Reasons for censoring people’s interactions and shutting down websites, apps, or the internet entirely include:

  • Blocking enemies’ (and civilian populations’) access to communication and information channels

  • Minimising the spread of mis- and disinformation that could inflict more violence or cause panic

  • Curbing free speech of those with opposing opinions, including advocates fighting for human rights, peacebuilders, and journalists

  • Strategically using shutdowns to bargain with other conflict parties or with international actors

  • Preventing evidence on atrocities and humanitarian crises from being shared with the world


In Yemen, for example, the Houthi rebels took over the country’s main internet service provider, YemenNet, in 2014, and another ISP, AdenNet, in 2018. This allows the group to censor information going in and out of the region, and even to cause large-scale internet shutdowns when needed. Houthi rebels use these shutdowns for strategic purposes: they control areas covering the majority of the country’s population and, therefore, hold bargaining power over concerned international mediators and humanitarian organisations, as well as other parties to the conflict.


Social media platforms, which serve as the principal information channels in some countries affected by armed conflict, are also frequent targets of censorship. During controversial elections in Burundi in 2020, for example, the government silenced people on and around election day by blocking communication channels like Twitter, Facebook, YouTube and WhatsApp. More generally, using long-term internet shutdowns or social media censorship to curb free speech during political unrest or crisis was a common strategy in many countries during Covid-19 lockdowns.


Persecution & becoming a target


During most conflicts and situations of political unrest, conflict parties (e.g. state governments, opposition groups, rebel groups, military juntas, etc.) are not only faced with one another but also with the public opinion of their citizens and the activities of human rights advocates, journalists, and organisations supporting and protecting the civilian population. In some instances, a conflict or unrest might even be rooted in a history of marginalised groups being targeted, mistreated, and persecuted by those in political power for belonging to a certain religion, ethnicity, race, gender, or sexuality.


In either case, personal data is often used to specifically identify and target those persons and groups that a conflict party deems adversaries. The data that comes into play here can have any kind of origin: some may be obtained through official records and law enforcement activities (such as police records, facial recognition technology, and biometric information like fingerprints). Other types of data are gathered online (e.g. through social media channels) and include details on people’s behaviour, locations and other sensitive information. Government authorities in some countries also enjoy access to private companies’ information on people, if they can prove that their concerns fall within ‘national security’ or ‘public safety’ exceptions.


A specific category of data that has been used to gain valuable information on individuals is so-called metadata, i.e. the ‘data about the data’. Metadata isn’t usually very well protected under data privacy laws, yet it can reveal sensitive details about a person, and is therefore constantly at risk of being stolen. It comes into play in the context of instant messaging apps, social media, online banking, mobile money, and cash transfer programmes (CTP); all of which are increasingly important for people in conflict-affected countries seeking communication with the international community.
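As a small illustration of how revealing metadata can be, the sketch below reads the EXIF metadata embedded in an ordinary photo using Python’s Pillow library; the file name is hypothetical. Camera model, timestamps and, if present, GPS coordinates travel with the image unless they are deliberately stripped.

```python
# Minimal sketch: reading the metadata that travels inside a photo.
# Requires the Pillow library; "shared_photo.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("shared_photo.jpg")
exif = img.getexif()

# General metadata: camera make/model, software, capture time, ...
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)

# GPS metadata, if present, pins the photo to a physical location.
gps = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
for tag_id, value in gps.items():
    print(GPSTAGS.get(tag_id, tag_id), value)
```

A single image shared over a messaging app can therefore disclose where and when its sender was, even if the picture itself shows nothing identifying.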


As mentioned above, personal data collected by humanitarian organisations, such as fingerprints and iris scans used to identify displaced people and those in need of aid, also risks being used for persecution and targeting if accessed by the wrong people. For example, in Yemen, due to the devastating impact of the ongoing war, 80% of the population relies on humanitarian aid to survive. In 2019, when the World Food Programme (WFP) decided to introduce biometric scans to avoid fraud, Houthi rebel groups demanded access to the data of the civilians benefitting from food distributions. Out of fear of what the rebel groups would do with it, the WFP did not agree to share this kind of sensitive data and, ultimately, abandoned the idea of collecting biometric data altogether.


In 2021, when the Taliban took over control in Afghanistan, they captured US military and government devices that contained biometric and other sensitive data on Afghan people who’d supported and fought alongside the previous government and stationed US military personnel. This data could be used to identify individuals who, in the eyes of the Taliban, might be adversaries, and many experts on the conflict have expressed grave concern at this possibility.


Becoming a target and facing persecution aren’t unique to wartime, but they become more frequent during conflict. This happens for a combination of reasons, not least because otherwise available (human rights and legal) protection and accountability mechanisms aren’t observed well enough, or are deliberately ignored, during a conflict. As a result, many people considered ‘adversaries’ might face threats, physical and mental harassment, abuse, arbitrary detention, torture, kidnapping, and, in the worst cases, being killed.


Where state institutions themselves target groups or individuals among their citizens, persecution is often underpinned by repressive laws and regulations, which allow broad criminal provisions to be invoked under the pretence of ‘national security’ or ‘public safety’ exceptions to otherwise applicable human rights safeguards. As a result, these activities can have a chilling effect on free speech, i.e. deterring people from exercising their rights to freedom of expression and assembly. Often this happens alongside the violation of other fundamental rights, including the right to a fair trial of those who have been arrested for their activities.


While it is true that communication and information channels make it possible, and easier, for people to exercise their right to free speech in the first place, the use of (legally or illegally obtained) personal data to target those individuals is a dangerous example of what authorities do, and can do, to crack down on political opponents.


Information wars: Misinformation, Disinformation & Propaganda


If you’ve read the news about ongoing conflicts, you might have come across the term ‘information war’ before. Information warfare has become a crucial part of modern warfare strategies. In a globalised world, not only a country’s own citizens are influenced by what information they receive (or don’t receive), but so are the citizens of other countries. Here, information is used to confuse, demoralise, mislead, or manipulate other parties to a conflict, affected populations, and members of the international community. In addition to spreading information, ‘information warfare’ also refers to any information gathering, analysis and use for tactical purposes, or to denying opponents the ability to do so themselves (e.g. through DoS attacks, or internet shutdowns – see part one of this insight).


While this way of using information and communications technologies to gain advantages over other conflict parties does not necessarily involve the spread of false information or propaganda, in most cases it does.


The spread of misinformation, disinformation and propaganda in modern times is vastly facilitated by the use of social media, instant messaging and other information and communication channels, which enable rapid, transnational dissemination of information. Unlike traditional media outlets, such as newspapers and TV/radio broadcasters, many of these channels also don’t normally verify or fact-check ‘news’ before it goes out.


Definition — Misinformation, Disinformation and Propaganda

Misinformation is the spread of false information without malicious intent.

Disinformation is the deliberate dissemination of false information.

Propaganda is information used to promote ideas or activities, often in a political or military sense. It has often been associated with ‘manipulative’ information, i.e. information which is highly selective, uses loaded language, and/or may be partially exaggerated or false.


Mis/disinformation can be further amplified by AI-powered tools, such as the so-called ‘recommender’ algorithms employed on online platforms, which suggest, filter and curate content on people’s feeds based on their previous behaviour and preferences. These algorithms have been shown to create so-called ‘filter bubbles’ and ‘echo chambers’, which shape and radicalise people’s opinions, or influence their behaviours. Some of these harmful outcomes are merely by-products of the existence of AI-powered tools online. Others, however, result from the deliberate exploitation of the way algorithms work. For example, in recent years there’s been a rise in so-called ‘cyber troops’: teams of individuals who use online platforms, including social media networks, to manipulate public opinion, including in the context of violent conflict and political unrest, and who exploit the algorithmic ‘architecture’ on which platforms are built.
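To illustrate the dynamic, the sketch below is a deliberately toy model, not any platform’s actual system: it ranks posts purely by how often a user has engaged with each topic, so after a few clicks on one kind of content, the feed converges on it. All post data is made up.

```python
# Toy engagement-driven recommender: past clicks decide what is shown next.
from collections import Counter

posts = [
    {"id": 1, "topic": "war_propaganda"},
    {"id": 2, "topic": "fact_check"},
    {"id": 3, "topic": "war_propaganda"},
    {"id": 4, "topic": "local_news"},
]

def recommend(posts, click_history, k=2):
    """Rank posts by how often the user engaged with their topic; keep the top k."""
    topic_scores = Counter(click_history)
    return sorted(posts, key=lambda p: topic_scores[p["topic"]], reverse=True)[:k]

# After a few clicks on one kind of content, the feed narrows to it:
history = ["war_propaganda", "war_propaganda", "local_news"]
print([p["id"] for p in recommend(posts, history)])  # -> [1, 3]
```

Real recommenders are vastly more sophisticated, but the feedback loop is the same: what you engaged with yesterday determines what you are offered today, which is exactly the mechanism coordinated campaigns try to feed.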


Bots and fake accounts are other ways through which mis/disinformation is spread online, particularly on social media. They are often powered by AI systems, such as algorithms that can create, post, and share information without repeated human input, and are particularly useful for spreading information rapidly. Employing large networks of such bots is common practice, for example, as part of Russian state-sponsored disinformation campaigns. In February/March 2022, disinformation monitoring platforms and social media networks themselves discovered a steep increase in the use of fake accounts spreading Russian propaganda and negative content, in particular posts including the hashtag #istandwithrussia.
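One simple signal that monitoring platforms look for, shown below in a deliberately simplified sketch with made-up data, is many distinct accounts posting identical text within a short time window; this pattern is typical of bot networks and rare among genuine users.

```python
# Simplified coordination check: identical text from many accounts, close in time.
from collections import defaultdict

posts = [  # (account, text, unix timestamp) -- illustrative data only
    ("acct_01", "Proud to say #istandwithrussia", 1000),
    ("acct_02", "Proud to say #istandwithrussia", 1003),
    ("acct_03", "Proud to say #istandwithrussia", 1005),
    ("acct_99", "Fact-checking today's claims",   1200),
]

def flag_coordination(posts, min_accounts=3, window=60):
    """Flag texts posted by at least `min_accounts` accounts within `window` seconds."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))
    flagged = []
    for text, hits in by_text.items():
        accounts = {a for a, _ in hits}
        times = [t for _, t in hits]
        if len(accounts) >= min_accounts and max(times) - min(times) <= window:
            flagged.append(text)
    return flagged

print(flag_coordination(posts))  # -> ['Proud to say #istandwithrussia']
```

Production systems combine many such signals (account age, posting cadence, network structure), but copy-paste amplification bursts remain one of the most visible fingerprints of bot campaigns.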


In recent years, there’s also been a rise in so-called deepfake technology, i.e. the use of artificial intelligence to generate audio-visual data that resembles humans so closely it’s increasingly difficult for people to verify its authenticity. Deepfakes can be used to generate visual and/or audio material that fakes incidents, speeches, and other evidence, which can, in turn, help to justify a conflict party’s own activities, incite their supporters, or confuse others involved in the conflict. For example, in March 2022, during Russia’s invasion of Ukraine, a deepfake video surfaced depicting Ukraine’s president addressing Ukrainian citizens and telling them to stop fighting. While the video did go viral online, it was quickly identified as a deepfake. Experts are concerned at this development and have pointed out that the rapid rise (and refinement) of deepfake technology could have worrying implications for conflict situations.


What do these data-driven technologies that amplify mis/disinformation have to do with data privacy? AI systems, such as the above-mentioned recommender algorithms, language models, and deepfake technology, are trained and fuelled by personal data. Training data (such as text, images, audio material, and videos) for these tools is often scraped from online sources, including social media data and website content. While people do agree to a certain amount of exposure when putting their data out into the world publicly, and while most of the data collected for training purposes is rendered ‘unidentifiable’, it can hardly be said that all of these activities comply with data privacy law. Additionally, as we discuss in part one of our insight series on Big Data, AI & algorithms, the way in which many AI systems work, and what they’re used for, simply cannot or does not meet data privacy standards.


The problem of misinformation, disinformation and propaganda online is complex and requires cross-disciplinary approaches to solve. As the examples above make obvious, this problem is even more acute, and more urgent to address, in the context of violent conflict and political unrest.


Manipulation of Datasets


While we’ve looked at the ways in which data can be collected (or stolen) and used for malicious purposes, another way of exploiting data is to manipulate and disrupt the datasets of governments, humanitarian organisations or international institutions. This could be done using AI-powered malware that alters information or injects false data into existing datasets. The purpose of such an attack could be to destroy the integrity of datasets and disrupt their information life cycles; to modify content and information used to inform important decisions or to help people in need; or to sow doubt and mistrust in people towards the data that their organisations are using.
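One basic defence against this kind of silent tampering is to record a cryptographic fingerprint of a dataset when it is published and re-check it before the data is used. The sketch below shows the idea with SHA-256; the file name is hypothetical, and in practice the baseline hash must be stored somewhere the attacker cannot also modify.

```python
# Minimal integrity check: detect any change to a published dataset file.
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Stream the file in chunks so large datasets don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# At publication time, the organisation records the hash out-of-band:
baseline = sha256_of_file("beneficiaries.csv")  # hypothetical dataset file

# Later, before relying on the data, anyone can verify it is unchanged:
if sha256_of_file("beneficiaries.csv") != baseline:
    raise RuntimeError("Dataset has been modified since it was published")
```

A hash check only detects wholesale modification after the fact; it says nothing about whether false records were injected before the baseline was taken, which is why provenance and access controls matter just as much.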


For example, GIS data and satellite imagery have become vital sources of information for creating and maintaining situational awareness during conflicts and humanitarian crises. A lot of the tools used for these purposes are powered by artificial intelligence, machine learning and deep learning techniques. The same techniques that identify buildings, obstacles, movements, and signals within this data, however, can also be used to artificially create objects in the data to confuse and mislead those relying on the dataset.


Datasets that have been altered and manipulated by malware are difficult to detect and restore, and the results can gravely endanger those affected by faulty decisions based on this data. Additionally, false information injected into the reports and datasets of internationally trusted bodies could alter the relationship between truth and evidence.


Conclusion


This article isn’t meant to comment on the usefulness of data in the context of furthering anyone’s gains during violent conflict and political unrest. Rather, it’s meant to highlight some of the many deliberate, incidental, and unintended harms to individuals that result from the use of personal data during these times of instability and danger.


While violations of data privacy might not be the first thing people associate with armed conflict, they should by all means receive enough attention to prevent or mitigate unwanted consequences down the line. This insight laid out some of the most prominent ways in which data privacy is impacted by conflict and political unrest, and how such violations can grow into substantial risks to people’s physical safety.
