Jayalakshmi Sankar

Treading the Tightrope between Migrant Rights and Digital Border Technologies




Introduction


In September 2023, the Office of the United Nations High Commissioner for Human Rights (“OHCHR”) released a report on the human rights perils of the digital border surveillance technologies adopted by states. This is not the first time international organisations, civil society groups and scholars have expressed concern about the use of certain surveillance technologies at states’ borders and the severely adverse impact they have on the lives of migrants and refugees.

International law is based on the principle of mutual respect for the sovereignty and territorial integrity of all States. A fundamental aspect of this is the right of every State to secure its borders. In Nishimura Ekiu v. United States, the US Supreme Court endorsed this sentiment, observing that the power to forbid the entrance of foreigners into its territory, in all cases it deems fit, is inherent in the sovereignty of a State. Building on this, the recent past has seen a steep increase in the securitization and militarization of international borders. Budget allocations for immigration enforcement and border control have risen steadily in most countries, especially in the West. For instance, the European Union’s (“EU”) border control agency, Frontex, is presently the most heavily funded EU agency. Similarly, in the 2024 Budget, US Immigration and Customs Enforcement was allocated $25 billion, higher than ever before.


This increased funding is largely being used to develop and employ various kinds of digital border technologies. Through this blog article, the author intends, first, to expound upon the various kinds of digital border technologies employed; second, to critically examine the adverse impact of these technologies through the lens of international humanitarian law and human rights law; and third, to discuss the way forward and the safeguards required when utilizing such technologies.


Digital Border Technologies Employed and the Question of Data


The term “digital border technologies” has been used by the OHCHR as an umbrella term for all state-employed devices used to enforce border control. A UN Special Rapporteur noted that these technologies increasingly rely upon surveillance, big data, automated decision-making, predictive analysis and other algorithmic systems. While the term is not confined to artificial intelligence (“AI”), the prevalence of and reliance on AI at borders is escalating. Although these technologies are aimed at increasing efficiency, their mandatory use raises legal and ethical problems.


For instance, facial recognition systems use algorithmic techniques to identify facial features and convert them into mathematical templates, which are compared against the faces stored in a database. They are usually deployed to confirm and store an individual’s official identity at the border and verify their documentation, or to check for matches with “watch lists” of wanted individuals. AI is trained on existing data and, as a result, reflects human biases and stereotypes, perpetuating and exacerbating racial inequalities. The FBI’s facial recognition technology, used to compare suspects with mugshots and driver’s license images, had a very high rate of misidentifying Black people through false positives. In Israel, an experimental facial recognition technology called Red Wolf is being used by the government to scan Palestinians’ faces without their consent and store them in surveillance databases. This data has been used to deny Palestinians entry into occupied territory and even to detain them.
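To make the mechanism concrete, the sketch below illustrates, in simplified Python, how a watch-list check of this kind typically works: a face is reduced to a numeric template, compared against stored templates, and flagged when a similarity threshold is crossed. The embeddings, names and threshold are invented for illustration and do not reflect any agency’s actual system.

```python
# Illustrative sketch only: a simplified watch-list check using invented data.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face templates; values near 1.0 indicate a close match."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(template, watchlist, threshold=0.8):
    """Return every watch-list identity whose similarity exceeds the threshold.
    A poorly chosen threshold produces false positives (misidentifications),
    and error rates are known to be uneven across demographic groups."""
    return [name for name, stored in watchlist.items()
            if cosine_similarity(template, stored) >= threshold]

# Invented 128-dimensional templates standing in for real face embeddings.
rng = np.random.default_rng(0)
traveller = rng.normal(size=128)
watchlist = {
    "person_A": rng.normal(size=128),                         # unrelated face
    "person_B": traveller + rng.normal(scale=0.1, size=128),  # near-duplicate
}
print(check_against_watchlist(traveller, watchlist))          # likely ['person_B']
```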


Another instance is the EU’s iBorderCtrl, an AI lie-detector system for Schengen border management. It analyses the micro facial expressions and non-verbal behaviour of travellers and migrants to detect deceptive answers. In parallel, it collects information from social media accounts and law enforcement databases to assign them a risk score. Lie detectors are notorious for misconstruing stress or nervousness as lies. This AI makes assumptions about an individual’s criminal potential, in direct violation of the right to a fair trial and the principle of audi alteram partem. Additionally, it contravenes principles of the EU’s own Charter of Fundamental Rights, such as the rights to dignity, respect for private life and communications, and protection of personal data. For these reasons, and given their unreliability, lie detectors are not permissible as evidence in criminal trials in some States.
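The hypothetical sketch below shows the kind of opaque scoring such a system could perform: a behavioural “deception” signal is merged with flags drawn from external databases into a single number that the traveller never sees and cannot contest. The weights and inputs are assumptions made purely for illustration; they are not drawn from iBorderCtrl’s actual design.

```python
# Hypothetical risk-scoring sketch; weights and inputs are invented.
from dataclasses import dataclass

@dataclass
class TravellerProfile:
    deception_score: float   # 0.0-1.0, inferred from micro-expressions
    database_flags: int      # matches found in law enforcement databases
    social_media_flags: int  # posts deemed "suspicious" by scraping

def risk_score(p: TravellerProfile) -> float:
    # Arbitrary, undisclosed weights: the affected person has no way to
    # challenge how each factor is counted against them.
    return (0.6 * p.deception_score
            + 0.25 * min(p.database_flags, 4) / 4
            + 0.15 * min(p.social_media_flags, 10) / 10)

# A nervous but innocent traveller: stress is read as deception.
nervous_traveller = TravellerProfile(deception_score=0.7,
                                     database_flags=0,
                                     social_media_flags=1)
print(f"risk score: {risk_score(nervous_traveller):.2f}")  # flagged despite no record
```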


A further instance is the GPS-tagging of migrants. The UK Home Office has employed the practice of fitting GPS ankle tags or GPS-enabled fingerprint scanners on asylum and immigration applicants to prevent them from absconding or breaching their bail conditions. Not only is this practice inherently degrading and dehumanising to migrants and asylum seekers, it also generates a mammoth trail of data revealing intimate details of an individual’s places of residence, interpersonal relationships, religious allegiance and political views, which the Home Office is authorised to access for an indefinite period. This grossly violates migrants’ rights to privacy and data protection, and even their freedom of movement and association. The AI element was introduced in the form of GPS smartwatches with facial recognition technology as an alternative to the invasive ankle tag. However, this adds to the existing concerns the further risk of discriminating against and misidentifying non-white persons.
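To illustrate how revealing this trail data can be, the sketch below aggregates a handful of invented GPS pings into inferences about where a person sleeps and where they regularly spend their midday hours. The coordinates and pattern are fabricated and are not tied to the Home Office’s actual processing; the point is only that such inferences fall out of continuous location data with trivial analysis.

```python
# Illustrative sketch with invented data: inferring sensitive patterns
# from continuous GPS pings.
from collections import Counter

# (hour_of_day, latitude, longitude) pings, rounded to a coarse grid.
pings = [
    (2, 51.501, -0.142), (3, 51.501, -0.142), (4, 51.501, -0.142),  # overnight
    (11, 51.515, -0.072), (11, 51.515, -0.072),                     # recurring midday visits
    (19, 51.501, -0.142),                                           # evening return
]

def most_visited(pings, hours):
    """Most frequent location during the given hours of the day."""
    locations = Counter((lat, lon) for hour, lat, lon in pings if hour in hours)
    return locations.most_common(1)[0][0] if locations else None

print("likely residence:", most_visited(pings, hours=range(0, 6)))
print("recurring midday location:", most_visited(pings, hours=[11]))
```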


Unfortunately, these examples constitute only the tip of the iceberg of state intrusion into the personal lives of migrants and asylum seekers. The pattern among these technologies is, first, unfettered discretion to collect personal data without consent; second, the use of this data to incriminate migrants and asylum seekers, as evidence to reject their applications or to detain them; third, a lack of accuracy that often reinforces racial stereotypes and targets non-white persons; and fourth, a lack of accountability and transparency among States, with no mechanism for scrutiny and oversight.


IHL and IHRL Critique


Under International Humanitarian Law (“IHL”) and International Human Rights Law (“IHRL”), migrants and asylum seekers are to be treated as civilians, and States have a positive obligation to actively fulfil their human rights, such as equality and non-discrimination. Instead, the focus of the heightened employment of digital border technologies has been on preventing migrants from entering a State and on distancing States from responsibility, rather than on assisting migrants in reaching safe harbour. For instance, the EU, a pioneer of data protection through the General Data Protection Regulation (“GDPR”), has in its AI Act exempted migration and asylum from the protections granted against high-risk uses of AI.


Diversification in the means and methods of collecting migrants’ and asylum seekers’ data has diluted the concept of firewalling information, such that immigration authorities now access data from other state agencies and from private entities such as real estate agents, banks, and even social media applications. Migrants consequently avoid fundamental state services such as healthcare, education, and free legal aid, out of fear that law enforcement officials will obtain their data and use it to remove them from the State, resulting in a chilling effect.


There exists an inherent power dynamic between a state and a migrant who seeks citizenship or asylum in that state. This power dynamic shifts the onus of safeguarding the rights of migrants and asylum seekers onto the state. However, the problem lies in the fact that States have a strong case for continuing to employ the aforementioned algorithmic tools and digital border technologies.


1. Right to Secrecy


An inalienable element of the sovereignty of States is the power to protect their borders, especially since illegal immigration is a concern that directly affects the safety of the State and its citizens. It may be argued that the right to information about the functioning of government has attained the status of customary international law, but this right can be reasonably restricted on the grounds of national security, which States often cite to justify the lack of transparency about the working of technologies employed at their borders.

The requirements of transparency and accountability have heightened since the development of these technologies is increasingly being outsourced to private entities, whose implicit commercial interests strengthen the need for scrutiny. In 2017, the UN Working Group on the use of mercenaries drew attention to this growing commodification of immigration and border management. Border management is a sovereign function of a State tied to the protection of civilians under IHL, not the profit-making activity it has been evolving into. The EU’s Frontex is at the forefront of this practice and routinely invites industry representatives to pitch technologies for EU border control. Household names such as Accenture, Microsoft, IBM, and Huawei are involved in the border control of various States. The Guiding Principles on Business and Human Rights classify the outsourcing of immigration control activities as “high risk” and impose a greater responsibility on States to conduct due diligence on the transborder activities of private entities and to take accountability for any human rights abuses. Therefore, given the various actors involved in their functioning, States cannot perpetually hide behind a smokescreen of national security and confidentiality to justify their opacity on digital border technologies.


2. Lotus Principle


The Lotus principle, laid down by the Permanent Court of International Justice (“PCIJ”) in the Lotus Case (France v. Turkey), essentially states that “restrictions on the independence of States cannot be presumed.” In other words, anything that is not expressly prohibited in international law is permissible; in the absence of a prohibition, States are free to act as they please. This principle was reiterated by the International Court of Justice (“ICJ”) in the Nuclear Weapons Advisory Opinion, wherein the absence of a specific prohibition against the use of nuclear weapons was a factor considered in assessing their permissibility. Similarly, States could argue that there exists no prohibition against the use of digital border technologies and that, as per the Lotus principle, they are therefore free to employ them.

Notably, the Lotus principle has been criticised by various scholars as a means for States to avoid accountability and remain ignorant of the negative implications of their actions. This sentiment was expressed in the Nuclear Weapons Advisory Opinion, where Judge Weeramantry opined that the Lotus principle casts a baneful spell on the progress of international law. Additionally, even if one were to accept the application of the Lotus principle prima facie, the use of digital border technologies still raises concerns under the Martens Clause, which states that, in the absence of a specific framework to govern a matter in international law, it shall be governed by the “laws of humanity” and the “dictates of public conscience”. Going by these principles, the rights of migrants and asylum seekers are blatantly transgressed through the use of the aforementioned digital border technologies. The laws of humanity and the dictates of public conscience would require that States comply with their human rights obligations under instruments such as the UDHR, ICCPR, ICESCR and ECHR, as well as customary international law, and ensure that the rights of migrants and asylum seekers are not compromised in favour of technological advancement.


The Way Forward


The unquenchable urge to track, target and analyse the data of migrants and asylum seekers has made being a migrant more dangerous than ever before. The advancement of technology and the use of AI are inevitable and, indeed, necessary. The problem is that no specific framework provides proper safeguards and protection for migrants and asylum seekers. Until one exists, a moratorium on digital border technologies that utilize surveillance systems has been recommended.

The OHCHR also suggests that States scrutinize each digital border technology already deployed against the three-part test of legality, necessity and proportionality. Further, States should conduct a human rights impact assessment before deploying any proposed digital border technology.


Ultimately, mere criticism of existing digital border technologies is inadequate; States ought to go a step further and foster the development of data management technologies that proactively uphold the rights of migrants and asylum seekers and make the process of leaving their home state safer.


Author:


Jayalakshmi Sankar is a Fourth Year Law Student at National Law Institute University, Bhopal.



