
By Prof. Dr Beth Singler, University of Zurich

Who are you? Who am I? Are we the same people we presented ourselves as just yesterday? Or will we change again by the time we meet tomorrow?

As for me, I am an academic, a mother, a wife, a daughter, a sister, a serious writer of blog posts on identity in the age of AI, and a silly poster of memes on Twitter. “Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)”, as Walt Whitman wrote in section 51 of his 1855 poem Song of Myself. But while poets like Whitman can make such observations about humanity sing in verse, sociologists and anthropologists like me try to come up with new social theories and terminology to explain how and why humans do things.

In the 1950s, sociolinguists sought to describe how and why multilingual people might choose to speak one of their languages in a specific social situation and another in a different context – and they called this ‘code-switching’. But in the decades since, code-switching has come to mean all the changes in language, dress, and behaviour we all engage in when we present ourselves differently in different contexts, as this article from the Harvard Business Review explains (https://hbr.org/2019/11/the-costs-of-codeswitching), pointing out the costs of code-switching for minority groups and individuals.

And of course, with any ‘IRL’ (in real life) social behaviour, we can also find similar examples in the digital realm – as a digital anthropologist, I argue that people are people to other people whether they are on or offline. So, while on LinkedIn you might trumpet your triumphs, you share your defeats and trials only with your closest friends and family on Facebook. On your phone you might have a collection of 90s boy-band tracks, whereas on your public Spotify you have more impressive pieces from classical composers for when online ‘guests’ come around. You might have more than one Twitter account, keeping one as your ‘professional face’ while the other keeps up with your favourite fandom and its disputes, even joining in on the vicious arguments about whether Rey was an overpowered ‘Mary Sue’ in the Star Wars sequels, or not…

But this kind of code-switching also has its own Dark Side; social media that allows for anonymity also allows us to code-switch into personas that can be abusive to others. The owners of these platforms have tried quick-fix solutions to the tension between the fluidity of human identity and the need to identify users when harm occurs. Some have gone for a responsive approach, employing moderators and banning users when they code-switch into someone a little more… dangerous. Others have sought to avoid such risks by cementing users’ IDs early on.

Facebook’s insistence on real names is an example of the latter. Both approaches can restrict the fluidity of people’s personas. However, the real names policy has faced considerable pushback from ethnic groups whose real names were still banned due to their language or descriptive nature, or because they had adopted a cultural name that didn’t match their birth name. Consider Shane Creepingbear, an Oklahoman member of the Kiowa tribe, who was told his name was ‘fake’, or Gabhan Mac A Ghobhainn, whose adopted Gaelic name was likewise refused. But seeking anonymity online can also be a protective or life-saving form of code-switching – many Queer advocacy groups made this argument in the face of the real name policy.

With the rollout of more and more AI and ambient intelligence applications in virtual and digital spaces, and the increasing use of AI for proving digital identity, some of these unresolved problems will reappear. Despite all the hype about the Metaverse and its ground-breaking new technologies, some things will remain the same. Primarily, the owners of whatever platform dominates the market will still want to know their customers. Even with varying levels of ID and allowances for more fluid identities and self-representations, corporations will still want to relate the behaviours observed by their ambient intelligence systems (including in-Metaverse AI assistants and even the environment itself) back to a specific person IRL. Knowing that IRL person will, of course, also require knowing all their self-representations – but tying all those identities back to the single host of those multitudes will be financially beneficial, both for selling commercial products and for selling any data tracked back to that particular individual.

Knowing the customer also involves gate-keeping their admittance to digital spaces through log-in systems. And another long-standing problem online has been age-related gate-keeping. Could AI be implemented that recognises behaviours and predicts biometrics on that basis? We already have systems that recognise typing patterns (e.g., how we write out our passwords) to identify individuals. Arguably, tying behaviour to identity could be ethically concerning, especially with younger age groups.
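To make the typing-pattern idea concrete, here is a minimal, hypothetical sketch of one way keystroke-dynamics matching can work: enrol a user’s typical timings for a fixed phrase, then score a new attempt against that profile. All the data, names, and thresholds below are invented for illustration; real systems use far richer features (key dwell times, pressure, device signals) and proper statistical models.

```python
# A minimal sketch of keystroke-dynamics matching (illustrative only).
# We enrol a user's typical "flight times" (intervals between key
# presses) for a fixed phrase, then score a new attempt by how many
# intervals fall within a tolerance of the enrolled profile.

from statistics import mean, stdev

def flight_times(timestamps_ms):
    """Intervals (ms) between successive key presses."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def enrol(samples):
    """Build a profile: per-interval mean and std across several typings."""
    intervals = [flight_times(s) for s in samples]
    return [(mean(col), stdev(col)) for col in zip(*intervals)]

def match_score(profile, attempt, tolerance=2.0):
    """Fraction of intervals within `tolerance` standard deviations."""
    ivals = flight_times(attempt)
    hits = sum(
        abs(v - mu) <= tolerance * (sigma or 1.0)  # guard against sigma == 0
        for v, (mu, sigma) in zip(ivals, profile)
    )
    return hits / len(profile)

# Hypothetical timing data (ms since first keystroke) for the same password.
enrolment = [
    [0, 110, 230, 320, 455],
    [0, 105, 240, 310, 470],
    [0, 120, 225, 330, 460],
]
profile = enrol(enrolment)

genuine = [0, 112, 233, 318, 458]
imposter = [0, 60, 300, 340, 700]
print(f"genuine:  {match_score(profile, genuine):.2f}")   # close to 1.0
print(f"imposter: {match_score(profile, imposter):.2f}")  # much lower
```

The point of the sketch is the ethical one made above: even something as mundane as the rhythm of your typing can be turned into an identifier, tying behaviour back to a person without them ever presenting credentials.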

In 2019, an academic paper (discussed in this Forbes piece: https://www.forbes.com/sites/noelsharkey/2019/03/02/grassing-on-teenagers-ai-to-snoop-on-pot-smokers/?sh=446d4d2f3adf) tried to lay out the arguments on ethical responsibilities and hierarchies in the following situation: a teenager living in an IoT-enabled house smokes marijuana, and the sensors of the house identify this illegal activity. Who should be informed? The child’s parents? The police, because the act was illegal? The company that already collects and owns the data from the house according to the T&Cs of the products? Such a ‘Jiminy Cricket’-style AI system that acts as an extended (if corporate-controlled) conscience could also be built into log-in systems and the ambient intelligence of virtual platforms, and again we’d have to ask: who should be informed if a child tries to access dangerous, illegal, or pornographic materials in the Metaverse? Parental controls already exist, but few of them take in data the way AI-enabled systems do.

I’ve recently been doing some AI ethics consultancy work with Swivel Secure, a tech company already exploring digital ID, authentication, and security solutions as AI becomes increasingly integrated into our decision-making systems. As often happens in my AI ethics work – both in consultancy and public engagement – the question of trust quickly arises. Do we trust the gatekeepers we are employing to provide our security? After all, who watches the watchers? Many technological solutions focus on rigidity – cementing ID and creating absolute digital twins. Likewise, the current focus on blockchain solutions for an emerging ‘Web3’ is part of the same conversation. But both approaches will likely run up against the natural fluidity of digital space and digital identities – the Internet and virtual spaces tend to route around fixed points, like water around stones thrown in a river. With Swivel, I’ve been interested to see how they are exploring this challenge while providing services that their customers can rely on. The responsiveness of AI-enabled learning systems might be part of the solution, with the caveat that the public is more and more aware of the risks of sharing their data – and quite rightly so.

This tension between security and identity is fertile ground for significant questions about ethics and freedom. And, as with most AI ethics questions, cultural context and individual responses will shape any pushback. In spaces like the Metaverse, we might even see nation-states making predeterminations about identity expression, as we have already seen with the subjects banned on social media in certain places. We need to be thinking now about the restrictions that digital code will place on our code-switching.