The previous chapter makes it clear that the Internet and communications technology have become important frontiers in the struggle against gender-based violence. Discussing, in particular, the everyday lived experience of violence against women online, it illustrated the extent to which online gender-based violence has been normalized as well as the direct linkages between online violence, offline harms, and intertwined structures of oppression based on patriarchy, Global North–South divides, and other social hierarchies. While Chapter 3 offered insight primarily into the experiences of domestic and interpersonal violence facilitated through online platforms, it left unaddressed the issue of how gender-based violence can be deployed by extremists using technology to further their own ends. The present chapter explores how online extremist violence both deploys gender discourse and uses gendered labour to further its ends. This analysis looks primarily at the role of social media, but also addresses the role of messaging apps, decentralized platforms, and other related technologies.

A necessary first step in this exercise is to define both ‘violent extremism’ and ‘extremist violence’, and explore what these terms mean in a gendered context. The definition of extremism is contested and is not always linked explicitly to manifestations of physical violence. While extremism is often associated with radical ideological views, commentators have noted that holding or advocating such views does not necessarily lead to the practice of violence or to a decision to join an extremist group (Griset and Mahan 2002; Horgan 2009; Aly and Striegher 2012; Striegher 2015). Thus, violent extremism is often defined in contrast to terrorism: while ‘extremism’ denotes adherence to a radical ideology, and ‘violent extremism’ denotes adherence to an ideology that views violence as a suitable tool in the pursuit of ideological goals, whether violent extremism also constitutes terrorism depends on whether an individual actually carries out violence and/or whether they are part of an organized group engaged in that practice (Striegher 2015; UNODC 2018). Extremism, then, is an appropriate conceptual framework for this chapter, which explores how gender-based violence is deployed by a combination of organized armed groups, diffuse extremist movements, and entities that fall somewhere in between.

Applying the concepts of extremism and extremist violence in a gendered context leads to an examination of how extremist groups practise gender-based violence and how they promote an ideology that legitimizes violence as a tool to pursue gendered goals. As seen in this chapter, these goals can include upholding a rigid gender binary, upholding traditional or essentialist gender roles, opposing laws and policies meant to promote the advancement of women, and seeking to disrupt the advancement of women – for example through disinformation or dogpiling1 tactics. In her book on violence against women in politics, Krook (2020) advances a five-part typology of political violence against women that includes physical and sexual violence, but also psychological, emotional, and semiotic violence. While the majority of these terms will be familiar to readers knowledgeable in this area, semiotic violence is perhaps the least recognized and is defined as violence which deploys words, gestures, and images with the purpose of silencing women or rendering them incompetent (Krook 2020). As a delegitimizing tactic, semiotic violence is not necessarily a new form of gender-based violence, but it may be uniquely enabled by the online environment. Memetic campaigns and information warfare tactics (such as the sowing of disinformation) are useful vehicles for semiotic violence. As the examples in this chapter will show, the complex landscape of social media networks, messaging and communications apps, forums, and other means of electronic communication also allows semiotic violence to be coordinated by extremist communities in ways not visible to the public. While Krook (2020) applies these tactics to the study of violence against women in politics – defined as those in political office, candidates, journalists, and activists – her typology can, by definition, apply more broadly to gender-based violence as deployed against women and those identifying outside the gender binary. There are additionally connections to race/ethnicity, sexuality, disability, and other axes of difference that call for an intersectional understanding of this violence and its effects. In the case of semiotic violence especially, the concept calls upon us to see acts of violence against women in the collective.

The remainder of this chapter explores the application of gender-based violence by extremists in the online space. Beginning with a brief discussion of the historical and legal context, I proceed to an examination of relevant examples in the context of the typology outlined by Krook (2020) and with reference to recent and currently active extremist groups including the Islamic State, male supremacist and far-right communities, and diffuse but politically relevant communities organized around conspiratorial beliefs. These examples build on existing work in this area, but I also explore these themes through a content analysis of open-source content from extremist websites. These examples serve to illustrate several points. First, they demonstrate the frequency and ease with which extremists at varying levels of organizational coherence can use technology to further their aims. Second, they demonstrate that both men and women are involved in the practice of extremist gender-based violence in ways that map onto our understanding of violence in other contexts. Third, they demonstrate a trend towards entrepreneurship and the decentralization of online extremism away from the larger social media and communications platforms that have seen the most pressure to respond to this issue. Fourth and relatedly, I argue that these examples point to a need to revisit existing strategies for the regulation of extremism in the online space.

Gender-based extremism: historical context

While much attention has been devoted to extremism in new media and the emergence of groups and ideologies native to the online space, it is important to note the historical continuities that underlie these trends. Little about misogyny is truly new, particularly in the context of political and/or extremist violence. Both gender-based violence in conflict and the use of gendered labour to promote violence are long-established phenomena.

The proliferation of communities and sub-groupings under the banner of so-called ‘male supremacist’ ideologies serves as an illustration of the varied expressions of misogyny and of the migration of these ideas from offline to online spaces. While patriarchy represents an institutionalized form of male supremacy that is centuries old, forms of anti-feminist extremism have existed for as long as there has been a feminist movement. Early feminists in the United States and the United Kingdom were the targets of verbal harassment, harassment in writing (for instance, via mailed letters), physical violence, and sexual assaults. These acts were carried out by security forces and private individuals alike. In the United States, a November 1917 incident (known as the ‘Night of Terror’) in which suffragettes from the National Woman’s Party were arrested, tortured, and held in inhumane conditions became a rallying point that the party used to sway public opinion (McArdle 2017; Carter Olson 2021). In Britain, feminists of the Women’s Social and Political Union experienced their own ‘Black Friday’ in 1910, when hundreds of women marching on Parliament were physically and sexually assaulted (Raw 2018). Violence against suffragettes – both in contemporary accounts and since – has been justified by claims that it was a response to militancy among suffragette groups, but such arguments do not always hold up to scrutiny. In the case of the Night of Terror, women arrested at the White House appear to have been guilty only of civil disobedience. Some had been arrested repeatedly for ‘obstructing the sidewalk’ in the months prior to November 1917, an offence that had apparently taken on new severity in light of the US entry into the First World War and resultant concerns about domestic dissent (McArdle 2017). In the UK, militant suffragism has been the source of some debate among historians, with accounts diverging on how widespread and how central violence was to women’s movements (Kent 1990; Cowman 1996; Mayhall 2003; Bearman 2005; deVries 2013). Generally, feminist historians have argued that militant violence by suffragettes largely consisted of property crimes – like vandalism and arson – and that such tactics were controversial even within the movements of the time (Fowler 1991; Cowman 1996; Mayhall 2003; deVries 2013).

Leaving aside these debates, anti-feminist or anti-women movements continued to coalesce. The emergence of a transnational men’s rights movement in the 1970s drew on ideas percolating since the late 19th century, including the idea that men were the truly oppressed sex and that legal and political reforms granted rights to women at the expense of men (Rafail and Freitas 2019; Horta Ribeiro et al 2021). From its offline roots in the late 20th century, the men’s rights movement has expanded into the online space. Grievances expressed within men’s rights communities online today include long-standing topics of discussion – inequitable treatment of men in family law and relationships, the victimization of men by the feminist movement, and a perception that men are discriminated against or unrecognized by domestic violence laws – as well as newer concerns, like the perceived mistreatment of men in sexual assault policies on college campuses and persistent efforts to discredit statistics and studies showing gender inequality (Rafail and Freitas 2019).

The movement of men’s rights discourse into the online space in the early 2000s allowed these communities to extend their reach, but also to subdivide. Today, the term ‘manosphere’ has come to encompass the range of online entities catering to men’s interests, including podcasts, video channels, blogs, websites, and communities. Efforts to map ideological divisions within the manosphere have highlighted several salient groupings. These include:

  • Men’s Rights Activists (MRA): An extension of the offline movement, with discussions centred in the areas referenced earlier.

  • Men Going Their Own Way (MGTOW): A community of men seeking to avoid any interaction with women and to minimize their interaction with society at large, which they believe has become dominated by women.

  • Pick-Up Artists (PUA): Men concerned with establishing sexual dominance over women, often advocating the use of deceptive or coercive tactics.

  • Red Pill: Communities that combine pro-men and anti-feminist ideas with a broader range of conspiratorial and anti-government views.

  • Incels: ‘Involuntary celibates’ who believe in male entitlement to sex and express resentment towards both women and a hierarchy of men who they perceive as conspiring to deprive them of sex. (Liang Lin 2017; Fitzgerald 2020; Horta Ribeiro et al 2021)

These communities are transnational, diverse in terms of race and ethnicity,2 easy to find, and – in many cases – closely networked to violent extremist ideologies. Incel communities have arguably received the most popular attention, with incel ideology explicitly linked to the celebration of violence against women and identified as the cause of nearly 50 deaths in incidents across the United States and Canada (Gilmore 2019; Hoffman and Ware 2020; Hoffman et al 2020; O’Malley et al 2020). Incel attacks in the 2010s led Reddit – one of the major platforms where incel communities were based – to ban these groups; however, incel forums, communities, and media remain easy to find. This ties into a larger problem: the ease with which individuals navigating the manosphere can be steered from non-violent content to content with extremist, violent undertones. Although friction exists between various manosphere communities (Liang Lin 2017), recent research has argued that boundaries between less-extreme and more-extreme communities are disappearing, with extremist voices increasingly dominating across the manosphere as a whole. In the current environment, purportedly ‘moderate’ or non-violent groups act as feeders that drive users towards more extreme content – a process facilitated by recommendation algorithms on sites like YouTube (Fitzgerald 2020; Papadamou et al 2020; Horta Ribeiro et al 2021). Analysts further point out that there is cross-pollination between violent male supremacist ideologies like incel thought and other communities built around hate and violence, such as neo-Nazis, Identitarians, White supremacists, and conspiracy-based extremist movements (Clarke and Turner 2020; DiBranco 2020; Henshaw 2021a). This paints an expansive picture of the range of extremist groups capable of perpetrating gender-based violence online.

A look at the historical context of gender-based extremist violence would not be complete without also addressing the role women play in advancing extremist messaging. Recruiting and propagandizing have long been important roles for women in extremist movements and armed groups, with women at times serving as ideological leaders in violent movements (Cragin and Daly 2009; Henshaw et al 2019). Cragin and Daly (2009) discuss examples of women in the ‘political vanguard’ of insurgencies, including Augusta LaTorre and Elena Iparraguirre of Peru’s Shining Path, Susanna Ronconi of Italy’s Prima Linea, and Kesire Yildirim of Turkey’s Kurdistan Workers’ Party (PKK). Latter-day examples could also include women of the Revolutionary Armed Forces of Colombia (FARC) like Victoria Sandino Palmera (a member of the FARC’s peace negotiating team and, subsequently, a Colombian senator), who defined the notion of ‘insurgent feminism’ as an intersectional, anti-patriarchal, and collective commitment to full gender equality (Sandino Palmera 2016; Phelan 2017). As counterintuitive as it seems, right-wing movements advocating patriarchal structures or male supremacy have similarly been reliant on the work of female ideologues and propagandists. Various authors have explored the roles played by women in American White supremacist movements, including the Ku Klux Klan (Blee 2008; Eager 2008; Darby 2020). Today, much of this messaging has moved online as technology (including social media, podcasts, and video hosting sites) makes it easier for content authors to broadcast to a wider audience. As a result, female propagandists3 have shown up in corners as diverse as neo-Nazi and ultranationalist militias in Eastern Europe, the Islamic State, and the Pakistani Taliban (Lehane et al 2018; Pearson 2018; Trisko Darden et al 2019; Mehran et al 2020; Ingram 2021).

While extremist violence against women fits within the purview of P/CVE policies and is a threat specifically named in at least one resolution in the WPS Agenda,4 institutions have struggled to meaningfully respond. Various authors have pointed out the pitfalls and limitations of P/CVE programming that views women in essentialist ways – with women (especially mothers) portrayed as forces for peace and men (especially young men) as the parties at risk of radicalization (Huckerby and Ní Aoláin 2018; Winterbotham 2018; International Crisis Group 2020; Rothermel 2020). The collective failure of the international community to meaningfully deal with female foreign-born supporters of the Islamic State, many of whom have been left effectively stateless and without due process, further highlights the way in which – in spite of over a quarter-century of gender mainstreaming – state and international institutions are perplexed in the face of a phenomenon that upends essentialist understandings of gender.

Case studies in gender-based violent extremism

This section draws on recent examples and the typology discussed earlier – developed by Krook (2020) – to illustrate the various ways in which ICT and social media facilitate gender-based violence. A special emphasis is placed on violence perpetrated by the Islamic State, by far-right groups including male supremacists and incels, and by conspiracy-based movements. These examples illustrate that extremist gender-based violence is both widespread and varied in form, including forms that contribute to serious real-world harms. In this way, the types of violence identified by Krook (2020) are not mutually exclusive; to some extent, they are mutually reinforcing. Beyond simple gender binaries, these examples show that violence is intertwined with multiple social hierarchies – disproportionately impacting members of ethnic and religious minority groups, women with disabilities, and members of the LGBTQI+ community. Furthermore, women as well as men take an active role in facilitating these acts, a fact that should motivate a re-thinking of gender approaches to P/CVE.

The Islamic State

The Islamic State has become closely associated with discussions about extremist violence against women. Technology played an essential part both in its transnational recruiting efforts and in its campaign of violence against marginalized groups, including ethnic and religious minorities and the LGBTQI+ community. Scholars have described a complex system of gender relations within the Islamic State. Overall, thousands of women from dozens of countries became affiliated with Islamic State between 2013 and 2018, accounting for approximately 13 per cent of all foreign recruits (Cook and Vale 2018). In addition to these, many women native to Islamic State-held territories either willingly or unwillingly became collaborators with the organization (Moaveni 2015, 2019). This has led some to describe the female population of the Islamic State caliphate as one of in-groups and out-groups (Margolin and Winter 2021). Sunni Muslim women who were ‘willing and grateful participants’ in the caliphate were generally considered part of the in-group, though their movements and activities remained closely monitored by the organization (Margolin and Winter 2021). Freedom of movement, for example, was restricted; however, women could gain certain access and privileges through participation in a narrow range of occupations deemed acceptable in the caliphate, including as doctors and nurses and as part of the religious police (Moaveni 2015; Margolin and Winter 2021). Even so, escapees from Islamic State territory have alleged that there was differential treatment of women who joined Islamic State from abroad – especially from Western countries – and Syrian and Iraqi women native to the area, with the former being accorded privileges like expanded Internet access or better weapons in the religious police (Moaveni 2015, 2019). Through their participation in the religious police and via their presence online, women affiliated with Islamic State played important roles in the transmission and reinforcement of gender norms in the Islamic State (Moaveni 2015, 2019; Pearson 2018).

For those in the out-group, including women from Yazidi, Christian, or Druze backgrounds, the LGBTQI+ population, and Muslims who did not subscribe to the extremist interpretation of Islam adopted by the Islamic State, life was considerably more brutal. Many of these communities – including Christians, Jews, and Shia Muslims – could be subject to taxes, seizure of property, or forced conversion (Al-Dayel and Mumford 2020; Margolin and Winter 2021). The Yazidi minority was foremost among the groups singled out for the particular subjugation of enslavement. Al-Dayel and Mumford (2020) note that, following the conquest of Yazidi communities, gender was the primary factor in determining the fate of individual community members, with age being a secondary consideration. While adult men were usually executed, women, girls, and boys under the age of eight were more often destined for slavery.5 Enslaved persons were sold or ‘gifted’ to Islamic State fighters, with women and girls often sexually enslaved – a practice tolerated and regulated by the Islamic State (Al-Dayel and Mumford 2020; Al-Dayel et al 2020; Margolin and Winter 2021).

ICT including social media directly facilitated slavery and sexual violence against Yazidi women and girls, as well as against other ‘enemy’ populations of the Islamic State. Estimates suggest that up to 9,000 slaves were trafficked within the Islamic State (Al-Dayel et al 2020). Although physical slave auctions were usually held at a few specific sites, slaves were also trafficked via online auctions and through online groups. Platforms including Facebook, WhatsApp, Telegram, and Signal were among those used to sell slaves (Hinnant et al 2016; FIDH/Kinyat 2018; Al-Dayel et al 2020). At least one report suggests that the use of these platforms enabled the expansion of slave markets beyond ISIS territory, pointing to the alleged sale of slaves to buyers elsewhere in the Middle East (Al-Dayel et al 2020). Technology may likewise be extending the life of the slave trade beyond the territorial defeat of the Islamic State. As of 2020, an estimated 3,000 Yazidi women and children were still missing (Barkawi 2020; Hutchinson 2020). Yazidi families have alleged that, in the rush by foreign fighters to abandon Islamic State strongholds, some women held as slaves were sold to criminal gangs and subsequently trafficked out of Syria (Cornish 2019). Based on available data, research estimates that Islamic State-related interests could stand to gain millions of dollars in additional funding through the sale or ransom of those still held captive (Hutchinson 2020).

Technology similarly facilitated and/or amplified violence against sexual minorities and those whose gender identity falls outside the gender binary. Scholars have argued that the Islamic State’s publicly circulated ideologies on gender and sexuality served a signalling or outbidding function, both distinguishing the Islamic State from other armed groups in the region and seeking to establish it as the most serious or committed among jihadist players (Tschantret 2018; Szekely 2020). The Islamic State was not alone in targeting the LGBTQI+ community as a form of signalling. Tschantret (2018) identifies 13 non-state armed groups that engaged in targeted homophobic killings between 1985 and 2015. Of these, the majority were in the Middle East and North Africa region and four – including the Islamic State – operated in either Iraq or Syria following the 2003 invasion of Iraq by the United States.6 As of 2017, human rights advocates had documented more than 60 instances of targeted killings, torture, sexual violence, and other violent acts against members of the LGBTQI+ community in just one area of Islamic State-occupied Iraq (Feder 2017). The Islamic State is also alleged to have carried out targeted homophobic or transphobic violence in parts of North Africa and Pakistan where it is active (Kilbride 2015; Tschantret 2018).7

While even an accusation of being gay, lesbian, or transgender may have been enough to provoke targeted violence from the Islamic State, technology was used to support such allegations. Surveillance and searches of mobile phone data were used by the Islamic State to identify members of the LGBTQI+ population (Variyar 2014; HRGJ/MADRE/OWFI 2017). The vulnerability of such data is a particularly important issue, as LGBTQI+ communities in the region often communicate through chat rooms, social media, or messaging apps, given the social stigma associated with homosexuality (Hawley 2015). Executions based on sexuality or gender identity were in turn broadcast on social media as a reinforcement of Islamic State views and punishments (HRGJ/MADRE/OWFI 2017; Tschantret 2018). While most documented instances involve men and boys being executed or tortured, executions of women and children accused of similar offences have also been recorded (HRGJ/MADRE/OWFI 2017).

Taken together, these examples show how a well-resourced extremist group can engage in gender-based (and sexuality-based) violence in ways that cut across the typology of violence against women outlined by Krook (2020). They also show the roles technology may play in facilitating or amplifying the effects of such violence. In the case of the Islamic State, evidence shows that technology expanded the reach of Islamic State violence, both geographically – as in the case of the expansion of the slave trade – and psychologically – as in the use of broadcast executions of those accused of homosexuality. Examples like those identified here suggest that technology providers – including social media companies – were often reactive rather than proactive in preventing abuses of technology by extremists.

Male supremacists and the far right

Elsewhere, technology has enabled groups that promote ideologies advocating violence against women. Most notable is the emerging terror threat posed by incels and other male supremacist groups. Incel communities are a subset of the larger ecosystem of online misogynist groups. While the term ‘incel’ originated as a way to refer to Internet users who were single and seeking community, it has evolved to refer to male extremists – including those who advocate for violence against women. Hoffman and Ware (2020) estimate that approximately 50 people have been killed in incel-motivated attacks in the United States and Canada, with additional plots disrupted elsewhere. This and other assessments of incel-motivated violence cite highly public incidents like mass killings in Isla Vista, California in 2014, Toronto in 2018, and Tallahassee, Florida in 2018.

Experts argue that incels should be regarded as a salient terrorism threat in the sense that they converge around a defined ideology, coordinate through online communication, and carry out real-world violence with the assistance of opaque networks (Hoffman and Ware 2020; Hoffman et al 2020). The defining elements of incel ideology include: (1) the belief that women are naturally evil; (2) the exaltation of traditional male gender roles; (3) a belief in the oppression of men in modern society; (4) a belief in the existence of a ‘sexual market’ in which hierarchies of men exist and where more sexually successful men and women act in concert to deprive other men of sex; and (5) the legitimacy of violence as a response to these dynamics (O’Malley et al 2020).

At least some of the ideological points that define incels align with views frequently found in other extremist communities of the far right. Discourse across far-right extremist groups regarding the ‘natural’ supremacy of men and their right to sexual access is pervasive. Furthermore, although academic interest in incels seems to be focused on instances of targeted killings, male supremacist violence may be expressed in ways that are not immediately visible to scholars of political violence, such as intimate partner violence or sexual assault. PUA communities, for example, with their advocacy for the use of deceptive or coercive tactics against intimate partners, challenge the boundaries of what we might call extremist violence.

Male supremacist communities can also have entangled relationships with other, more formal violent groups, including militias and hate groups. Mattheis and Winter (2019), for example, discuss principles of male supremacy in Identitarian discourse in Europe. They find that Identitarianism, defined as ‘an ethnopolitical ideology that is committed to ending multiculturalism’, incorporates the beliefs that feminists are actively working to oppress men, that society should return to traditional gender roles (that is, with women primarily engaged in child-bearing and care-giving), and that modern society has been made possible only through patriarchy – a system in which men perform the labour of social advancement while women passively benefit (Mattheis and Winter 2019: 6, 13). These views overlap with principles of incel ideology. Male supremacist views have additionally become entangled with White supremacist and neo-Nazi ideology. Commentators have noted that Anders Breivik’s attack in 2011 was motivated by anti-feminist and homophobic as well as xenophobic views (Moyn 2018; DiBranco 2020). Andrew Anglin, founder of the anti-Semitic and neo-Nazi website Daily Stormer, has also sought to cultivate readership through the advancement of male supremacist views (SPLC nd; DiBranco 2020). Women have similarly taken on important roles in advancing male supremacist views. Somewhat paradoxically – given that the emphasis on traditional gender roles should constrain their involvement in the public sphere – female ideologues on the far right have played an important role in sanitizing extremist messaging while extending its reach to new communities, such as the ‘tradwife’ movement, a community of women celebrating traditional homemaker roles (Darby 2020).8

As with the Islamic State, violence arising from male supremacist communities cuts across the typology introduced earlier in this chapter. While the deaths of dozens of people across the United States and Canada in incel-motivated attacks demonstrate the potential for deadly physical violence arising from male supremacist communities, less visible are the ways that male supremacist communities perpetrate psychological, economic, and semiotic violence against women. In a broad sense, organized efforts by groups associated with male supremacy have sought to discredit or silence women’s voices in strategic ways. These include efforts by far-right extremists to co-opt the #MeToo hashtag and to derail feminist conversations by spamming hashtags like #TakeBacktheTech (a hashtag associated with a campaign against online gender-based violence) with offensive messages and images (Amnesty International 2018; Ebner and Davey 2019). Extremists have become adept at organizing such campaigns in ways that evade content policies enforced by social media companies. For example, groups including neo-Nazis, Identitarians, and White supremacists have used Discord channels and sites like 4chan and 8kun to generate memes and videos for use on more mainstream platforms (Davey and Ebner 2017; Singer and Brooking 2018). Strategies that include the use of coded language, the hijacking of hashtags, and fake accounts have all been used to broadcast extremist messages without necessarily violating terms of service (Davey and Ebner 2017; Ebner and Davey 2019). The use of coded language and symbols in online harassment has been cited as a particular problem for moderation, as work on gender-based harassment notes that human moderators are often unfamiliar with niche hate symbols – such as those used to target disabled women (Jankowicz et al 2021).
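The limits of term-matching moderation in the face of coded language are easy to see in the abstract. The sketch below is a minimal, hypothetical blocklist filter in Python – the terms are invented placeholders rather than real slurs, and any production moderation pipeline is far more sophisticated – but it illustrates why this class of approach catches explicit abuse while passing over community code words and novel symbols:

```python
import re

# Hypothetical blocklist of explicit terms the platform already knows about.
# Real systems use far larger lists plus machine-learning classifiers.
BLOCKLIST = {"slur_a", "slur_b"}

def naive_filter(post: str) -> bool:
    """Flag a post only if it contains a known blocklisted term."""
    tokens = re.findall(r"\w+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

print(naive_filter("typical slur_a rant"))        # True: explicit term caught
print(naive_filter("typical c0ded-sym6ol rant"))  # False: coded variant passes
```

Any coded substitution unknown to the list sails through, which is one reason campaigns coordinated off-platform – where code words are agreed in advance – are difficult for both automated tools and unfamiliar human moderators to detect.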

Beyond these general examples, the targeted harassment of women by extremist groups represents a trend resulting in psychological and emotional violence. A wide array of women, including political office-holders, activists, and journalists, has been on the receiving end of gendered abuse. Memetic campaigns and conspiracy theories directed at Hillary Clinton in the lead-up to the 2016 US elections are one example, with authors tracing the origin of some memetic campaigns that employed gendered discourse to extremist communities online (Mitew and Wall 2017). Other well-known targets include German Chancellor Angela Merkel, who was the focus of campaigns organized by neo-Nazis, Identitarians, and other far-right actors in the lead-up to the 2017 German elections. At least some of this critique focused on gender issues, specifically the perceived links between German refugee policy during her term and sexual assaults against German women (Davey and Ebner 2017). US Vice President Kamala Harris was the target of transphobic conspiracies circulated online by QAnon in the lead-up to the 2020 US presidential election (specifically, the rumour that she was a transgender woman) (Jankowicz et al 2021). One study from the Inter-Parliamentary Union (IPU) found that 41.8 per cent of women serving in parliaments worldwide had been subject to conspiracies, rumours, or harmful images circulated on social media. A slightly higher percentage of respondents, 44.4 per cent, reported receiving threats of rape, assassination, assault, or abduction – threats most often transmitted via email or social media (IPU 2016).

Targeted harassment goes beyond women at the highest levels of politics. Online harassment campaigns by extremists against women activists, journalists, and professionals – individuals who generally lack the protections that may be associated with holding political office – are particularly insidious and have spilled into real-world acts of violence and intimidation. Campaigns organized by the Daily Stormer, for example, have targeted several visible and/or politically active women. Journalist Julia Ioffe was targeted by the Stormer’s so-called troll army in 2016 after writing a story critical of soon-to-be First Lady Melania Trump, leading to a campaign of anti-Semitic abuse (O’Brien 2017). Real estate agent Tanya Gersh – who was accused of engaging in a confrontation with the mother of White nationalist Richard Spencer – was the target of a campaign in which she was doxxed, enabling far-right sympathizers to harass her and her family by phone and email hundreds of times per day. This incident escalated to threats of an armed protest in her home town before law enforcement and local government took action (O’Brien 2017). Taylor Dumpson, the first Black woman to be elected student body president of American University, was another target of the same organization. Harassment of Dumpson progressed from online threats to racist displays on the campus of American University (Schmidt 2019). In each of these cases, the women affected alleged psychological and economic harm – and the legal system agreed. In 2019, a US court awarded Dumpson damages on the grounds that harassment fomented by the Stormer caused a deterioration of her physical and mental health and inhibited her access to an education (Schmidt 2019). Gersh was awarded US$14 million in damages against Stormer publisher Andrew Anglin in 2019 on the grounds that he had acted with malicious intent in encouraging harassment against her (Farzan 2019). At the same time, these cases show the limitations of the legal system in responding to online violence against women. More than a year after the judgment in Gersh’s case, her attorneys stated in a court filing that Anglin had not paid any part of the judgment and had dropped out of communication, his whereabouts unknown (Kunzelman 2021).

Discussion

The examples in this section demonstrate some common threads in extremist uses of technology to facilitate gender-based violence. First, there is little question that the Internet and communications technology have become important tools in facilitating both broad-based and targeted violence against women. While the nature of this violence varies between communities, the impacts are often intersectional – disproportionately affecting women of minority ethnic or religious communities as well as members of other marginalized groups. The examples of the Islamic State and far-right or male supremacist communities further show the shortcomings in responses by actors like technology companies and law enforcement. Technology companies have often been reactive, failing to anticipate or immediately perceive how their tools might be used in supporting violence. There is also evidence of ongoing gaps in their awareness of varying forms of harassment, in particular the use of niche symbols and language to target specific groups. On the side of law enforcement, the transnational nature of some extremist communities has complicated meaningful prosecution of cases involving extremist violence against women. In some cases, women targeted by organized harassment campaigns allege that state governments themselves have been party to the harassment, an issue further explored in Chapter 5 (Jankowicz et al 2021). Taken together, these factors suggest a need to take extremist gender-based violence seriously as a transnational issue that requires a shared recognition of the problem and commitments from a variety of stakeholders to address the issues at hand.

There are, of course, limitations to comparing a well-organized and highly resourced extremist group like the Islamic State to the more diffuse range of communities associated with male supremacism. While the online footprint of the Islamic State was substantial, at one point encompassing tens of thousands of social media accounts on Twitter alone (Berger and Morgan 2015), there is nonetheless evidence that it sought to regulate online communications in the caliphate. In the case of the slave trade, rules regulating the treatment and transfer of slaves were promulgated by the Islamic State (Margolin and Winter 2021). Norms about gender relations were understood to carry over into online spaces, while the control over women’s access to the Internet – and, in particular, the differential access accorded to women who joined the Islamic State from abroad – suggested some degree of oversight and consideration as to organizational messaging (Moaveni 2015; Pearson 2018).

The semi-centralized, controlled strategies practised by the Islamic State (at least, during the height of its success) stand in contrast to the more nebulous and decentralized presence of far-right groups. As noted in this chapter, cross-pollination among extremists of varying stripes – including neo-Nazis, Identitarians, White supremacists, incels, conspiracy-minded extremists, and others – is a defining feature of the landscape. Also defining is the broad range of fora through which they operate, including: mainstream social media; small and emerging social media platforms; messaging apps; online message boards; file sharing services like Dropbox; and more. These dual tendencies of cross-pollination and decentralization will likely define the landscape of online extremism in the coming years. While larger social media platforms have sought to become more proactive in policing content, they are supplemented by a growing range of players who either cannot or will not monitor user content. While the organization Tech Against Terrorism (an initiative launched by the UN Counter Terrorism Executive Directorate) offers a mentoring programme for tech startups looking to develop policies to combat terrorist use of their services, participation in this programme requires a good-faith commitment to human rights standards, transparency, and the enforcement of content standards and content moderation policies (Tech Against Terrorism 2020). Some tech companies have instead defined themselves in opposition to these principles, promoting themselves as havens of unfettered free speech. These include social networks like Parler, Gab, Rumble, and MeWe, many of which have capitalized on aggressive content moderation by more established platforms, promising an alternative for right-wing and conspiracy-minded users who view moderation as censorship (Isaac and Browning 2020). Holding these platforms responsible for disinformation or even violent speech has proven difficult, as they are usually backed by web hosting services that further support the mission of free speech (Allyn 2021; Nicas 2021).

Well-intentioned services, too, may find themselves co-opted by extremists where they lack the full capacity to monitor and control content. This was at least partly at issue in the case of the livestreaming platform DLive, which found itself at the centre of debate after the events of 6 January 2021, when it was used by prominent extremists to livestream the storming of the US Capitol. The site, which allows users to livestream and to accept donations via cryptocurrency, was originally created as a rival to the livestream gaming platform Twitch. It relied largely on self-moderation by content authors, a hands-off approach that allowed it to gradually become co-opted by White nationalist and far-right influencers banned from other platforms (D’Anastasio 2021). Communications among DLive executives and employees show that the company was aware of its issues with extremism prior to the 6 January riots, but that it struggled with the potential practical and financial implications of banning extremists. Ultimately, it relied on the hope that its community of users would police itself (Browning and Lorenz 2021).

This diffusion of extremist voices to an array of smaller platforms has contributed to what analysts have called the development of a ‘big tent’ conspiracy mindset, especially in the wake of the attack on the US Capitol in January 2021 (Argentino et al 2021). Characteristic of this trend is the cross-pollination of ideologies, with some established extremist actors from violent groups actively seeking to reach out to newcomers to platforms like Telegram, looking to steer them towards more extreme channels and content. As a result, Argentino et al (2021) argue that we are witnessing the development of closer ties between militias, conspiracy-based groups like QAnon, and an array of hate groups. These ties often converge around shared narratives, among them misogyny and anti-feminist conspiracy theories, homophobia, and transphobia. The following section explores how these trends play out in one corner of the manosphere, showing both the decentralization and the cross-pollination of extremist discourse.

Into the manosphere: connections between misogyny, conspiracies, and extremism

As noted earlier in the chapter, the ‘manosphere’ is a collective term applied to a variety of sites catering to men’s interests and encompassing several (supposedly) distinct communities. While not all of these communities advocate violence, research suggests that extremist content remains easy to access in the manosphere – even via popular social media platforms like YouTube, Discord, and Reddit. It also suggests that divisions between self-identified manosphere communities like Men’s Rights groups, incels, and others are less firm than these groups would like to suggest (Papadamou et al 2020; Sharpe 2020; Horta Ribeiro et al 2021). One factor facilitating the ease of movement between more and less extreme groups has been the diffusion of violent misogyny to purpose-built sites and less-regulated outlets (including 4chan and 8chan/8kun). Researchers attribute this diffusion to actions by social media providers against incels – especially bans on incel communities, which led to the creation of many alternate sites as free speech havens (Horta Ribeiro et al 2021). This creates an environment in which allegedly less extreme misogynist communities are allowed to retain a presence on mainstream social media, in turn using this presence to share outlinks to more extreme communities that have been quarantined or deplatformed for violating content policies.

To date, incel communities have been the groups most closely associated with violence in the manosphere, leading to bans on some incel communities – especially on Reddit, where these groups were most active. These bans prompted the growth of purpose-built forums catering to incels. Other manosphere communities – Men’s Rights advocates, PUAs, and Red Pill communities – appear to be viewed as conspiracy-based but not as potentially violent; they have correspondingly been allowed to maintain a presence on Reddit, YouTube, and some other major platforms.9 The existence of defined subreddits or channels devoted to these groups provides the illusion of barriers, suggesting a distinct population of users and moderators within each group, separated by some ideological firewall. However, researchers have called into question the rigidity of such barriers, offering evidence that users are funnelled from less-extreme to more-extreme groups, with membership overlapping and with more extreme voices gaining purchase across communities (Fitzgerald 2020; Horta Ribeiro et al 2021). At the time of this writing, the most well-known incel and MGTOW communities were banned from Reddit, while larger PUA and MRA subreddits existed as open communities, accessible to any user and publicly viewable without logging in. Subreddits associated with Red Pill ideology existed but were quarantined. A ‘quarantine’ takes the form of a simple content warning displayed when a user enters the community; once a user acknowledges the warning and verifies that they are a registered Reddit user, the community is accessible without further restriction.
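To make the thinness of this barrier concrete, the access logic just described can be reduced to a few lines. The following Python sketch is illustrative only – the types and field names are invented for exposition and bear no relation to Reddit's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    quarantined: bool = False

@dataclass
class User:
    logged_in: bool = False
    # communities whose one-time content warning this user has clicked through
    acknowledged: set = field(default_factory=set)

def can_view(community: Community, user: User) -> bool:
    """Quarantine adds only two conditions on top of normal access:
    being logged in and having acknowledged a content warning."""
    if not community.quarantined:
        return True  # open subreddits are publicly viewable
    return user.logged_in and community.name in user.acknowledged

# A logged-in user who clicks through the warning regains full access.
user = User(logged_in=True, acknowledged={"example_quarantined_sub"})
print(can_view(Community("example_quarantined_sub", quarantined=True), user))  # True
```

As the sketch suggests, a quarantine is a speed bump rather than a wall: it removes anonymous, logged-out viewing but imposes no ongoing restriction on a determined user.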

Subreddits within the manosphere are used to share outlinks to sites and forums that advertise themselves as places where users may speak more freely about gender politics and other matters. In an earlier work, I conducted a content analysis of one dedicated ‘Red Pill’ forum, accessed through outlinks from Reddit, which hosted (among other topics) an uncensored discussion of conspiracies related to the 2020 US presidential election (Henshaw 2021a). Subsequent to that analysis, I returned to analyse approximately six weeks of posts from that same forum, to further explore how it demonstrates both the decentralization of extremist discourse and the cross-pollination of extremist narratives. The analysis of these posts also shows how entrepreneurial users advocating extremist beliefs can navigate content moderation policies in ways that allow them to retain a foothold on mainstream social media platforms while steering users towards more extreme communities.

This particular Red Pill forum (‘The Red Pill’ or TRP) is a standalone site that also maintains what it calls an ‘official’ presence on YouTube, Twitter, and Reddit,10 as well as a Discord channel which is occasionally advertised to users. The forum has a nominal content policy that includes the following items:

  • no illegal images or images depicting illegal activity;

  • no suggestion, incitement, or records of breaking federal or state law;

  • no threats;

  • no promoting violence or injury;

  • no racism or hate speech;

  • no doxxing;

  • no trolling;

  • no profanity in usernames.

While TRP does not specifically disallow women, its terms of service suggest that anyone who does not promote ‘masculine interests’ or who challenges its users and their ‘shared goals’ will be advised to leave the site. In both the terms of service and a ‘frequently asked questions’ document, the founder/moderator makes clear that the site interprets ‘Red Pill’ as an ideology relevant to ‘sexual strategy’ and a discussion of ‘sexual dynamics’ – that is, a foundation specifically concerned with gender. The site appears to sustain itself via referral links on the platforms noted earlier (especially Reddit), a system of bitcoin donations, and a crowdfunding model on a Patreon site. Users of Brave – a privacy browser – can also tip the site in Basic Attention Token, a cryptocurrency. All material on the forum is publicly viewable, and I did not have to create an account or engage with users to follow this content.

Topics covered on the forum include Red Pill ideology, politics, investments, and general chat. As mentioned, I chose to focus on the sub-forum discussing the 2020 US election and the presidency of Joe Biden because of the politically relevant nature of the discussion. In a period covering late April to early June 2021, I monitored about six weeks of activity in this forum, totalling 305 posts and replies with unique contributions from 24 users.11 Though the sub-forum was about US politics, the geographic location of users was unclear. Most appeared to be in the United States, although two identified themselves as being located in Europe. While some analyses of the manosphere have relied on computational text analysis (Papadamou et al 2020; Horta Ribeiro et al 2021), I chose to give a close reading to a smaller number of posts. As noted by Rafail and Freitas (2019), the sharing of information is a substantial part of dynamics in the manosphere. This variously takes the form of the transmission of links, screenshots, embedded videos, and other graphic content that can be difficult to capture in an approach reliant solely on text mining. By reading and coding each post, I hope to better capture the complex nature of the discussion.
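While the coding itself was done by hand, the arithmetic behind the percentages reported below is simple to reproduce. As a minimal sketch – the category labels and data structure here are invented for illustration, not my actual coding instrument – tallying hand-coded posts into category shares might look like this in Python:

```python
from collections import Counter

# Hypothetical hand-coded data: each post carries zero or more
# analyst-assigned category labels (the labels here are illustrative).
coded_posts = [
    {"id": 1, "labels": ["election_conspiracy"]},
    {"id": 2, "labels": ["covid_disinfo", "anti_vaccination"]},
    {"id": 3, "labels": []},  # general chat, no category assigned
    {"id": 4, "labels": ["violent_rhetoric", "misogyny"]},
]

def category_shares(posts):
    """Return each label's share of all posts.

    A post counts once for every label it carries, so shares are
    proportions of total posts (not of total labels), and one post
    can contribute to several categories."""
    total = len(posts)
    counts = Counter(label for post in posts for label in post["labels"])
    return {label: n / total for label, n in counts.items()}

for label, share in sorted(category_shares(coded_posts).items()):
    print(f"{label}: {share:.1%}")
```

Under a multi-label scheme of this kind, category percentages are shares of the total number of posts and need not sum to 100 per cent.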

Unsurprisingly, given the nature of the board and the site in general, the resulting data show that election and anti-government conspiracies were widespread. Approximately one in every five posts on the board referenced either election-based conspiracies or economic conspiracies. Common among the economic theories was the notion that the government is attempting to devalue citizens’ currency by printing more money – a central bank conspiracy with long historical roots, one that is also common in cryptocurrency communities (Golumbia 2016).

You’re being bled out. They will provide as many excuses as necessary about why your dollars no longer buy things. … Every time that something has been contrived there will be a permanent excuse as to why your pathetic paper dollars no longer are good for anything.

Even more salient than these topics, though, were the expression of anti-vaccination, anti-mask views and the circulation of disinformation about the COVID-19 pandemic, which accounted for 23 per cent of posts. At various times, different users (or, sometimes, the same users) alternately claimed that the pandemic was not real, that it was real but planned by elites and/or China, that masks were ineffective, that vaccines were ineffective, or that vaccines were dangerous for reasons including infertility, the potential to kill recipients (especially children), or because they were used for mind control. Anti-vaccination conspiracies were widespread on social media prior to COVID-19, but during the pandemic researchers found that existing networks for anti-vaccine conversations also became conduits for spreading a variety of conspiratorial views including conservative conspiracy theories (Jamison et al 2020).

Don’t listen to propaganda polls on vaccination numbers. They lied about everything else. … Go out and actually talk to people, I have. Most are afraid of the shots.

Many conspiratorial ideas were supported by similar calls for fellow users to do their own ‘research’ or consult the ‘evidence’ provided by others on the forum. Sources cited in the portion of the conversation I followed included programming on Fox News, talk radio shows, podcasts, posts on other social media communities frequented by those with far-right views like Gab, and assorted screenshots of unknown origin. At one point, a user did cite a medical journal article on the supposed ineffectiveness of masks, but this user misrepresented the study’s findings. The article in question argued only that mask mandates were ineffective, and in fact found mask usage effective in slowing the transmission of COVID-19.

While these vignettes show the interplay among conspiracy theories, of greatest interest to this text are discussions of violence and links to misogyny. Considering the nature of the forum, the number of posts on this sub-forum explicitly making misogynist statements was small, only about 6.5 per cent of the total. However, much of this content disparaged or cast suspicion upon female politicians. Female leaders and/or political figures singled out on the forum included: US Vice President Kamala Harris, Congresswomen Nancy Pelosi and Alexandria Ocasio-Cortez, Senator Kirsten Gillibrand, Chicago Mayor Lori Lightfoot, White House spokesperson Jen Psaki, Michigan Governor Gretchen Whitmer, and US Assistant Secretary for Health Rachel Levine. By comparison, the number of male politicians singled out for critique or conspiracies was much smaller. Canadian Prime Minister Justin Trudeau and US President Joe Biden were mentioned multiple times; UK Prime Minister Boris Johnson and US Senator Joe Manchin were also mentioned, but only once each.

Previous studies have noted that female politicians are frequently subject to online harassment or conspiracy theories, with women of colour disproportionately targeted for abuse (IPU 2016; Amnesty International 2018; Jankowicz et al 2021). That seemed to be the case on this forum as well. In particular, the focus on Dr Rachel Levine, a transgender woman serving at the time as an assistant secretary in the Department of Health and Human Services, also pointed to a virulent trend towards homophobic and transphobic speech. Discourse about Levine specifically has been noted as a common theme across extremist channels and forums, and is seen as a narrative related to preoccupations with threats to the gender binary or masculine gender roles (Argentino et al 2021).12 On this particular forum, there were more explicitly anti-LGBTQI+ posts (9.5 per cent of all posts) than there were openly misogynistic posts. The existence of a separate sub-forum for the discussion of gender issues may have affected this balance, as users specifically discussing gender roles might gravitate to those other discussions; yet concerns over LGBTQI+ rights and transgender rights in general seemed a significant preoccupation among the users. Racial issues were another significant topic of discussion (8.9 per cent of posts), including condemnation of Black Lives Matter as a violent group and generally racist or anti-immigrant sentiments.

Just over 6 per cent of the posts I coded made reference to political violence, were an incitement to violence, or referenced a coming or ongoing war. This is a small share of the overall posts but is nonetheless concerning. Just four users authored the majority of posts containing violent rhetoric. In previous analysis of this forum, I noted that users referred to a ‘Civil War’ in the United States and to the country as being ‘at war’ in the wake of the 2020 presidential elections (Henshaw 2021a). This discourse persisted on the forum months later. There were allusions to civil war in the United States, to a coming world war, and to a need to prepare and defend oneself.

Lots of people expect the civil war. … There are already talks of how to cut off the cities the exact moment they try stuff.

First it was the revolutionary war, then it was the civil war, then it was WW2. Next it will be something of equal scale we haven’t seen in 3 generations.

Recourse to violence was justified by this environment of war and by the perceived erosion of traditional security institutions, like the military and law enforcement. Posters repeatedly claimed that the police and military were being purged in advance of some operation to seize property, seize guns, or execute opponents of the government. Some users expressed a fear of being SWATted – that is, harassed or killed by security forces acting under false pretences, presumably at the behest of government agents looking to silence critics.
User 1: We will all get swatted one at a time.

User 2: They don’t have to SWAT all of us, just enough to send a signal.
In the face of these perceived threats, posters justified violent responses, urging one another to arm themselves, stockpile, and be prepared to fight:

It doesn’t matter what Bitcoin is worth or how many millions of dollars you have in the bank if you don’t know how to grow food and your hungry socialist neighbors are rounding up anybody who looks like a productive capitalist and shooting him.

To some posters, anyone with opposing political views was a legitimate target:

No one has any concerns about lining these crazies up against a wall. Even the conservative soccer moms. … If liberals are left in power they will continue to proceed as they have and destroy us until there is nothing left.

Statements like these deploy the discourse of radicalization through dehumanization and the portrayal of existential threat. Again, a small subset of users dominated these discussions, but they expressed these views without visible opposition from others. While I did observe users debating the accuracy of vaccine (mis)information and some economic conspiracies, posts expressing violent rhetoric did not receive a strong rebuke. This points to a systemic failure to moderate the site in any meaningful way. Despite the nominal content policy mentioned earlier, there were clear and repeated instances of hate speech and incitements to violence. This accords with my earlier observations of the same forum, in which participants shared and reproduced images of crimes taking place at the US Capitol on 6 January 2021, another apparent violation of the terms of service. In spite of all this, as noted, this community is allowed to retain a foothold on mainstream social media platforms, with some of the same individuals apparently running those communities and using those sites to advertise links to this offsite location.

These examples serve to illustrate how online misogyny and extremist discourse – even beyond gender-based extremism – are intertwined. They further show how misogynistic communities beyond incels demonstrate violent propensities. Debates over whether to more closely regulate the manosphere, in particular on major social media platforms, are often reduced to free speech issues. The foregoing examples show the misleading nature of these arguments. To begin with, social media is not the public square: platforms are allowed to create and enforce terms of service. There is also an expectation of good faith among platforms and their communities of users. When users access social media sites with the purpose of advertising and recruiting for extremist outlets, that should constitute a clear violation of most platforms’ terms of use. Indeed, it seems that a number of manosphere communities in places like Reddit have already been on the receiving end of warnings and/or measures like quarantines. Rather than properly regulating their communities in response to these sanctions, though, rogue actors have continued to find new ways to violate the spirit – if not the letter – of these policies. Where content creators at offsite locations create nominal terms of service, then consistently fail to enforce them, that is clearly a bad faith act. Finally, the very notion of creating sites as ‘free speech’ havens that promise no censorship, but then a priori banning large groups of people – including women, the LGBTQI+ community, and/or anyone who disagrees with the dominant politics – is a contradiction. None of this is to say that men should not have distinct spaces online; indeed, many such communities do exist without devolving into violent discourse. But the experiment in allowing misogynistic groups to thrive in a largely unregulated environment has evidently failed to rein in violent rhetoric and mis- or disinformation. Given the propensity for violence demonstrated by incel communities, there seems to be ample justification for discussions about more closely regulating the manosphere as a whole.

Responding to extremist gender-based violence

How can the problem of gender-based extremism be dealt with? The debate on this subject is also distorted through the fun-house mirrors of online discussions. As noted, the call for free speech online has been used as a rallying cry by those wishing to forestall any discussion of content moderation. In a crowded marketplace of forums, messaging apps, social media providers, and file-sharing or media-sharing sites, taking any action that could be branded as censorship becomes both an economic and a moral decision on the part of technology companies. Too often, financial considerations have guided efforts to combat online extremism. A 2021 audit of the Global Internet Forum to Counter Terrorism (GIFCT), a joint initiative established in 2017 by Facebook, Twitter, Microsoft, and YouTube and (as of this writing) consisting of 17 participating member companies, found that the group’s efforts too often focused on the low-hanging fruit of targeting high-profile extremist organizations. Islamic communities online, for example, were aggressively policed by platforms for potential extremist content, while right-wing and White supremacist groups were often overlooked. This problem was further complicated by the sheer number of far-right communities as well as the proliferation of problematic content among individual users rather than easily identifiable groups (BSR 2021). This focus on high-profile activity was also seen as a symptom of Western biases among technology companies, with most GIFCT members concentrated in the United States and/or Europe. The audit concluded that the aim of combating online extremism on a global scale could not be accomplished without buy-in from digital providers outside of the United States and Europe, suggesting that tech companies might also benefit significantly from capacity-building efforts (BSR 2021). The P/CVE community writ large similarly struggles to craft solutions to extremism that avoid stereotypes, with critics noting that efforts lean heavily on Islamic extremism while ignoring other violent ideologies. The role of women in spreading extremist views is another overlooked area, as young men are the primary referent of many initiatives (Winterbotham 2018; Rothermel 2020). Much more remains to be said about this issue, and Chapter 6 explores some spaces where innovators – many of them women – are involved in novel approaches to combat online extremism and harassment. Before that, though, this analysis turns to a related issue: how states also deploy technology for gendered harms.