Dictatorship and Beyond: Rebellion and Refugees in Central America

Written by Jack Bennett.

With the rise of neoliberal globalisation from the 1970s, national boundaries purportedly became more fluid to allow for the greater movement of people and commodities. For economic and political refugees from Central America, however, these borders have proved far less permeable. Growing inequalities in connectivity created uneven mobility of labour and capital. Against the backdrop of the Cold War in Latin America, continuous waves of refugees can be traced, fleeing first the political violence of counter-insurgency terror and then the economic and social violence of neoliberal insecurity and the drug trade. The civil wars in El Salvador and Guatemala during the 1980s, definitive flashpoints of the Latin American Cold War, saw refugee crises mushroom in scale. Following the end of the Cold War and the peace processes that concluded these civil wars, migration and asylum remained major issues throughout the Americas. 

Importantly, Latin America was a transformative ‘hot’ region in the global Cold War. The roots of these ideological-military conflicts can be traced back to the Mexican Revolution of 1910, a conflict which raised fundamental questions regarding land ownership, political liberty and divisions of labour through an internal ideological struggle over how to overcome deep socio-economic inequalities and forge a more inclusive nation. This marked the beginning of a century shaped by conflicts over how to build more modern nations and how to engender both national and international equality. Latin America critically became the ‘backyard’ of the United States in the broader, global conflict with the USSR. 

It was in Latin America that the failure of electoral and democratic approaches to produce this modernisation of sovereignty prompted the eruption of armed conflict, highlighting local dynamics in global contexts. For instance, in Guatemala in 1954 a CIA-supported military coup overturned the previous decade of democratic reform in order to prevent the encroachment of communism. Contrastingly, in Cuba in 1959 the 26th of July Movement overthrew the populist dictatorship, declaring a socialist revolution and pursuing the development of a more inclusive nation. Emerging from these conflicts was a paradigmatic change in the movement of people throughout the Americas as migrants, refugees and asylum seekers. In 1967, the U.S. signed the United Nations Protocol Relating to the Status of Refugees, whose definitions were incorporated into domestic law through the Refugee Act of 1980, just as the Cold War was heating up in the region. The status of ‘refugee’ was shaped by national security anxieties and public perceptions in the United States, as well as by gender. 

The coup of 1954, as previously mentioned, eroded the previous decade’s democratic developments. During that decade Guatemala had experienced significant agrarian reform: despite the country’s heavy reliance on the banana cash-crop trade, the coerced, indentured labour of indigenous groups was abolished. The coup ended this democratic road to modernisation. In its aftermath, elite Guatemalans supported the United States, and what little resistance there was to the external imposition of political order was swiftly suppressed. The people of Guatemala were thus faced with the reality that bringing reformers to power through elections was fundamentally unworkable given the global geo-political climate. Young people took up arms in response, developing strategic military operations of resistance in rural areas. What emerged was the protracted and bloody Guatemalan Civil War (1960–1996), a conflict which can be divided into two discrete phases. The first, between 1960 and 1972, took place in the eastern regions of the country, dominated by non-indigenous Ladino populations, whose rebellion ultimately failed against a Guatemalan military receiving large-scale funding from the U.S. The second, in the western regions from 1972 to 1996, saw the predominantly Maya indigenous population take up the cause of resistance to authoritarian military dictatorship. It was during this second phase that guerrilla fighters united under the Guatemalan National Revolutionary Unity (URNG), founded in 1982. The Guatemalan military launched a brutal, scorched-earth and genocidal counter-insurgency campaign from 1981 to 1982, characterising all indigenous populations as threatening subversives. Initially, the Guatemalan military received economic and military support directly from the U.S.; however, with increasing domestic American opposition to the conflict, Israel came to act as an intermediary for such support. 
Interestingly, President Clinton later apologised for the actions of the US in Guatemala. The conflict transformed once again from 1985, with the transition towards democracy, through to its conclusion in 1996 with the signing of the Peace Accords. The war had taken the lives of 200,000 Guatemalans while leaving a further one million displaced as refugees, the majority of whom sought safety in the United States as ‘political refugees.’ 

During the 1970s El Salvador experienced a similar quashing of democratic movements through military intervention. Decades earlier, in 1932, the peasant uprising led by Farabundo Martí had been ruthlessly suppressed by the state, preventing the emergence of a mass revolution and crippling potential labour unionisation and activism, though Martí became a martyr figure in its aftermath. By 1972, El Salvador was a country of 7 million people with dense population clusters and extreme socio-economic divergence. The poorest members of society owned just 2% of all land, while the most productive regions were held by the wealthiest so-called ‘Fourteen Families,’ and the economy rested on volatile cash-crop foundations. With inflation at 60% and unemployment at 30%, a coalition of opposition parties under the Union of National Opposition (UNO), led by José Napoleón Duarte, emerged to oppose the oligarchic candidate. Significantly, Duarte won the 1972 election by 72,000 votes; however, the electoral commission overturned the results and, despite a complete lack of evidence, declared the conservative candidate Colonel Arturo Armando Molina the victor by 100,000 votes. Duarte was arrested, tortured and exiled to Venezuela, where he remained until his return to El Salvador in the late 1970s. 

In the wake of this electoral fraud, violence and decapitation of the people’s democratic initiative, peasants and urban workers, with the support of the Catholic Church, became more politically organised during the mid-1970s. Protests were held in search of democratic political alternatives. In direct retaliation to these political impulses, a series of high-profile assassinations were carried out by the Salvadoran military, including that of Archbishop Óscar Romero on 24 March 1980; in the same year four U.S. religious aid workers were raped and murdered by the military. In 1980, the Farabundo Martí National Liberation Front (FMLN) was founded, uniting the opposition into a popular front with a broad set of political principles. In order to prevent the spread of communism in Latin America, the U.S. government had provided approximately $200 million in military support to the Salvadoran military dictatorship by the mid-1980s. Crucially, the repression culminated in a twelve-month bloodbath across 1980–81, in which military ‘death squads’ murdered an estimated 30,000 civilians. 

This period of military violence, hunger and political repression catalysed the refugee and migrant crisis which engulfed El Salvador. Most migrants entered the US illegally, primarily settling in Los Angeles; by the mid-1980s, an estimated 500,000 Salvadorans had migrated to the United States. Furthermore, in 1984 the Department of Health reported that disease rates amongst these migrant communities were extremely high, alongside mental health problems related to violent displacement from home territories. The crisis in El Salvador also elicited a domestic American response, namely protests against U.S. support for violent Latin American military dictatorships. 

Peace was brought to El Salvador in 1992 and Guatemala in 1996. In Guatemala, the URNG was disarmed and the state was declared responsible for the vast majority of human rights violations, with only 1% of the over 200,000 deaths attributed to guerrilla actions. Likewise, in El Salvador the FMLN was disarmed, and the United States provided $4 billion in aid to support the country’s recovery, even as it was criticised globally for its complicity in facilitating such brutal military dictatorships. 

In post-peace Central America, governments have pursued the promotion of free-trade economic zones in which transnational corporations can invest in order to instigate greater socio-economic development. However, the heavy emphasis on textile production has drawn criticism of these states for exploitative working conditions and the high mobility of capital, alongside rising levels of poverty, crime and corruption – not to mention the invigorated international drug trade and the emergence of cartel-dominated narco-states such as Colombia and Mexico. These various factors have produced new waves of migration, both to North America and globally, in the decades since peace. 


Carothers, Thomas. In the Name of Democracy: U.S. Policy Toward Latin America in the Reagan Years. University of California Press, 1993. 

Long, Tom. Latin America Confronts the United States: Asymmetry and Influence. Cambridge University Press, 2015.

McClintock, Michael. The American Connection: State Terror and Popular Resistance in Guatemala. Zed Books, 1985. 

McClintock, Michael. Instruments of statecraft: U.S. guerrilla warfare, counterinsurgency, and counter-terrorism, 1940–1990. Pantheon Books, 1992. 

Menjivar, Cecilia, and Nestor Rodriguez, eds. When States Kill: Latin America, the U.S., and Technologies of Terror. University of Texas Press, 2005.

Palmer, David Scott. U.S. Relations with Latin America during the Clinton Years: Opportunities Lost or Opportunities Squandered? 2006.

Civil War and United States Humanitarianism in Nigeria

Written by Jack Bennett.

Humanitarian intervention has become an accepted part of international relations, dominating global current affairs and news headlines from the Balkans in the 1990s to the current crisis in Syria and the wider Middle East. The origins of modern humanitarianism can be traced back to the civil war which erupted in Nigeria in the decade following decolonisation and independence. 

During the 1960s, Nigeria was of huge importance to Africa: population estimates suggest that 20% of the entire continent resided in the country. From the perspective of the United States and the Kennedy administration, Nigeria was seen as stable, moderate and supportive of western ideological values in the Cold War geo-political climate. For instance, Edward Hamilton, who served in both the Kennedy and Johnson administrations, saw in Nigeria hope for Africa as a whole. Beneath the surface, however, ethnic tensions simmered throughout the 1960s, boiling over in 1966 with two coups that rocked the country. With tensions unresolved, Biafra seceded on May 30th 1967. Interestingly, this conflict initially fell under the radar of international politics, with Charles L. Sanders of the magazine Jet contemporaneously declaring it a war no one cared about. Biafrans attempted to elicit international support for their cause through the global press and public relations initiatives, but to no avail; even propagandistic journalism endorsed to generate international attention went unnoticed. The British television journalist Alan Hart, for example, initially saw his own visit to Biafra as a waste of time, stating that ‘we saw nothing worth seeing.’ Exhausted and frustrated, Hart was preparing to depart back to Lisbon when the Irish Catholic priest Kevin Donaghy revealed to him the true extent of the human tragedy gripping Nigeria. It was through this journey into the bush that Hart said: ‘it was through the Holy Ghost Fathers that I was introduced to the reality and the horror and the nightmare of Biafra.’ The scene that met Hart was one of starvation, malnutrition, disease and death on an unprecedented scale, affecting millions during the conflict. 

The response of the United States to this humanitarian crisis operated on two levels. First, the general population: over two hundred organisations formed across the country, ranging from the actions of school students to national initiatives such as the American Committee to Keep Biafra Alive, formed in New York City by ex-Peace Corps volunteers and students. Second, the U.S. government under President Johnson’s administration maintained a position of non-involvement. The humanitarian crisis, however, was to change everything. In response to domestic pressure, the Johnson administration sold eight relief airplanes to humanitarian organisations in December 1968. Edward Hamilton, in fact, wrote to the then Secretary of State Dean Rusk declaring that, because of the humanitarian crisis, the slate had been wiped clean, allowing for greater U.S. involvement. In January 1969 Richard Nixon became President; he was seen as sympathetic to the Biafran cause, having declared the conflict an act of genocide during his election campaign. Upon assuming office, however, Nixon adopted the same strategy as Johnson, separating politics from relief and the war from humanitarianism. This was most clearly manifested in the appointment of Clarence Ferguson as special coordinator for relief to civilian victims of the Nigerian Civil War, who came to represent the duality of public humanitarian support and U.S. foreign policy objectives. 

The impact of the United States on humanitarianism remains an area of continual debate, with perspectives ranging from the critical, which see it as an act of neo-colonial interventionism, to the more optimistic, which see actions contributing to greater political, economic and social stability and security. During the Nigerian Civil War, the U.S. provided more aid than any other country, accounting for around 75% of the total funding and tonnage of food sent to Biafra. At the same time, the problem of sovereignty was raised: the issue of providing humanitarian relief against the will of both the Nigerian government and the Biafran leadership. This revealed that politics could not in fact be separated from relief; the two were intimately interwoven, a Gordian knot which had to be dealt with simultaneously. Furthermore, events intervened that further impeded efforts to unravel this complex situation. In early 1969, Ferguson felt he was making clear progress towards resolving the problem, but in May of that year a Swedish count conducted a bombing mission in Nigeria that fundamentally altered the humanitarian landscape. 

Carl Gustaf von Rosen was of aristocratic descent; he had led efforts to support the Jews during the Second World War and fought in support of Ethiopia against the Italian invasion of Abyssinia in the 1930s. But it was during the Nigerian Civil War that he truly rose to prominence. Initially supporting humanitarian relief, von Rosen soon concluded that this was not enough; instead he actively supported the creation of an independent Biafran state, and it was this that led him to conduct a bombing campaign against the Nigerian army in 1969. Nigeria retaliated quickly, furiously and violently, making no distinction between the militarised efforts of von Rosen and the relief initiatives of the Red Cross. One month after the attack, the Nigerian army shot down a Red Cross plane. This brought a major difficulty to the desk of Richard Nixon, as the United States’ principal partner in humanitarianism was the Red Cross. Concomitantly, the Red Cross was facing its own challenges, divided into two factions: one supporting revolutionary humanitarianism, providing relief regardless of the diplomatic position of the Nigerian government; the other upholding the Red Cross principle of respecting state sovereignty. Nixon, therefore, had to pursue a dual reconciliation, both between the Red Cross factions and between the Red Cross and Nigeria itself. The task of coordinating such an agreement, in order to allow humanitarian relief operations to continue, fell to Ferguson. It was never accomplished: from June 1969 through to the end of the war in January 1970, the Red Cross ceased operations, with an agreement only being established as the war concluded. 

The Nigerian Civil War and the humanitarian intervention it prompted left important legacies for similar international actions during the remainder of the twentieth century. Humanitarianism is founded upon three central principles: impartiality, independence and neutrality. These founding tenets have been manipulated and transgressed throughout the history of global relief efforts. The very notion of independence from nation states is skewed by the reliance of non-governmental organisations on diplomatic donations and economic support – in particular, the clear inextricability of the U.S. government from the American Red Cross during this period. All three principles were challenged during the Nigerian Civil War, forcing humanitarian organisations into a process of self-assessment and respect for state sovereignty. During the war, the Red Cross declared that it would not deliver aid to Biafra without an agreement with the Nigerian government permitting such intervention. Prior to the war, state sovereignty had been breached in a number of relief episodes around the world, but the Nigerian Civil War spotlighted these transgressions more forcefully. Biafra brought interventionist humanitarianism into sharp focus within the landscape of post-colonial politics, producing a greater consensus on the need for intervention in times of crisis. Today’s humanitarian ‘responsibility to protect’ is an outgrowth of the controversy surrounding the Nigerian Civil War of the 1960s. This change in approach is likewise demonstrated by the organisations which emerged in the war’s wake, in particular Médecins Sans Frontières (MSF) in 1971. MSF grew from the French doctors present in Biafra and their experiences there, as well as in Bangladesh in 1971, becoming an organisation which abandoned pretences of neutrality at a time of crisis, instead bearing witness and speaking out against those who perpetrated such crises. 

The response of states to humanitarian action is of huge interest and assumed a number of facets in the wake of the Nigerian Civil War. The response of the United States illuminates the changing circumstances of the 1960s and the use of humanitarianism under successive administrations. What is revealed is not an international project but a domestic one, in which humanitarianism was used not to fulfil international aims but to meet domestic demands, pressures and movements. Additionally, the utilisation of humanitarian aid by secessionist movements within developing countries is extremely enlightening. For example, Gourevitch has explored how humanitarian crises became a way to legitimise struggles, and how aid was then used to gain international support for those causes – something which most clearly took place during the Nigerian Civil War. Biafra, which failed to amass international support independently at the outset of the conflict, only achieved greater recognition after it was declared a humanitarian crisis by the international community. Humanitarianism, in the aftermath of Nigeria, has thus become a means of political legitimation.


Draper, Michael I. Shadows: Airlift and Airwar in Biafra and Nigeria 1967–1970.

Heerten, Lasse; Moses, A. Dirk (2014). “The Nigeria–Biafra war: postcolonial conflict and the question of genocide”. Journal of Genocide Research. 16 (2–3): 169–203.

O’Sullivan, Kevin (2014). “Humanitarian encounters: Biafra, NGOs and imaginings of the Third World in Britain and Ireland, 1967–70”. Journal of Genocide Research. 16 (2–3): 299–315.

Stevenson, John Allen. “Capitol Gains: How Foreign Military Intervention and the Elite Quest for International Recognition Cause Mass Killing in New States”. Political science PhD dissertation, accepted at University of Chicago, December 2014.

The League Against Imperialism: Interwar Anti-Colonial Internationalism

Written by Lewis Twiby.

In 1955, 29 newly independent states in Asia and Africa met in Bandung, Indonesia in order to establish international solidarity between former colonies. The Bandung Conference was partially organised by Indonesian president Sukarno, who opened the Conference referencing a movement from around thirty years prior: 

I recall in this connection the Conference of the ‘League against Imperialism and Colonialism’ which was held in Brussels almost thirty years ago. At that Conference many distinguished Delegates who are here today met each other and found new strength in their fight for independence.

The League against Imperialism (LAI), which first met in Brussels in 1927, has often been overlooked in the history of internationalism and anti-colonialism – often it is regarded as a ‘failure’ or a front for the Comintern. The LAI was not the first international anti-colonial movement – in 1900 the Pan-African Congress was held in London, and in 1924 and 1927 the Pan-Asian People’s Conferences were held in Nagasaki and Shanghai respectively – but it was the first attempt to create a truly global anti-colonial movement. Delegates met to discuss and build solidarity on a wide range of topics, from resisting the Italian invasion of Ethiopia in 1935 to supporting campaigns against Jim Crow laws in the US South. Although often relegated to a footnote in history, the LAI set the stage for the internationalism of the post-war era.

The idea for an anti-colonial international came from two communists based in Berlin, Willi Münzenberg and Virendranath Chattopadhyaya (‘Chatto’). In the early 1920s, inspired by Lenin’s Imperialism, the Highest Stage of Capitalism, the Third International, later known simply as the Comintern, adopted an anti-imperialist line. The Kuomintang (KMT) of China, although not a communist party, was brought into the Comintern as the main anti-imperialist party – a decision quickly reversed when Chiang Kai-shek massacred the party’s communist members. Münzenberg and Chatto wished to build an international movement to support anti-colonialism independent of the Comintern, something which would prove beneficial for the LAI after Stalin’s consolidation of power in 1928. According to Vijay Prashad, the name ‘League against Imperialism and Colonialism’ was chosen specifically to attack the League of Nations. The disintegration of the German and Ottoman Empires following the First World War meant that their former possessions outside Europe were handed to the victorious powers under the euphemistically named ‘mandate system’. Woodrow Wilson’s call for national ‘self-determination’ was applied only to Europeans, not the colonised.

Meanwhile, the LAI needed somewhere to hold its conference, and here we see the paradox of pre-war internationalism. While post-war international conferences were held in the formerly colonised world – Bandung (1955), Accra (1957), Dar es Salaam (1974) – the majority of anti-colonial conferences before the war were held in the metropoles. This is reflected in the three cities shortlisted for the first LAI conference: Berlin, Paris, and Brussels. A key factor in why conferences were held in Europe, Japan, or the US was a practical one. As Sukarno put it in 1955, ‘It was not assembled there by choice, but by necessity’. Draconian restrictions on movement in the colonies limited the ability of nationalists to travel and build solidarity movements, but these limitations were not as strict within the metropole. Furthermore, many nationalists were in the metropole for education, or on missions to raise awareness of the plight of the colonised. Jawaharlal Nehru of the Indian National Congress, the future first prime minister of India, became one of the founders of the LAI as he happened to be in Europe raising support for Indian independence.

One of the key flaws of the LAI was its reliance on European goodwill. Berlin and Paris both barred the LAI’s first conference from being held: Berlin due to Münzenberg’s links to the Comintern, and Paris fearing that it could inspire revolt in its colonies. Several delegates from Britain were even detained upon arrival in Belgium – so much was the LAI at the whim of Europeans. Nevertheless, an international delegation managed to convene in Brussels in February 1927. Prashad has shown why Brussels was chosen: the brutal exploitation of the Congo under Leopold II had highlighted the violence of colonial rule and sparked an international movement against Leopold’s personal fiefdom. Delegates came from across the world, including Nehru (India), Albert Einstein, Rosamond Soong Ching-ling, widow of the revolutionary Sun Yat-sen (China), Mohammad Hatta (Indonesia), Lamine Senghor (Senegal), the Independent Labour Party MP Fenner Brockway, and Frida Kahlo’s husband Diego Rivera (Mexico). Although the League was closely aligned with the Comintern and many communists attended, so too did non-socialist nationalists – such as the African National Congress of South Africa, still a fairly elite party in the 1920s.

Such a wide range of delegates meant that the LAI covered many issues facing colonised peoples across the world, some of which had largely been ignored by anti-imperialist movements. For example, Latin American delegates, inspired by the words of Argentina’s Juan Bautista Justo denouncing his country’s being ‘reduced to the status of a British colony’ in 1896, persuaded the LAI to support Latin American resistance to British and American exploitation. Because Latin America had won its independence in the early nineteenth century, it had largely been disregarded in solidarity movements. Most famously, the LAI threw its support behind those accused in the Meerut Conspiracy Case, which, with its intersection of labour and colonial rights, symbolised the League’s raison d’être. In 1929 several trade unionists in Meerut were arrested by the British authorities on charges of working with the Soviet Union to overthrow British rule in a socialist revolution. Through the LAI, protestors campaigned in Britain for their acquittal and built ties with the Communist Party of India and Indian lawyers in order to defend the accused. The case lasted until 1933; the accused were initially found guilty, but the convictions were later overturned.

However, the LAI was constantly dogged by controversy. Centre-left parties – including, in an ironic twist of history, Karl Marx’s grandson – denounced the League as a front for the Comintern. The significant presence of communists meant that the LAI was tied to the Comintern, but the accusation that it was a front, which still persists in the historiography, ignores the agency of those who took part in the LAI. It also ignores the deep divide between the LAI and the Comintern during the 1930s, as the rise of fascism and Stalin’s paranoia created a distinct shift in policy. Communist parties in Europe were told to adhere doggedly to Soviet demands as Stalin aggressively implemented his ‘socialism in one country’ policy. Many communists were murdered upon visiting, or while living in, Moscow, as Stalin wanted firm commitment to protecting the Soviet Union – any sign of independence was met with expulsion from the Comintern, which happened to Münzenberg, or execution, which happened to Chatto. Priyamvada Gopal has highlighted how Stalin shifted the Comintern’s policy of anti-imperialism to focus only on the ‘fascist empires’ of Italy and Japan, and to ignore ‘democratic empires’ like Britain and France. The LAI naturally loathed this policy – the Trinidadian Marxist George Padmore pointed out how the USSR tolerated ‘colonial fascism’ as it did not directly threaten Stalin’s power.

The 1930s would spell the end of the LAI, although it managed to last until 1936. Due to power imbalances the LAI was reliant on organising in Europe, and this severely limited its ability to operate. Britain, France, and the Netherlands were particularly keen to curb the League’s activities within their borders, due to the number of delegates who came from their colonies. Ties to anti-colonial movements within the metropole were further seen as a Bolshevik plot – colonised peoples, and their allies, were seen as unable to operate without orders from Moscow. Consequently, Münzenberg and Chatto largely based the League in Germany, but the rise of the Nazis wiped out the German Left by 1934. Münzenberg fled to France, while Chatto fled to Moscow, where he was executed in 1937; Münzenberg was later expelled by the Comintern and found dead during the German invasion of France. Furthermore, two major events firmly undermined the League’s ability to construct anti-colonial solidarity. Chiang Kai-shek’s destruction of the Canton Commune in December 1927 caused the KMT to be expelled, but, without a major Chinese ally, the LAI was unable to build support against Japan’s invasion of Manchuria in 1931. Palestine also hurt the League. Arab nationalists, the labour Zionist group Poale Zion, and the Communist Party of Palestine (PCP) bickered over the Palestinian mandate – nationalists opposed Zionism, Zionists wanted a Jewish homeland, and the PCP, under Daniel Averbach, saw both nationalism and Zionism as tools for Britain to divide the people. The LAI eventually voted to eject Poale Zion, but as a consequence was unable to prevent the sectarian violence which tore Palestine apart throughout the 1930s and 1940s.

The decline of the LAI after only a decade has often led to it being called an abject failure. Yet while the LAI did not have an immediate, tangible impact on anti-colonialism, its very existence was what made it important. It was a first attempt at creating solidarity across continents – LAI supporters in London took over Trafalgar Square in 1931 in solidarity with those accused in the Meerut Case. This is keenly shown by those who attended. Lamine Senghor said that the existence of the League spoke louder than anything else: ‘But beware, Europe! Those who have slept long will not go back to sleep when they wake!’ The seeds of post-war internationalism had their roots in the League against Imperialism.


Belogurova, A., ‘Networks, Parties, and the “Oppressed Nations”: The Comintern and Chinese Communists Overseas, 1926–1935’, Cross-Currents: East Asian History and Culture Review, 24, (2017), 61-82.

Gopal, P., Insurgent Empire: Anticolonial Resistance and British Dissent, (London: 2019). 

‘Imperialism, the Highest Stage of Capitalism’, in Lenin, V., Selected Works, (Moscow: 1963), 667-766.

Louro, M., ‘“National Revolutionary Ends and Communist Begins”: The League against Imperialism and the Meerut Conspiracy Case’, Comparative Studies of South Asia, Africa and the Middle East, 33:3, (2013), 331-344.

Petersson, F., ‘Hub of the Anti-Imperialist Movement: The League against Imperialism and Berlin, 1927-1933’, International Journal of Postcolonial Studies, 16:1, (2013), 49-71.

Prashad, V., The Darker Nations: A People’s History of the Third World, (New York: 2008).

Image: https://imperialglobalexeter.com/2014/10/20/prelude-to-bandung-the-interwar-origins-of-anti-colonialism/

Bitter Weed: Tea, Empire and Everyday Luxury

Written by Jack Bennett.

Unprecedented unrest erupted in Boston on December 16, 1773, when the Sons of Liberty protested against mounting British taxes by dumping 342 chests of tea, with a value of around $1 million, into the harbour. The Boston Tea Party of 1773 became a pivotal event in the history of a nation and an empire, reverberating across the globe. At the centre of these transformative changes was a humble commodity: tea. John Adams described the act of rebellion which sparked the American Revolutionary War as: ‘the most magnificent Movement of all…This Destruction of the Tea is so bold, so daring, so firm, intrepid and inflexible, and it must have so important Consequences and so lasting, that I cannot but consider it as an Epocha in History.’ This reveals the fluidity and development of national identity, expressed through American conspicuous non-consumption as a political and mercantilist protest. The global production, trade and consumption of tea funded wars, fuelled colonisation, instigated political rebellion and defined cultural refinement – the effects of which are with us even today.

Tea was both a product and a driver of global connections. Economic profitability determined British imperial expansionism and the manipulation of ecological space, social hierarchies and labour systems. Cultural cartographies of conquest developed alongside the practice of mapping the exotic. The Canton system (1757-1834) in China imposed strict regulations on foreign trade, which catalysed British colonisation in pursuit of hegemony over the global tea trade. This was conducted through the British East India Company, which ruthlessly extended its military-economic influence into India. This private company’s control of India provided the ideal landscape for the implementation of plantation agriculture using native, indentured labour, producing a socially and ecologically exploitative economic system. Cultures of discovery, success and signification emerged as tea acquired new layers of cultural, political and economic meaning in local contexts.

During the 1880s, a proto-mass consumer society emerged throughout the British Empire, Europe and the United States, fuelled by advertising for tea. These advertisements conveyed British imperial nationalism and the ‘orientalism’ of Indian plantation labour, presenting the British Raj as ‘our Asia’ by reducing the exoticism of tea and domesticating it as a product of Empire, while erasing its origins and thereby naturalising it into a national identity. This marked a significant shift in the representation of tea production, from Chinese origins to Indian, informed by racial hierarchies which contrasted Indian purity with Chinese inferiority. Importantly, advertisements expressed the interests of individual corporations, commercial rivalries and imperial prosperity in the developing proto-consumer societies and the rapidly globalising economic market dominated by imperial powers in the nineteenth century.

Tea became a symbol of civilisation and domesticity. As the moral anti-tea indignation of the eighteenth century gave way in the nineteenth to Britain’s global economic ascendancy, tea suffused social structures, becoming synonymous with progress and order, civility and industriousness. Markman Ellis argues that there was a transition from “oriental exoticism to Victorian domestication” in tea production and consumption. Demand for tea crossed into the middle and urban working classes over the eighteenth and nineteenth centuries. Critically, this reveals the mutual reinforcement between the cultural economics of consumption and the mercantile and administrative impetus for Indian colonialism. Consequently, tea came to define the very concept of everyday luxury in a proto-globalising Europe: as imports and production increased under British imperial control, the accessibility of tea to the lower classes flourished. Class anxieties surrounding tea, however, proliferated, as the personal sphere became progressively interconnected with the public and political. The incorporation of tea into the quotidian delineated labour from leisure, domesticating exotic wildness. Tea, therefore, reveals a fluidity across class boundaries, a developing social universality as a vehicle of national character and an apparatus of social routinisation.

Rituals of tea consumption vitalised emerging conceptualisations of femininity and leisure. Crucially, a private sphere of female influence emerged, defined by Chatterjee as ‘new interior worlds.’ The gendering of tea as a feminine product of consumption brings into question the integrity of domestic and family life in early modern Europe, the United States and across empires. Intriguingly, in tea’s geography of origin, China, the commodity was an agent of male elite sociability, in contrast to its female orientation and fashionability in Europe and the United States. The ceramics associated with tea, moreover, contributed to the defining of social status, reinforcing tea’s femininity. This was manifested, for instance, in the literary and popular-cultural ‘scandal around the tea table,’ something inherently negative during the eighteenth century. Meanwhile, in Japan the tea ceremony was reinvigorated after the Meiji Restoration of 1868 and became culturally valued, encapsulating the spirit and tradition of Japan within a nationalised, modernising project and global imperial assimilation. By foregrounding the interconnection between gender and tea, women assume an intrinsic role in the development and fluctuations of imperial trajectories. Fundamentally, tea encapsulated the increasingly modern consuming pleasures of discovery.

Europeans adopted, appropriated, and altered Asian tea culture in order to construct an expansive consumer demand for tea in Britain and other modernising global economies, along with an intensive plantation system of agriculture and labour in Asian and African colonial territories. The development of class power structures around tea consumption was distinctive, with the commodity’s new fashionability embroiled in discourse regarding the exploitative nature of empire. From international advertising to political lobbying, the historical emergence of tea as a commodity of global connections underpins modern frameworks of political, public and international economic interconnection. Tea is ultimately a complex social and cultural commodity, reflective of particular contexts yet interwoven with global flows, informing conflicts and national imperial identity.


Image: Nathaniel Currier, The Destruction of Tea at Boston Harbor (1846). 

Chatterjee, Piya. A Time for Tea: Women, Labour and Post-Colonial Politics on an Indian Plantation. Durham, NC: Duke University Press, 2001.

Ellis, Markman, Richard Coulton and Matthew Mauger. Empire of Tea: The Asian Leaf that Conquered the World. London: Reaktion, 2015.

Ramamurthy, Anandi. Imperial Persuaders: Images of Africa and Asia in British Advertising. Manchester: Manchester University Press, 2003.

Red Dawn Rising: Global Communism, Anti-Colonialism and Freedom in India

Written by Jack Bennett.

The twentieth century, in the shadow of the 1917 Russian Revolution, became a century of communist revolution, conflict and collapse: from the formation of Communist China in 1949 under Mao Zedong, following a long conflict with nationalist forces; to Castro taking control of Cuba in 1959, giving communism a foothold in the western hemisphere; to the fall of Saigon in 1975, the high-tide mark of communism globally. By focusing on the emergence of communism in India, in relation to anti-colonial independence movements during the first half of the twentieth century, both indigenous and global currents are revealed, which produced international conversations and deep engagement across state structures, operating in transnational networks.

The common cause and solidarity between subordinated groups fighting for self-determination and independence from dominant international powers reveals the correlation between international communism and transcolonial anti-colonialism. Kris Manjapra argues that this highlights the ecumenicalism of communism during this period. Traditionally, historiography regards Indian communism as intrinsically related to the USSR’s economic and intellectual support. Manjapra, rather, posits that communism provided a collection of symbols for interpretation, creating a global intellectual melting pot. This effectively breaks down the dichotomy between global and local political and social revolutionary impulses. 

The Communist Party of India was formally established at Kanpur in 1925, having initially been formed in exile in 1920 under M.N. Roy, who had embraced communism in revolutionary Mexico. Roy advocated the international universalism of working-class struggles over national concerns. Alonso analyses Roy’s political work in Mexico from 1918 to 1920, arguing that the intellectual revolutionary climate there not only saw Roy become a communist, but that he and his colleagues, concerned with the universal struggle of the working class, dismissed the ideas about national identity brewing during the Mexican Revolution and its aftermath. Attending the Second Congress of the Third International in Moscow in 1920, Roy came into conflict with Lenin over the attitude that Communists should take towards the Indian National Congress. This left Indian Communism in disarray during the 1920s, as revealed by the British show-trial of its leaders in the Kanpur Conspiracy Case of 1924. It was not until the 1930s, with the dissolution of Gandhi’s campaign of civil disobedience, that Communism truly took hold and became a political force in India. From its inception, the nucleus of communism in India was incapable of exploiting the currents of revolutionary sentiment and the crisis that enveloped India following the First World War.

International communism during the 1920s, it is important to emphasise, was not a single monolith, but was marked by fracture and controversy that influenced global colonial politics. This provides a cosmopolitan vision of South Asia moving beyond geographic borders. International communism fundamentally becomes a transnational intellectual concept, but one focused less on universalism than on the spaces between ideas operating on local, national and global scales. Manjapra sees in Roy’s work a form of cosmopolitanism that sought both ‘autonomy and solidarity’. In 1922, Roy penned India in Transition, bridging and amalgamating Russian, German and Bengali discourses about temporal progression and eruptive change within a global context of post-war avant-gardism, illuminating his belief in a ‘radical notion of solidarity that aimed at radical identification – not just affiliation – with other liberation projects worldwide’. Roy saw no contradiction in championing independence and interdependence, swaraj and solidarity, within a larger global community.

Between 1919 and 1925, the Comintern’s West European Secretariat (WES) in Berlin provided a de-centralised metropole, upheld by Soviet communicative and organisational infrastructure, which conferred political legitimation and became a laboratory for anti-colonial revolutionary experimentation and the coalescence of independence movements. For example, 400 Indians utilised the network from 1921 to 1923; the League Against Imperialism was established in 1927 and the Indian Independence League in 1928; while Indian communist envoys were sent to China during the Civil War in 1927. In response to this developing Indian communist revolutionary landscape, Britain launched a global campaign of counter-insurgency to eliminate anti-colonial radicalism. However, as the USSR wound down international involvement under Stalinist authoritarianism from 1928, with global meetings banned in 1935, both Indian communism and anti-colonial internationalism were reconfigured and redirected, in a process of sustained recognition, negotiation and competition with state nationalism.

Revolutionary activist undercurrents existed in India before the Russian Revolution of 1917. The nationalist movement was on the upswing from 1905, beginning with a campaign against the British decision to partition the province of Bengal – a move seen as an attempt to break up a strong centre of opposition to British rule, since the new province of East Bengal would be dominated by the collaborationist Muslim aristocracy. The movement soon extended into the Swadeshi campaign, a boycott of British goods in favour of Indian ones, which generated Indian capitalist support for the National Congress. The Swadeshi movement had its roots in Bengal but extended overseas, where communities of Indian radicals attempted to court German support during the First World War; in the United States, a group of Indian expatriates, predominantly labourers and students, raised funds and organised in support of the Ghadar movement. The agitation also became the first of many moments in which Indian workers demonstrated large-scale collective action: strikes occurred on the railways, in the Punjab and in the textile mills of Bombay, as well as in many small workplaces, driven by self-organisation rather than union power. However, the movement collapsed in the face of factional splits inside the Congress and British imperial repression.

Imposed British control after 1908 was undermined by the First World War. The tussle between imperial powers encouraged nationalist organisation, this time through the Home Rule League, inspired by Irish nationalism. The reactive constitutional reform introduced by the British in 1917 failed to fulfil the Indian demand for self-governance, illustrating the shortcomings of imperial powers in upholding the vision of self-determination. This coincided with the rise of the working class within India. In 1918 and 1919 there were strikes in almost all the Bombay textile mills, the heart of Indian industry at that time. Mass protests against the Rowlatt Act, a new piece of oppressive legislation introduced by the British in 1919, were followed in the first six months of 1920 by a strike wave involving over 1.5 million workers, leading to the formation of the All-India Trade Union Congress (AITUC) in October 1920. Despite these advances, the demoralisation and defeat of the Ahmedabad textile workers’ strike of 1919 demonstrates the limited impact of the Russian Revolution in India, owing to the domination of the labour movement by bourgeois nationalists from the Indian National Congress. This reveals the disjunct between ideals and reality: the Russian Revolution’s pursuit of dismantling autocracy came into conflict with Indian nationalism’s commitment to capitalism. Gandhi, for example, with his non-violent, reactionary religious and anti-industrialist stance, secured worker and peasant support for nationalism while protecting the Indian bourgeoisie, in order to forestall a socialist movement. Thus, indigenous currents of anti-colonialism, predating the Russian Revolution of 1917, provided the foundations for the transplantation of communism.

Communism did indeed gain a foothold in India, producing distinctive leaders, labour unions, mass mobilisation and political parties. But a revolution of the oppressed did not arise in India, and communism was not the determinant in the dissolution of British imperial power; it became a movement of internal factionalism and nationalist domination. Freedom was defined by struggle against imperialism, indigenous dissatisfaction with Gandhism and global revolutionary currents, but the internal authorities of India remained unwilling to completely rid the country of imperial, industrial and economic systems in the pursuit of independence.


Barry Pavier, ‘India and the Russian Revolution’ International Socialism 103 (November 1977): 24-26. 

Isabel Huacuja Alonso (2017) M.N. Roy and the Mexican Revolution: How a Militant Indian Nationalist Became an International Communist 

Kris Manjapra, M.N. Roy: Marxism and Colonial Cosmopolitanism (Delhi: Routledge India, 2010), chapter 2, pp. 31-62.

Kris Manjapra, ‘Communist Internationalism and Transcolonial Recognition’ in Cosmopolitan Thought Zones: South Asia and the Global Circulation of Ideas, ed. Sugata Bose and Kris Manjapra (Basingstoke: Palgrave Macmillan, 2010), pp. 159-77

Image of Indian communist, Manabendra Nath Roy with Lenin, Moscow 1920. 

Modernisation Theory: Challenging British Exceptionalism and the Unilinear Model

Written by Ella Raphael. 

Modernisation Theory refers to a model of societal transition, originally meaning the movement from a ‘traditional’ society to an ‘advanced’ society. Since the seventies it has been a topic of contentious debate. Revisionists have challenged traditional theorists, such as Walt Whitman Rostow and Marion Levy, and have criticised their narrow rubric of modernity, which has been based on Britain’s economic success in the Industrial Revolution. Recent historiography has raised the following questions: what constitutes modernity, whose rubric are we following, and why? The new wave of debate has reiterated the benefits of extending the theory, incorporating the ideas of multiple modernities and economic efflorescences, for example. Despite this, moving forward, historians must navigate the risk of making the theory too broad, and must ensure it maintains its structural cohesion.

Rostow’s 1960 theory of modernisation was incredibly influential, yet vastly criticised. He argues in The Stages of Economic Growth: A Non-Communist Manifesto that all societies go through five stages of development, starting in a pre-Newtonian state and ending in an age of mass consumption. He uses industrialising Britain as a recipe for economic success for “developing” countries. Nor does he hide his political agenda, explicitly subtitling his model a “non-communist manifesto” and stating that the Soviet Union can achieve modernisation once it abandons the Marxist model of development. Levy, another early theorist writing in the sixties, argued that as the level of modernisation increases, so does the structural uniformity among societies. Since the revival of modernisation theory in the nineties this unilinear model has been adapted.

The two main criticisms of Rostow’s model of modernisation are that it is Eurocentric and teleological. Historians now recognise that there are multiple paths to “modernity”, which in this case means economic maturity. Jack Goldstone argues that within modernisation theory there is too much emphasis on the heroic narrative of the “Rise of the West”. He argues that many early modern and non-European societies experienced “efflorescences” of economic growth and steady increases in technological change. He also challenges British exceptionalism by arguing that Britain’s industrial success was a historical anomaly resulting from the lucky conjuncture of an economic efflorescence and a growing culture of engineering. His research on Qing China and Golden Age Holland weakens the unilinear vision, showing that countries do not all follow the same economic trajectory.

The Enlightenment ideology that every society inevitably develops a similar set of ideas, customs and institutions is a simplification. Adam Smith, an early influence on modernisation theory, used the framework of the ‘Four Stage Theory’ to suggest that, given enough time, every society would converge towards one homogenous form. Sanjay Subrahmanyam, David Porter and Joseph Fletcher address the issues with such comparative exercises. There is a tendency to use categories derived from a European experience, and these then shape the questions comparative historians ask. Western modernity has been given a privileged position, set as the benchmark against which all other societies are deemed inferior. Condorcet, another Enlightenment philosopher, epitomises this Eurocentric idea: he believed that the rest of the world could look to Western European societies and see its own future. The rubric of modernity is rooted in the idea of European superiority. As Matthew Lauzon argues, this rigid outlook has provided a “theoretical justification for European cultural and imperial hegemony”. The teleology is problematic because it places modern European society at the pinnacle of civilisation and human development.

Additionally, historians now recognise that, as well as many paths to modernity, there are many different destinations too. Shmuel Eisenstadt’s concept of “multiple modernities” has helped discredit the notion that modernisation is synonymous with Westernisation. He argues that forms of modernisation across societies are not homogenous because of their varied cultural and historical backgrounds. He looks at fundamentalism and argues that it should be seen as an alternative branch of modernisation rather than as a traditionalist form of governance: ‘the distinct visions of fundamentalist movements have been formulated in terms common to the discourse of modernity; they have attempted to appropriate modernity on their own terms.’ Tu Wei-ming supports this through his idea of “Confucian” modernity, in Japan for example. He states that East Asian modernity focuses more on soft authoritarianism, paternalistic polity and government leadership in the market economy. Both Tu and Eisenstadt show that modernity is not uniform and is not derived solely from Western Europe. Although this adaptation succeeds in addressing the teleology and Eurocentrism of the original theory, there is a risk of it becoming too broad, losing its core meaning and thus being made redundant. Volker Schmidt urges multiple modernists to create a core meaning of the term, so that their claim can be appropriately measured. Nevertheless, the idea provides a potential framework through which future historians can compare levels of development.

The adaptations made to modernisation theory have helped redefine what ‘modernity’ means. They have brought into question whose rubric we choose to follow, and they have helped us understand alternative economic and political trajectories. Eisenstadt’s multiple modernities theory and Goldstone’s concept of economic efflorescences have challenged British exceptionalism and Rostow’s unilinear model. Nevertheless, the adaptations are not perfect and pose a new set of methodological issues. It is now the task of current historians to create a standardised, core meaning of modernisation, in order to fully assess whether societies have reached this stage.


Eisenstadt, Shmuel N. & Schlechter, Wolfgang (eds.), Daedalus 127.3 (1998), special issue: ‘Early Modernities’. 

Fletcher, Joseph, ‘Integrative History: Parallels and Interconnections in the Early Modern Period, 1500-1800’, in Studies on Chinese and Islamic Inner Asia (Aldershot: Ashgate, 1995) pp.1-35

Goldstone, Jack A., ’Efflorescences and Economic Growth in World History: Rethinking the “Rise of the West” and the Industrial Revolution’, Journal of World History 13, no. 2 (2002): 323-389. 

Goldstone, Jack, ‘The Problem of the “Early Modern” World’, Journal of the Economic and Social History of the Orient, 41.3 (1998), 252. 

Hoff, Karla, ’Paths of Institutional Development: A View from Economic History’, World Bank Research Observer 18, no. 2 (2003): 205-226. 

Hout, Wil. “Classical Approaches to Development: Modernisation and Dependency.” 2016. 

Lauzon, Mathew ‘Modernity’, in The Oxford Handbook of World History, ed. Jerry H. Bentley (Oxford, 2011), pp. 72- 84. 

Levy, Marion J. Jr. 1966. Modernization and the Structure of Society. Princeton, NJ: Princeton University Press. 

Marsh, Robert M.,’Modernisation Theory, Then and Now’, Comparative Sociology 13 (2014): 261– 283. 

Porter, David (ed.), Comparative Early Modernities (New York, 2012) 2. 

Rostow, Walt Whitman, The Stages of Economic Growth: A non-communist manifesto, 3rd ed. (Cambridge: Cambridge University Press, 1991), 4-16 (Chapter 2: ‘The Five Stages of Growth – A Summary’). ebook 

Schmidt, V. 2010. “Modernity and Diversity.” Social Science Information, 49: 511–538. 

Scott, Hamish ‘Introduction: Early Modern Europe and the Idea of Early Modernity’ in idem (ed.), The Oxford Handbook of Early Modern History, 1350-1740. (Oxford, 2015) pp. 1-34. 

Subrahmanyam, Sanjay, ‘Connected Histories: Notes towards a Reconfiguration of Early Modern Eurasia’, Modern Asian Studies, 31.3 (1997), pp. 735-762. 

Tu, Wei-ming, ‘Multiple Modernities’, in K. Pohl (ed.), Chinese Ethics in a Global Context (Leiden, 2002), p. 63.

Walker, Garthine, ‘Modernization’ in eadem (ed.), Writing Early Modern History (London, 2005) ch. 2.

Image source: https://www.thoughtco.com/rostows-stages-of-growth-development-model-1434564

Teach-Out Review: Indigenous Politics and Revolutionary Movements in Latin America

Written by Anna Nicol.

In solidarity with the UCU strikes, there have been a number of organised Teach-outs which aim to create new spaces for learning and to explore alternative subject matters. In doing so they deconstruct traditional formats of learning and show that learning can take place at any time, in any format. On Tuesday 3 March, Dr Emile Chabal, the Director of the Centre for the Study of Modern and Contemporary History, organised a Teach-out led by Dr Julie Gibbings (University of Edinburgh) and Dr Nathaniel Morris (University College London). Focusing on Mexico, Guatemala and Nicaragua, Dr Gibbings and Dr Morris aimed to provide a short overview of indigenous participation in these revolutions over the twentieth century, highlighting various similarities and differences across borders and dissecting indigenous identity and affiliation within each. 

Having decided to discuss the revolutions chronologically, Dr Morris began with the Mexican Revolution, which spanned from 1910 to 1920. Here, Dr Morris highlighted an important element of discussing indigenous history: historians come into contact with differing, and occasionally competing, definitions of “indigeneity”. While 80-90% of the population in Mexico had indigenous ancestry, only 40-50% continued engaging with indigenous social structures, histories and languages, and interrogating their position in the world; focusing on indigenous revolutionary participation therefore already presents obstacles in how we engage with and define indigenous identity itself. He argued that indigenous groups initially supported the revolution, in part as a result of pressure from landowners and the desire to reclaim their lands, as well as the aim of increasing power and respect for their communities. However, Dr Morris noted that the leaders of the revolution perpetuated ideas and values similar to those of the old state, in that they did not factor indigenous people into the “new Mexico”; instead, they aimed to solidify a population of mestizos (individuals with both Hispanic and indigenous heritage), which created fertile ground for indigenous uprisings against mestizo national versions of the revolution until 1940, when the revolution became less radical. Throughout the revolutionary transformation of Mexico, the concept of “indigeneity” closely followed the values of indigenismo, which prioritised maintaining the “traditional” and performative aspects of indigenous identity, such as native dress, while eradicating cultural values and practices which defined indigenous “otherness” within Mexican society.

Dr Gibbings continued on from Dr Morris by describing the frequent intellectual exchanges across the Guatemala-Mexico border; for example, Miguel Ángel Asturias noted Mexico’s process of mestizaje after his visit in the 1920s but did not believe it could be applied to Guatemala, instead encouraging European immigration to Europeanise Guatemalan society. She then explained that after independence in the nineteenth century, the western part of Guatemala became the political and economic heart of the country because of the growth of the coffee economy in the highlands. The growing economy resulted in widespread migration into the indigenous highlands and mobilised indigenous communities as a labour force for coffee planting. Similarly to Mexico, the revolution of 1944 to 1954 was largely led by the middle class and urban students, who aimed to go to the countryside and “civilise” indigenous groups through education, indicating, Dr Gibbings argued, that it was a revolution from above. The contentious issue in Guatemala was the unequal distribution of land – such as these coffee plantations – which moderates believed could be tackled by redistribution amongst the campesinos. This process would be headed by the elites as top-down agrarian reform; yet it also provoked revolution from below, as it encouraged indigenous labourers to petition for land. Dr Gibbings argued that these petitions became a vehicle for historic restitution, because completing the required sections allowed indigenous groups to write about how the land had historically belonged to them before it was stolen and colonised. The petitions posed a threat to the landed elite and to companies like the United Fruit Company, leading to a CIA-supported military coup in 1954 which overthrew the revolution.

Dr Morris concluded the presentation by describing the Nicaraguan Revolution of the 1980s. As in Guatemala, the Somoza dictatorship was backed by the United States and oversaw an unequal division of land, 90% of which was owned by a small elite and frequently leased to American companies in industries such as mining and fishing. A guerrilla movement emerged during the 1970s and successfully overthrew the Somoza dynasty in 1979. The revolution was seen as a “beacon of hope” by many who hoped it would be an anti-imperialist, left-wing (but not authoritarian) revolution that would end socioeconomic and political disparities and institute social reform. In order to understand the reception of the revolution, Dr Morris took time to note the geographical divides within Nicaragua, outlining that the Caribbean coast was never fully conquered by the Spanish, and so the coastline became known as the Miskitu territories, where the Miskitu and Mayangna communities lived. While the Miskitu and Mayangna were not entirely opposed to the revolution when it initially reached the Caribbean coast, they soon came to feel that the dictatorship, although oppressive, had generally allowed their ethnic and cultural differences to continue undisturbed. Therefore, as the revolutionaries attempted to assimilate Miskitu groups into the “new nation” through education, similar to policies in Mexico at the beginning of the century, the Miskitu found their cultural autonomy challenged and attempted to resist. The disturbance led to rumours that the Miskitu were separatists who wanted to break away to form their own state. The tension between the revolutionaries and the indigenous population culminated in the former forcing the latter out of their villages and into camps in the jungle, further alienating the communities.
As indigenous people escaped these camps, they often fled to Honduras, where the Contra army was being organised and armed by the CIA. The Sandinistas did not distinguish between different indigenous groups, and so all were treated as pro-American counterrevolutionary subversives. The civil war continued through the 1980s into the early 1990s, when the Sandinistas were defeated at the ballot box by centre-right liberals.

Having provided a brief yet comprehensive overview of the three revolutionary countries, the floor was open to a discussion which cannot be justly reproduced here. The discussion allowed the speakers to further develop earlier points and other members of the Teach-out to ask questions. Themes covered included the failure of left-wing revolutionaries to incorporate indigenous movements into their cause without themselves denying indigenous rights to autonomy, and also explored the gendered dimension of the revolutions, which saw the inclusion of women but no substantial launch of a women’s liberation movement. However, for me the most interesting part of the discussion was circling back to the concept of “indigeneity.” Dr Chabal asked how the development of indigenous identity has challenged neoliberal ideas, such as multiculturalism. In response, Dr Gibbings referenced Charles Hale’s argument on the indio permitido, or “permissible Indian”. Indio permitido is a term borrowed from Bolivian sociologist Silvia Rivera Cusicanqui, who argued that society needs a way to discuss and challenge governments that use cultural rights to divide and domesticate indigenous movements. Hale concluded that indigenous communities are allowed to build rights and establish platforms of culture so long as they do not hinder or challenge government schemes. Indigenous communities thereby become “permissible” if they act within the economic framework that the government establishes, but are discredited if they disagree or attempt to act outside of those state frameworks. 
Hale writes that “governance now takes place instead through distinction…between good ethnicity, which builds social capital, and dysfunctional ethnicity, which incites conflict.” Understanding “permissible” and “impermissible” notions of indigeneity can therefore help us to better understand indigenous participation within these revolutions: indigenous groups were accounted for within the “new nations” when they adapted to the values of the forming nation-state, be it conforming to the national education system, learning Spanish or allowing for a top-down redistribution of land. If indigenous communities resisted or attempted to construct a communal identity outside these values, they were deemed counterrevolutionary or “subversive”. Dr Morris closed by connecting neoliberal ideas of indigeneity at the end of the twentieth century to perceptions of indigeneity at its beginning; he argued that neo-liberal recognition of indigenous groups is not dissimilar to indigenismo, in that “traditional” indigenous practices, such as dress and dance, are seen as acceptable while no space is made for linguistic difference or political representation. 

Grappling with the notion of “indigeneity” and representation left me challenging my own perceptions of indigenous identity. Discussing indigenous narratives within history and competing perceptions of indigeneity urges us to interrogate our own approach to talking and writing about indigenous history, and understanding how we incorporate an indigenous perspective into the narrative of revolution. Perhaps this final thought is the most productive part of a Teach-out: to have individuals leave examining their own approach to research and education with the hope that new spaces will continue to form to re-evaluate and develop multiple narratives and perspectives.

Teach-Out Review: How Slavery Changed a City: Edinburgh’s Slave History

Written by Lewis Twiby.

As part of the teach-outs currently happening in solidarity with the UCU Strike, the History Society and the African and Caribbean Society hosted a very informative talk on Edinburgh’s connection to the slave trade. Chaired by two History undergraduates, Jamie Gemmell and Isobel Oliver, three experts – Sir Geoff Palmer, professor emeritus at Heriot-Watt, Lisa Williams, the director of the Edinburgh Caribbean Association, and Professor Diana Paton, our own specialist in Caribbean slavery in HCA – gave short speeches and then answered questions about Edinburgh’s slavery connections. In keeping with the ideals of the strike, of resistance and hope for the future, the speakers aimed to move away from traditional narratives of subjugation, instead focusing on rehumanising enslaved peoples, discussing resistance, and considering how we can educate others about slavery.

     Sir Geoff Palmer was first to speak, beginning his talk with how he moved to London from Jamaica, and eventually up to Edinburgh in 1964. He discussed how, where Potterrow is now, was where the Caribbean Student Association was located, and how this talk would never have happened in 1964. Sir Palmer then went on to discuss the economic and ideological ties Edinburgh had to slavery. This included how David Hume used slavery as evidence for Africans being of lower intelligence, which, in turn, became a justification for the enslavement of Africans. He further highlighted how the literal structure of Edinburgh is partially built upon slavery. Scots owned 30% of Jamaican plantations, enslaving around 300,000 people, and the staggering wealth made through slavery helped build the city. 24 Fort Street, 13 Gilmore Street, York Place, and Rodney Street all had slave owners living there – Rodney Street is even named after an admiral who defended Jamaica from the French. The person who received the largest government compensation following the abolition of slavery in 1834, John Gladstone, lived in Leith and received £83 million in today’s money. Despite this dark history of exploitation, Sir Palmer had some hope. He emphasised how having these talks was a step towards a brighter future, and stated ‘We can’t change the past, but we can change the consequences’.

     Professor Diana Paton continued after Sir Palmer, and wanted to look at the everyday aspects of slavery, and the rehumanisation of those enslaved. She explained that many of those in Edinburgh who held plantations actually inherited them – the horrors of slavery meant that plantation owners fathered children with enslaved women within an exploitative system, and many of these children were barred from inheriting; estates instead passed to relatives, so inheritance subtly spread the influence of slavery in Edinburgh. For example, the Royal Infirmary received a £500 donation from Jamaican slaveholders in the 1740s, and in 1749 was left a 128-acre plantation with 49 enslaved people in a will. Margareta McDonald married David Robertson, the son of HCA’s ‘founder’ William Robertson, and then inherited a plantation from her uncle, Donald McDonald. The callous attitudes such families held towards people showed the dehumanisation of the enslaved, according to Professor Paton. The infirmary, a place of healing, rented out enslaved people, earning £20,000 a year in today’s money, and in a letter from the 1790s Margareta asked whether she would get money from selling her slaves. However, Professor Paton also wished to rehumanise those enslaved and try to piece parts of their lives back together. For example, using the McDonalds’ inventory, she uncovered the life of Bella, who was born in Nigeria, was around 30 in 1795, and tragically passed away in 1832 – just two years before emancipation. Professor Paton stressed that by looking for people like Bella we can remind the public that the enslaved were not nameless masses, but real, breathing people.

     Lisa Williams then began her speech, stating that her own Grenadian heritage, and the works of figures like Sir Palmer, inspired her to create the Edinburgh Caribbean Association. Williams wanted to break the exploitation of black historical trauma by creating the Black History Walks – specifically, it is not a walking tour of slavery, although slavery is covered. Instead, it traces the forgotten history of Edinburgh’s Caribbean and African population since the sixteenth century. In the 1740s, where the Writer’s Museum is today, a black boy worked as a servant and was baptised; Malvina Wells from Carriacou was buried in St John’s Kirkyard in 1887; and the mixed-race Shaw family even inherited slaves. Williams further emphasised the ideological impact of slavery, both in the past and today. Some white abolitionists, including William Wilberforce, espoused racist beliefs, so non-white abolitionists, like Robert Wedderburn, challenged both slavery and racial bigotry. Meanwhile, John Edmonstone from Guyana taught Darwin taxidermy and biology, and is now believed to have inspired the journey on which Darwin began developing the theory of evolution. She then discussed how the legacy of slavery impacts education in Scotland today. Pride in the Scottish Enlightenment, a lack of teaching in the past, and racism in present society, itself a by-product of slavery, have meant that this history has been forgotten. However, she further argued that shifts in public opinion over reparations, including Glasgow University’s recent announcement that it would start looking at reparations, open the door for new educational opportunities. She concluded by saying that the first encounter with African history and slavery should not be through the slave trade; instead, African civilisations and the events of the Haitian Revolution should be taught in schools.

     The question section, split into two with set one by the hosts and set two from the audience, cannot be adequately summarised here. This section of the teach-out allowed the speakers to elaborate on ideas they had wanted to discuss earlier, and the intellectual and emotional impact cannot be accurately represented here. Instead, two themes cropped up throughout the discussion: education and decolonisation. Even then, these two themes were interconnected and can be best described as education through decolonisation. Sir Palmer, for example, spoke of how more research was needed to trace the economic and intellectual connections institutions had to slavery: Old College was partially funded through plantation profits, and graduates from the medical school went to work on slave ships and plantations. This was echoed by Williams and Professor Paton – Williams cited how UncoverEd literally uncovers the forgotten history of the university, and argued that this needed to be done elsewhere, not just in universities. Professor Paton added that the study of the Scottish Enlightenment had to be radically challenged, as its thinkers’ views on race helped justify slavery and the emergence of racism as we know it today. This further raises the question of whether we should even be naming buildings after, and raising statues of, these people. The passion of the speakers is one thing to take away from this – Williams’ drive to challenge heritage sites in Scotland to acknowledge slavery and abolition, and Professor Paton’s description of education and public memory in Scotland about slavery as ‘insulting’, highlighted their desire for change. A direct quote from Sir Palmer remains with me, and shows why we need to study the past and decolonise: we have to ‘find out what is right, not do what is wrong’.

Casualisation, Contracts, and Crisis: The University in the early 21st Century

Interviews conducted and written by Jamie Gemmell.

From the University of Edinburgh’s various prospective student webpages, you would conclude that teaching lies at the heart of the institution. In their words, Edinburgh offers “world-class teaching” and is “always keen to develop innovative approaches to teaching.” Whilst the quality of Edinburgh’s teaching may not be in doubt, it is apparent, judging by the way the institution treats its staff, that teaching is near the bottom of the university’s priorities. Over the past few months I have conducted interviews with Dr. Tereza Valny (Teaching Fellow in Modern European History), Dr. Megan Hunt (Teaching Fellow in American History), Dr. Kalathmika Natarajan (Teaching Fellow in Modern South Asian History), and Professor Diana Paton (William Robertson Professor of History). This piece aims to give voice to some of their experiences, putting a face to some of the more opaque problems raised by the ongoing industrial dispute between the UCU and Universities UK.

Three of my interviewees are “Teaching Fellows,” a position frequently defined by its contractual vagueness. On the surface, this short-term position is designed to provide opportunities for early career scholars, with an emphasis on teaching and other student-facing activities. Often, the role is financed when a permanent member of staff acquires a large research grant. Theoretically, it’s a win-win: a more senior scholar can dedicate more time to their research, whilst a more junior scholar can gain some of the necessary skills and experience required for a permanent position. The reality is very different. In Dr. Valny’s words, the Teaching Fellowship is “extremely exploitative and really problematic.” In her experience, it meant being “plunged into an institution” to run modules and having to “figure it out as you go along.” Similarly, Dr. Natarajan referred to the contract as “precarious.” She finds the contractual obligations “so overwhelming, that I often … need a bit of a break,” leaving her unable to conduct research in her unpaid spare time. 

One of the primary issues around the Teaching Fellowship is the workload. Whilst Dr. Hunt’s contract stipulates that she should be working around twenty-nine to thirty hours per week, in reality she works “easily double that.” If she doesn’t have “specific plans on a weekend” she “will work.” Even then, she remains in a “cycle where you never quite get on top of it.” Dr. Natarajan puts it a bit more diplomatically, suggesting that her hours “definitely stretch more than the average work week.” Under the department’s carefully calibrated workload framework, five hours of one-on-one time are given to each tutorial group for a whole semester and forty minutes for a typical undergraduate essay – that includes engaging with work, writing up feedback, and discussing it with the student. Obviously, this is not sufficient. Dr. Hunt concludes that if she worked the hours laid out by the workload framework, her classes “would be turning up and saying let’s have a chat.” Even as a Professor, these issues do not fall away. Whilst working to contract as part of the UCU industrial action this term, Professor Paton has been able to spend much less time preparing for teaching than she normally would, only “scanning over primary sources” and “relying on long-term knowledge” when it comes to the secondary literature. By focusing on quantifying time so precisely, the institution has failed students completely, relying on the goodwill of the University’s employees. It hardly reflects a desire to introduce “innovative approaches” to teaching. 

With workloads so high, it is common for early career scholars to become trapped in teaching positions. Advancement in the sector relies on putting together a strong research portfolio – that means articles in highly regarded journals and respected book publications. As one of the University’s primary sources of income is research funding, scholars with reputable research backgrounds are crucial. However, Teaching Fellowships, by their very nature, stipulate little to no time to research. When I asked Dr. Natarajan how many hours she dedicated to research she laughed and said, “absolutely none.” Despite developing many of her key ideas through her teaching, Dr. Valny has never had the “space to take those ideas” and transform them into a book proposal. This can lead to anxiety and stress. Dr. Natarajan’s PhD is “constantly at the back of my mind,” yet she rarely finds significant time to transform the piece into a monograph. Without the adequate time allocated to research, these scholars can never advance. Dr. Valny, rather depressingly, concludes that if she continues within a Teaching Fellowship she will become “unemployable” in any other position. With her contract expiring in August this year, it appears that this possibility could become a reality. Her situation reflects a broader problem where staff dedicated to their students and teaching are not rewarded for their work.

The emphasis on research has led to pernicious discourses that have devalued teaching, further demoralising many early career scholars who find themselves ensnared in these roles. In contrast to her time in Prague, where she was rewarded for producing popular courses (although still employed only temporarily), Dr. Valny finds herself suffering from feelings of “imposter syndrome” and “guilt, or inadequacy” when confronted with suggestions that she need only apply for research grants to escape her role. For Dr. Hunt, being “respected for what I already do quite well,” would be more appreciated. She claims that “institutionally it (teaching) doesn’t matter.” By being “a good teacher,” she has risked her career being “put on hold, if not completely stalled.” Similarly, Dr. Natarajan has found her teaching being treated as “a side-line” or a “side-note” to research. Performative professionalism has often defined these scholars’ teaching approaches, hiding an institution that disregards teaching and actively encourages academics to move away from teaching. This is despite some Teaching Fellows, such as Dr. Valny, accepting that a permanent teaching position would be “actually fine.”

These issues around workloads and casualisation intersect with the brutal policies of the Home Office, frequently referred to as the “hostile environment.” Home Office regulations stipulate that only “highly-skilled migrants” can live and work in the UK, meaning those on short-term contracts face another level of instability. For Dr. Natarajan, this has been a major source of precariousness: she can “only stay as long as I have a job or, rather only as long as I have a visa and the visa depends on my job.” If Dr. Natarajan or her husband fail to secure another job after their current contracts expire, they risk deportation. Within the sector more broadly, advertisements for short-term jobs often assert that only those with a pre-existing right to reside can apply. This issue throws cold water over criticism that stereotypes strikers as middle-class whites. Scholars of colour often, in the words of Dr. Natarajan, “have their own very different set of precarious circumstances.” 

Many of these issues reflect deeper structural problems within the higher education sector.  Scholars frequently cited the removal of the student cap and increase in tuition fees, reforms from 2010, as exacerbating pre-existing issues and transforming education into a commodity. Dr. Natarajan has suggested that the university has become a “business venture,” whilst Professor Paton claims that there was an “almost instant” change in the way students and management conceptualised higher education after 2010. Over the years, under Professor Paton’s analysis, this “quantitative increase has become a qualitative change,” putting pressure on staff and students. Despite student numbers and tuition fees increasing, Dr. Hunt suggests that “the service that people are paying” for is not being provided. Rather, money flows into marketing and big projects that elevate the positions of senior management figures.

The university sector appears to have reached a tipping point. On a micro level, staff are under increasing pressure, with workloads increasing and casualisation becoming more widespread. A two-tier system has developed, with early career scholars expected to teach more and research less. Goodwill and professionalism appear to be the only things preventing university teaching coming to a standstill. On a macro level, the sector has become partially commercialised with fees privatised and universities encouraged to compete for students. This has occurred without a concomitant provision of consumer rights, leaving students forced to accept higher levels of debt without safeguards in place to demand improvements or changes in the service provided. These institutions have been left in some middle ground between state-funded institution and privately-funded business venture, to the detriment of academics and students. Demands being made under the ongoing industrial dispute are hardly radical. Many academics are simply requesting greater job security and more respect for the work they do. If universities aren’t designed to support students or academics properly, we are all left asking who on earth are they designed for?  

Beyond Pop: The Extremes of 1970s Britain

Written by Jack Bennett.

The music of the 1970s reflected the extreme divisions and polarisations within Britain, revealing the intersection of popular culture, politics and economics. What emerged during this decade was a cyclical process of adoption and outpacing of cultural trends. The idealised utopianism adopted by the youth of the 1960s receded with the appearance of hard-edged styles, a reversal which continued into the 1970s with the emergence of hyper-Mod working-class cool in the form of skinheads, building upon the earlier Teds and Mods. While the influence of glam rock introduced a resurgent androgyny to the streets of Britain, the challenge and usurpation of style and cultural pre-eminence became the defining factor of the decade. Nowhere is this better presented than in the punk movement. The music of the 1970s mirrored these cultural and stylistic fluctuations: in the way Soul picked up in Northern clubs from Wigan to Blackpool to Manchester; in the struggle between the concept albums of the art-house bands and the arrival of punkier noises from New York in the mid-seventies; and in the dance crazes that ebbed and flowed in popularity. Musical styles began to break up and head in many directions in this period, coexisting as rival subcultures across the country. These changes were fundamentally driven by the traversing of a tumultuous, uneven and complex socio-political landscape.

Currents of popular music transformed during this decade, through both revolutionary change and continuity. Notably, the rise of new styles such as reggae and ska did not result in the demise of rock ‘n’ roll or Motown. The Rolling Stones and Yes carried on, oblivious to the arrival of the Sex Pistols and the Clash. Within this melting pot of musical and stylistic chaos, it is important to emphasise that the life the decade lived and its soundtrack are not quite the same. For instance, between the early fifties music characterised by Lonnie Donegan and the mid-seventies stylings of Led Zeppelin, real disposable income exactly doubled. Yet from 1974 until the end of 1978, living standards actually declined, marking an end to the long working-class boom and the dissolution of the previously upheld Post-War Consensus, which had committed consecutive Prime Ministers and leading parties to the maintenance of low unemployment and social welfare support. By the 1970s, as a consequence of economic instability and pressures such as the OPEC oil crisis of 1973 (which resulted in nationwide strikes and a three-day working week), the nation was plunged into darkness.

This darkness subverted the earlier optimism under which British pop was invented – between 1958 and 1968 – when the economy was undergoing rapid expansion. The changing mood entering the 1970s was caused by increasing unemployment, as the total number of Britons out of work passed 1 million by April 1975. There was a general sense that a blanket of bleakness had been cast over the nation, and escapism from socio-cultural realism was sought as a remedy. This escapist turn involved the sci-fi ambiguities and glamour of Bowie, the gothic, mystical hokum of heavy rock bands like Black Sabbath and Led Zeppelin, and the druggy obscurities of Yes. The second half of the seventies comprised years of deep political disillusion, with strains which seemed to threaten to tear apart the unity of the UK: Irish terrorism on the mainland, a rise in racial tension, and widespread industrial mayhem. The most notable of these socially, politically and economically calamitous events was the Winter of Discontent of 1978-79. With widespread industrial unrest and strike action bringing the nation to its knees, The Sun portrayed Prime Minister James Callaghan’s apparent indifference to the situation through the headline “Crisis? What Crisis?”. The optimism which had helped fuel popular culture suddenly began to run dry. What emerged was a darker, nightmarish inversion of the optimism and vibrancy that had embraced the music and culture of the 1960s.

This inversion was expressed most notably through punk. A creatively explosive, politically astute cultural and musical movement, punk offered an anti-establishment, liberating assault on mainstream decencies, grounded in a philosophy of nihilism. One of the movement’s most iconic bands, the Sex Pistols, explicitly positioned themselves from their formation as the antagonists of The Beatles. Music became a source of power in the battle with authority and repression, expressing the self-loathing and pessimism of the decade. In response to the punk aesthetic and attitude there developed a seeping moral panic within Britain. Amid prolific, confrontational, violent and controversial actions – from concerts known for their wild and uncontrollable crowds, to juvenile political attacks in songs such as ‘Anarchy in the UK’ and, in the year of the Silver Jubilee, ‘God Save the Queen’ – punk, and the Sex Pistols in particular, became a publicity engine attacking the established rock pantheon and encapsulating the emotion of the decade. The press and politicians only served to entrench these already ingrained opinions. Punk became a vehicle for expressing opposition to the social and political net which enmeshed the nation during the 1970s.

Yet punk was the first revival of fast, belligerent popular music to concern itself with the politics of the country, and the first time since the brief ‘street fighting man’ posturing of the late sixties that mainstream society needed to take notice of rock. On the other side of the political divide was an eruption of racist skinhead rock and a flirtation with the far right. Among the rock stars who seemed to entertain these ideas were Eric Clapton, who said in 1976 that ‘Powell is the only bloke who’s telling the truth, for the good of the country’ – referring to the Conservative MP Enoch Powell, of the infamous 1968 Rivers of Blood speech – and David Bowie, who spoke of Hitler as being the first superstar, musing that perhaps he would make a good Hitler himself. These notions were a far cry from the 1960s’ utopian optimism about the future of Britain and its youth culture. Reacting to the surrounding mood, Rock Against Racism was formed in August 1976, helping create the wider Anti-Nazi League a year later. Punk bands, above all The Clash and The Jam, were at the forefront of the RAR movement. ‘Black’ music such as reggae, ska and soul, with strong roots in Britain’s Caribbean immigrant populations as well as African American influences, became a major cultural force, crossing racial divisions and promoting a decisive turn against racist demagoguery in the music culture of Britain. Ska revival bands such as the Specials, and the reggae-influenced Police and UB40, had an impact beyond typical ‘popular music’. In the middle of visions of social breakdown, the seventies produced a musical revival which re-energised the ‘lost generation’. 
This effectively marginalised the racist skinhead bands and the youth culture around them, which were strongly tied to the National Front at this time and renowned for violent, racially motivated attacks across the country, pushing them out of the social and musical environment of Britain. As one cultural critic of the time put it, ‘A lifestyle – urban, mixed, music-loving, modern and creative – had survived, despite being under threat’. Despite the era-defining social, political and economic struggles of the 1970s, music became an expression of cultural values and movements, and the radical generational transformation of the decade produced a new, increasingly splintered youth culture.

For Geoff Eley, the decade was the storm centre of a change in the narrative of post-war national identity – destabilised by the 1960s and rendered more aggressively patriotic by the New Right – defined by an internal chronology of escalating problems. Lynne Segal counters this narrative, arguing that the 1970s saw major strides and flourishing in homosexual rights and in anti-racist and feminist movements. For example, in 1975-76, while embroiled in rampant inflation of around 25%, Britain enacted legislation on equal pay, sexual discrimination, race relations, domestic violence, and consumer rights. This demonstrates the ambiguity and fracture of the decade, which for many brought liberation and power rather than just crisis and decline. A decade of grit and glamour.


Image source: Patrick Sawer, ‘’We ran the NF out of town’: how Rock Against Racism made Britain better’, The Telegraph, 27 April 2018, https://www.telegraph.co.uk/music/concerts/rock-against-racism-made-britain-better/, accessed 8 February 2020. 

Black, Lawrence. “An Enlightening Decade? New Histories of 1970s’ Britain.” International Labor and Working-Class History, no. 82 (2012): 174-86. 

Marr, Andrew. A History of Modern Britain, London: Pan Macmillan (Reprints edition), 2009. 

Forster, Laurel, and Sue Harper, eds. British Culture and Society in the 1970s: The Lost Decade. Cambridge: Cambridge University Press, 2010. 

War & Peace: Art in Ducal Milan

Written by Joshua Al-Najar.

Art was a key tool for renaissance cities to disseminate ideas and fashion an identity in a pluralistic, competitive society. Scholarship has tended to focus on the programmes undertaken in republics, such as Florence and Venice – less considered is how dynastic systems deployed the Renaissance’s lessons in the form of state art. One prominent example is Milan, a duchy where humanism, classical learning and heritage guided the patronage of art to strengthen the authority of the ruling duke, in response to the perceived vulnerabilities of dynastic rule. 

Authority and status were conveyed using classical learning in the art of ducal Milan, though with distinct motives. Where republican regimes used themes tied to civic humanism, the Dukes of Milan deployed the lessons of antiquity in the creation of ‘renaissance magnificence’. This concept was ultimately rooted in individualistic veneration and regarded the act of conspicuous spending on elaborate works as a display of virtue; as such, patronage of sumptuous artworks could heighten the status of the individual patron, as well as being considered to ‘better’ the city generally. Jane Black identifies the root of this rationale in the neo-Platonic tradition, where outward beauty was thought to reflect inward virtue. The concept suited regimes such as the Duchy of Milan, where power was concentrated in an individual, dynastic ruler rather than a faceless office.

Louis Green diverges from the work of Black, suggesting that the emergence of renaissance magnificence was not linked to the typically accepted neo-Platonic tradition. Instead, he points to a political, Aristotelian-style explanation, as demonstrated by Azzone Visconti’s attempts to display authority in 14th century Milan. Azzone, one of the last tyrant strongmen, had rapidly assembled a series of territories in northern Italy that lacked cultural continuity; one method of binding them together was a programme of artistic works centred on Visconti’s unifying role as ruler and patron. The success of Visconti’s magnificence was memorialised by his theological adviser, Galvano Fiamma, who recorded in his Opusculum de rebus gestis ab Azone, Luchino et Johanne Vicecomitibus (1334-5) that:

Azzo Visconti, considering himself to have made peace with the church and to be freed from all his enemies, resolved in his heart to make his house glorious, for the Philosopher says in the fourth book of the Ethics, that it is a work of magnificence to construct a dignified house.

Fiamma clearly outlines the political advantages to a ruler who was willing to invest in lavish surroundings. In addition, his reference to Aristotle’s Ethics lends support to the explanation of magnificence advanced by Green.

Visconti put renaissance magnificence into practice, embarking upon an extensive programme of artistic patronage that celebrated the ruler as an individual. As part of this rejuvenation, the Chapel of the Blessed Virgin was renovated with gold and blue enamel detailing, and an enormous, elaborate tomb was constructed for Visconti himself (Fig. I). However, it was in the secular space of the Ducal Palace that Visconti sought to heighten his status in the most overt terms. In the main hall of the re-purposed Palazzo del Broletto Vecchio, Visconti commissioned a series of paintings – of which none survive – believed to have been the work of Giotto di Bondone. The works were thematically linked to concepts of war, strength and military success; ideal themes for a strong-arm ruler such as Visconti to emphasise. Visconti had won numerous military victories and had regained many of the territories that his grandfather, Matteo I Visconti, had lost. Pictorial references to war would therefore have reminded beholders of Azzone’s successes. Visconti appeared physically in the paintings too, alongside historical nation-builders such as Charlemagne and Aeneas. By juxtaposing himself with the legendary Trojan, Visconti incorporated himself into the ranks of an ancient, heroic tradition, as well as displaying the classical refinement of his court.

This process continued under the patronage of Galeazzo Maria Sforza (1444-76), who embellished his own personal status in the renovation of the Castello di Pavia. Though it would later be destroyed by the French in the early sixteenth century, numerous literary records attest to the various paintings that adorned the castle. Stefano Breventano, a Milanese chronicler, recorded that the palace was ‘the loveliest building that could be seen in those days’. A series of frescoes designed for the galleries of the piano nobile showed conformity with typical princely activity: the duke receiving petitioners; the duke and duchess engaging in falconry; and lastly, the duke effortlessly killing a stag during a hunt. The last of these scenes demonstrates the duke’s engagement with what would later be termed sprezzatura in Baldassare Castiglione’s The Book of the Courtier (1528). The duke’s effortless demeanour whilst displaying great skill is an attempt to convince the beholder of his individual supremacy.

However, behind this veneer of princely status was an unpopular, insecure leader. Galeazzo Maria Sforza had shown little authority within the diplomatic and military spheres, and thus attempted to create a commanding figure through visual art. He also needed to give his rule a veneer of legitimacy: the Sforzas had, after all, taken Milan by conquest in 1450. In his attempts to legitimise his regime, Sforza provided visual links to the preceding Visconti line.

Unlike at Venice, where historical reference was made to the city’s achievements as a whole, Sforza continued the artistic legacy of the Viscontis in an attempt at dynastic continuity. This attempt is reflected in a letter from the ducal secretary, Cicco Simonetta, dated August 1469, which details a number of restorative works to be undertaken by Bonifacio Bembo. Cicco commented on the ‘maintenance of the old paintings’, as Bembo was instructed to carefully conserve the decorative panels from the era of the Visconti (Fig. II). These included numerous tizzoni, the flaming branch and bucket that had served as an emblem for Filippo Maria Visconti – who happened to be Sforza’s maternal grandfather. Evelyn Welch has suggested that Sforza sought to extol his links to the previous regime by carefully conserving its symbols and iconography; the tizzone was incorporated into the decoration of the ducal apartments. If anything, Welch understates the significance of this move: in this period, nominally private rooms such as bedrooms essentially functioned as public spaces, receiving petitioners and housing illustrious guests. Pictorial reference to these links would therefore aid the transition of power to the Sforza regime and make up for deficiencies elsewhere. Sforza juxtaposed these images with those of his personal court, in an attempt to bind the two. Ultimately, his attempt to generate authority through artistic continuity failed: Breventano remarked that he was a ‘lustful, unpopular duke’, which may go some way towards explaining his assassination in 1476 by a group of Milanese officials.

Milan was a city where heritage, antiquity and mythmaking were crucial in artistic patronage. Ultimately, this was geared towards the specific anxieties that accompanied a dynastic regime, where power was concentrated in the individual.


Figure I: Reconstruction of the Tomb of Azzone Visconti by G. Giulini.

Figure II: Restored section of decorative panels (1468-9), Castello di Pavia, Pavia.

Source: Green, L., ʻGalvano Fiamma, Azzone Visconti and the Revival of the Classical Theory of Magnificenceʼ, Journal of the Warburg and Courtauld Institutes, 53 (1990), 10.

Source: Evelyn Samuels Welch, ʻGaleazzo Maria Sforza and the Castello di Pavia, 1469ʼ, Art Bulletin, 71 (1989), 361. 


Black, Jane. Absolutism in Renaissance Milan: Plenitude of Power under the Visconti and the Sforza, 1329-1535. Oxford; New York: Oxford University Press, 2009.

Dooley, Brendan. ‘Monica Azzolini. The Duke and the Stars: Astrology and Politics in Renaissance Milan’. The American Historical Review 119, no. 3 (2014): 1004-1005.

Green, Louis. ‘Galvano Fiamma, Azzone Visconti and the Revival of the Classical Theory of Magnificence’. Journal of the Warburg and Courtauld Institutes 53 (1990).

Huse, Norbert, and Wolfgang Wolters. The Art of Renaissance Venice: Architecture, Sculpture and Painting (1990).

Richardson, Carol M., and Open University. Locating Renaissance Art. Renaissance Art Reconsidered, vol. 2. New Haven; London: Yale University Press in association with The Open University, 2007.

Ruggiero, Guido (ed.). A Companion to the Worlds of the Renaissance. Blackwell Companions to History. Malden, MA; Oxford: Blackwell Publishers, 2007.

Welch, Evelyn Samuels. ‘Galeazzo Maria Sforza and the Castello di Pavia, 1469’. Art Bulletin 71 (1989): 352-75.

Welch, Evelyn S. Art and Authority in Renaissance Milan. New Haven; London: Yale University Press, 1995.

New York and the LGBTQ+ Community over a Century

Written by: Lewis Twiby.

The anonymity of big cities allows persecuted subcultures and identities to find room to exist. London, Berlin, and Paris are just three examples of cities with flourishing LGBTQ+ communities. In the United States, New York was one of the major sites of gay liberation. Throughout the twentieth century a flourishing and diverse LGBTQ+ community emerged where class, race, gender, and sexuality intersected, paving the way for the gay rights movement. This article offers a snapshot of this diverse community over a period of a century, from around 1890 to 1990, showing how LGBTQ+ culture emerged in New York.

George Chauncey argues that a principally homosexual subculture began emerging in New York in the 1890s, when Columbia Hall was reported as the ‘principal resort in New York for degenerates.’ An unfortunate trend in history is the marginalisation of those who are not included in the standard hegemonic order – whether by class, race, or any other reason. In the Euro-American mindset – one also forced on many cultures worldwide through colonialism – same-sex relations, non-binary genders, and non-conforming gender roles were treated as ‘degeneracy’ or mental illness. In the 1870s a ‘map’ was printed warning Latin American businessmen visiting New York of the type of ‘degenerates’ they could encounter, including prostitutes, shoeshine boys, and a ‘fairy’. Beyond the standard demonisation of those excluded from the Gilded Age economic expansion, it shows the distrust of LGBTQ+ individuals. The term ‘fairy’ was widely used to further demean male homosexuals, especially by drawing on images of femininity. An investigator – homosexuality was classed as ‘indecent’ and consequently illegal – alleged that patrons of Columbia Hall ‘are called Princess this and Lady So and So’. Misogyny and homophobia went hand-in-hand.

The working-class slums of New York, such as the Bowery, offered young men and women the ability to socialise outside the more traditional bourgeois family units which emerged in the late nineteenth century. ‘Scandalous shows’ aimed at titillating consumers soon evolved into bars and clubs where people were free to experiment with same-sex relations, or to challenge gender identities. As often occurs in marginalised communities, a new lexicon started emerging. Seeing increased use during the 1920s, ‘gay’ became a way for homosexual men to recognise one another – by calling themselves ‘gay’ they could secretly identify other homosexuals and those involved in the community. However, there was not one ‘gay community’ in New York. Gender and racial segregation harshly split the community, and among white men there were those who wanted to be distanced from ‘fairies’ – those who cross-dressed or were gender non-conforming.

During the 1920s and 1930s, encouraged by an air of secrecy fostered by Prohibition, New York developed two major gay enclaves: Greenwich Village and Harlem. Greenwich Village originated as a refuge for rich New Yorkers escaping the bustle of the city, but as the city expanded the rich moved out and impoverished migrants, mainly Italian, moved in. The ‘Village’ became known for its bohemian character, as its quiet location and cheap housing drew in New York’s artists and writers. This bohemian character fostered an atmosphere of single living and eccentricity, allowing the LGBTQ+ community to live openly. The Village was known as the place for ‘long-haired men’ and ‘short-haired women’, and even for radical challenges to society: the famous anarchist Emma Goldman would visit the Village in the 1920s and make speeches demanding gay rights. However, there was a limit to this freedom. Racism excluded gay African Americans and Puerto Ricans from the Village until after the Second World War. Following the First World War, some six million African Americans moved from the US South to escape economic poverty and intense racism. Due to Northern segregation they were forced to form their own communities, and one of these was Harlem.

1920s Harlem is best known for the Harlem Renaissance – a period of cultural revival in which resident African Americans produced a wide variety of literature, poetry, art, and music; jazz and blues properly emerged during this period. Part of the Harlem Renaissance was the emergence of a gay enclave. This was partly racialised – white artists declared that Harlem was ‘wide open…Oh, much more! Much More!’, in the words of the artist Edouard Roditi, as they could enter these spaces openly. LGBTQ+ African Americans, who had to live in Harlem, did not have this luxury, but they made it their home regardless. The Hamilton Lodge ball attracted hundreds of drag queens, and their performances drew thousands of spectators – many of them black or Latino. From this the ‘ball culture’ emerged, subtly influencing white beauty standards: contouring was originally used by drag queens in Harlem to emphasise their cheekbones and look more stereotypically feminine. LGBTQ+ people further shaped the Harlem Renaissance: the ‘Empress of the Blues’ Bessie Smith was openly bisexual; Langston Hughes, one of the creators of jazz poetry, has been seen as possibly homosexual or asexual; and the singer Ethel Waters had a relationship with a woman.

It is important not to understate the levels of discrimination and outright oppression New York’s LGBTQ+ community faced. Gay clubs were often given discriminatory names – the Hamilton Lodge was called the ‘faggot club’ – and LGBTQ+ people were regularly referred to as degenerates. In 1923, when the play God of Vengeance by Sholem Asch opened on Broadway for the first time, the theatre owner and the actors were charged with obscenity, as it played with themes of lesbian identity. During Prohibition, speakeasies gave a new community the ability to experiment with their sexuality, while at the same time offering new excuses for the police to raid gay clubs; from 1940, New York police used a Prohibition-era law to continue raiding gay clubs into the 1960s. Post-war, matters grew even worse. Joseph McCarthy claimed that homosexuals were communist sympathisers, or could be exploited by communists, beginning the ‘Lavender Scare’ alongside the Red Scare – 420 government employees were fired between 1947 and 1950 for suspected homosexuality. The resurgence of conservative values – the view that society should be Christian, white, middle-class, and organised in heterosexual nuclear families – meant that any deviation from this was viewed as ‘un-American’. Gay bars across Harlem and the Village were raided, and the police at times sexually assaulted lesbians and trans individuals to ‘prove’ their gender.

Meanwhile, the 1960s saw great changes. As women and African Americans fought for their rights, LGBTQ+ communities also began fighting for theirs. The first gay rights organisations had formed in the 1950s, notably the Daughters of Bilitis and the Mattachine Society, and largely campaigned for rights in Washington. A slow rights movement built up, and its biggest achievement came in 1967, when ‘sip-ins’ forced New York bars to allow homosexuals to be served drinks. The ball scene, meanwhile, was still thriving and growing. RuPaul Charles and Lady Bunny moved to New York and became famous for their presence in the ball scene, and Marsha P. Johnson viewed the Village as a ‘dream’. Johnson had moved to New York for its anonymity – as a poor, African-American, homosexual, and gender non-conforming individual she faced many layers of intersecting oppression. One of the key places for the gay community was the Stonewall Inn. Stonewall was owned by the mafia, who only made it a gay club because they knew LGBTQ+ patrons would not report them to the police while homosexuality remained illegal in New York. An unexpected police raid would spark the key event in American LGBTQ+ rights.

On June 28, 1969, police raided the bar and began assaulting patrons who appeared gender non-conforming. When one was being arrested, a riot broke out – in popular memory, Marsha P. Johnson ‘threw the first brick at Stonewall’. Singing We Shall Overcome and chanting ‘Gay Power’, the patrons fought off the police, and by the time backup arrived a crowd of over a hundred people had gathered to support them. Sylvia Rivera, a Latina trans woman and close friend of Johnson, later remembered: ‘You’ve been treating us like shit all these years? Uh-uh. Now it’s our turn!… It was one of the greatest moments in my life.’ It is important to note that many of those involved were African American or Latino, and many were trans or gender non-conforming, as years of oppression based on race, gender, and class gave them the urge to say ‘no’. Elizabeth Armstrong and Suzanna Crage have argued that a big reason why Stonewall, and not one of the other clashes with police, became the spark of the gay revolution was the first Gay Pride event: a bisexual woman, Brenda Howard, saw the impact Stonewall had and used the first anniversary of the riot to host the first Gay Pride event, solidifying Stonewall’s legacy.

In the aftermath of Stonewall the gay rights movement began in earnest. For the first time gay rights moved away from Washington and into New York – many of those who took part in Stonewall would go on to create new rights movements. Deeply inspired by the Black Panthers, the Gay Liberation Front (GLF) was formed to directly fight homophobia in society. Like the Black Panthers, they viewed capitalist society as reinforcing discrimination, and vowed to fight capitalism, the nuclear family, and traditional gender roles. To become more diverse, a lesbian group, the Lavender Menace, was formed, and Marsha Johnson and Sylvia Rivera founded STAR (Street Transvestite Action Revolutionaries) for impoverished trans and gender non-conforming young people. These movements were an incredible break with the past, as they directly forced gay rights into the open. Calling themselves ‘gay’, a term now firmly associated with homosexuality, was an open challenge to the taboo over homosexuality.

Resistance continued throughout the 1970s and 1980s, despite some monumental successes – namely, homosexuality no longer being classified as a mental illness from 1973, and the lifting of the ban on homosexuality in New York in the early 1980s. Homophobia did not end there, and immense challenges remained. A resurgence of conservatism under Richard Nixon was amplified by Ronald Reagan’s emphasis on ‘family values’, which continued the demonisation of LGBTQ+ identity. When the AIDS crisis broke out, as it largely affected poor and non-white LGBTQ+ communities, the government did nothing to help and even cut research funding. The shadow of the AIDS crisis still hangs over the LGBTQ+ community – the continued popularity of the musical Rent, despite its problematic treatment of non-white and LGBTQ+ characters, highlights this by having a major trans character die of AIDS. Tragically, Marsha P. Johnson was found dead in 1992, and a mixture of transphobia, homophobia, and racism meant that the NYPD refused to properly investigate – her death remains unsolved.

During the dark years of the late 1970s and the 1980s the LGBTQ+ community continued to fight. In 1985 the black feminist Audre Lorde released her pamphlet I Am Your Sister, calling for white feminists and male African American activists to understand the intersection of homophobia, racism, and misogyny, proudly ending the text: ‘I am a Black Lesbian, and I am Your Sister’. The ball scene in black and Latino communities remained strong, and the documentary Paris is Burning brought it to wider attention. Highlighting drag queens overcoming poverty and discrimination – tragically, a trans woman interviewed was murdered during filming – it gives an insight into the ball scene of the late 1980s. Although controversial, as the interviewer, a white woman, never appears on screen and the profits were never shared with the community, it helped propel ball culture into the mainstream. Several phrases, especially thanks to their regular usage in the reality show RuPaul’s Drag Race, have since become part of the wider, straight lexicon, including ‘voguing’, ‘reading’, and ‘shade’.

New York is one of the most diverse cities in the world, and the LGBTQ+ community remains a key part of this. Since 2013, the Republican Party and some sections of the Democrats have been embracing homophobia, and since 2016 they have openly advocated transphobic policies. These policies are naturally disheartening – decades of fighting appear to have been undone within just a few years. However, looking at how New York’s LGBTQ+ community fought for its rights despite intense oppression over a century gives hope for the future. No matter how dark the future gets, there will always be a Marsha P. Johnson to fight back.


Armstrong, E. and Crage, S., ‘Movements and Memory: The Making of the Stonewall Myth’, American Sociological Review, 71:5, (2006), 724-751

Chauncey, G., Gay New York: Gender, Urban Culture, and the Making of the Gay Male World, 1890-1940, (New York, NY: 1994)

Duberman, M., Stonewall, (New York, NY: 1994)

Eisenbach, D., Gay Power: An American Revolution, (New York, NY: 2006)

Livingston, J. (dir.), Paris is Burning, (1990)

Lorde, A., I Am Your Sister, (New York, NY: 1985)

Shibusawa, N., ‘The Lavender Scare and Empire: Rethinking Cold War Antigay Politics’, Diplomatic History, 36:4, (2012), 723-752

Stein, M., (ed.), The Stonewall Riots: A Documentary History, (New York, NY: 2019)

Image: https://2019-worldpride-stonewall50.nycpride.org/history-news/a-global-celebration-arrives-in-new-york/