The Quagga and Colonialism

Written by: Lewis Twiby.

On 12 August 1883 the last known quagga died in captivity in Amsterdam Zoo; surveys could find no traces of quagga in the wild, confirming its extinction. The quagga was long thought to be a distinct species of zebra, but DNA tests in the 1980s found it to be a subspecies of the plains zebra, once common across the plains of what would become South Africa. Unlike other infamous cases of animals being driven to extinction by human activity – most notably the moa of New Zealand and the dodo of Mauritius – the quagga had lived alongside humans for millennia. In fact, the name ‘quagga’ partially comes from the local Khoikhoi name. Instead, the extinction of the quagga was deeply entwined with imperial culture and the formation of settler rule in South Africa.

From the mid-1600s Dutch settlers created colonies on the southern coast of what would become South Africa. From 1795 the British took over the colony to secure shipping routes to India, and clashes began between the Dutch and British settlers. To avoid British rule the Dutch farmers began what has since been known as the ‘Great Trek’ after 1836; these ‘voortrekkers’ would later become a key part of Afrikaner national identity, especially as Britain tried to reassert its rule over them. The white settlers claimed they were pushing into ‘free’ land where they could make a new start; this new start, however, came at the expense of Africans. Although there was no intensive sedentary farming, that did not mean the land was actually unclaimed. Various African peoples laid claim to the lands, which hosted a range of states and economic structures, from pastoralists to small-scale farmers to the expansionist Zulu Empire. These voortrekkers enslaved or displaced Africans and helped destabilise the Zulu Empire to prevent it from being a threat.

The arrival of Europeans changed how the environment was treated. Although it is important not to fetishise pre-colonial land usage – wide-scale pastoralism had already put pressure on the land in Zulu and Xhosa communities – it is important to stress how dramatically land usage shifted. Just as in the American West, large areas of southern African land were divided between individual farms (of varying sizes), which limited where wild animals could move. Herding animals like the quagga need wide areas so they have plenty of food without destroying the local environment – for this reason millions of zebra and wildebeest make the trek from the Serengeti in Tanzania to the Masai Mara in Kenya every year. Herds of quagga, therefore, tried to reach their regular grazing grounds but were faced with Boer farms. To prevent the quagga from competing with their grazing herds, or eating their crops, farmers resorted to shooting stray herds. Quagga meat was also a good way to get quick food without killing a prized farm animal, and their skins could be sold for extra funds.

At the same time, the quagga became a prized animal for menageries back in the metropole. The quagga’s unique skin made it an interesting addition to any wealthy elite’s personal collection – Cusworth Hall in my own town of Doncaster even had quagga grazing on its grounds in the 1700s. When the first zoological gardens started emerging in the 1820s, such as London Zoo, quaggas were in high demand for their appearance and for colonial experiments. Naturalists hoped to breed quaggas with horses to create a new breed that could be used in both Europe and Africa. There is also an underlying colonial ideology behind why exotic animals were in demand for zoos and menageries. As argued by Harriet Ritvo, having a seemingly rare, unique, or exotic animal was part of a wider imperial power dynamic – owning an animal from a colonised region demonstrated both the power of empire and your own wealth. It showed Britain’s power to move an animal across the world, and the owner’s importance in engaging in this power play.

However, many zoos were initially unequipped to look after exotic animals, and it was not uncommon for new arrivals to die within a year. London Zoo’s A.D. Bartlett, who oversaw the animal population during the late nineteenth century, wrote that the zoo had to invest heavily in looking after elephants and rhinos because they were hard to obtain, whereas monkeys were cheap to acquire, so their deaths were of little concern. Initially the quagga was viewed this way. Their large herd sizes and apparent abundance meant that they were seen as dispensable, but still sought after, animals. Furthermore, the brutal capture and transport of animals meant that many more had to be caught than were needed due to high mortality.

These factors drove the quagga to extinction. Demand to fill zoos in Europe, and policies of extermination to preserve farms in Africa, meant that quagga numbers quickly dwindled. As they were found only in southern Africa, the population rapidly collapsed – although common, they were common in only one area. London Zoo’s single mare was photographed five times between the 1860s and 1870s, before she died, by the zoo’s chief photographer Frederick York. The rapid extinction of the quagga means that these are the only photographs of a living quagga. The last known wild quagga was shot in 1878, and when the last captive one died in 1883 the zoo requested hunters find another, not realising how quickly the animal had gone extinct – confusion probably compounded by the fact that locally all zebras were referred to as ‘quaggas’. Thanks to colonial settlement and exploitation the quagga had gone extinct.

Studying the quagga shows the various ways colonialism impacted colonised societies. Unfortunately, the quagga was not the only case of settler colonialism driving animals to extinction – passenger pigeons and thylacines suffered the same fate, and the bison very nearly did. The quagga offers a warning for the future. Under neo-colonialism the natural world is being destroyed to fund the economies of the global north, threatening both humans and nature. Colonialism may very well drive orang-utans, macaws, and caimans, to name just a few, to extinction.


A Letter To My Students

Written by: Dr Jake Blanc.

A letter to my students:

I do not want to be on strike. None of your lecturers do. We would rather be inside our classrooms giving a lecture, or in a seminar room discussing a reading, or holding office hours to talk through an essay assignment. And given that the outside temperatures have been hovering in the low single digits, coming out to the picket line every morning is far from an easy or cheerful decision.

But we cannot come back in, at least not yet. And please believe me when I give the reason why we have to stay outside a little longer. We are on strike for you, our students.

You probably hear that a lot around universities these days. Touch-screen panels in every classroom: for the students! A new survey every week: for the students! Two-for-one Dominos pizza: for the students!

But when I say that my colleagues and I are on strike for you—for the students—it reflects something much more important. Choosing to leave our classrooms, to forgo our salary, and to hold up signs on a frozen sidewalk in your name, that is a deeply sincere statement.

Nobody goes into academia for fame or fortune. Unless you study celebrity culture or business history, you are unlikely to experience either of those two in your daily academic life. Instead, the overwhelming majority of our time is spent thinking about, planning, and delivering teaching and mentorship to our students. And I would say that for almost every academic I know, that is precisely why we love our jobs.

But over the past several years (and decades!) universities have changed in ways that make it increasingly difficult, if not outright impossible, for us to give you the education you deserve. You have likely heard that our current strike has four core demands, relating to issues of casualisation, fair pay, equity, and pensions. Like any job that aims to be both a part and a model of an inclusive society, ours relies on the foundation of steady employment, adequate compensation, equality amongst all employees, and the security of a dignified livelihood once we stop working. Each of the four demands relates to a vital thread of what allows us the personal, financial, and mental wellbeing to come to work every day and help create the type of learning environment in which all members of a university community can thrive.

I won’t go into detail here on the four demands. That information is available elsewhere and, moreover, as a relative newcomer to the UK, I do not want to presume the cultural and institutional knowledge to properly talk through each item. (Though let’s not kid ourselves, our struggle here in Britain is part of the same struggle I would have faced if I had stayed in the U.S. or gone to teach anywhere else in the world).

Instead, I want to reiterate that I see you, that we see you. All of us, your lecturers, your tutors, your supervisors, your support staff, everyone. We all see you. We know that our decision to strike makes you stressed and worried. We know that our choice to keep you from your usual class routine makes you nervous about essays and exams. I’m sure it might even feel like we’re doing this in spite of you—or even worse, against you. Nothing could be further from the truth.

We’re doing this because we are frustrated, and tired, and overworked, and to be honest, pissed off. We are angry that the university has let our conditions, and our workloads, and our hiring practices degrade to such a point that we have to abandon our classrooms just to have our demands be taken seriously. A strike is not a strategy to be used lightly, it is a last-resort, break-glass-in-case-of-emergency type of option. And we are currently in that sort of moment.

Personally, I am three years into what I hope will be a long career. I’d love nothing more than to devote my professional life to working with several generations of students, where my history courses can serve as a platform for students to make sense of the past, to learn to think critically, to write well, and to engage one another with empathy. If I’m lucky enough, many of you might even follow suit and become my colleagues one day, and then you’ll get to share in the joys of what, when supported properly, is the best job in the world.

But those hopes are contingent on something changing. And for us, that something can only come about by going on strike. We’ve exhausted all other options. Believe me, we don’t want to strike. But we care too much about doing our job well, and we care too much about you and your future, to not see this through.

So thank you for your support. And if not your support, then hopefully at least your trust that when we say we’re doing this for our students, we mean it.

Dr Jake Blanc

Lecturer in Latin American History

Review: A Tale of Two Cities by Jesse Hoffnung-Garskof

Written by: Lewis Twiby.

New York City remains one of the most culturally diverse cities in the United States, having seen immigration from across the world for centuries. One of the many communities to call New York home is the Dominican community, which Jesse Hoffnung-Garskof examines in his 2008 book A Tale of Two Cities: Santo Domingo and New York after 1950. Hoffnung-Garskof offers an interesting insight into how diasporas and culture are formed. He is also keen to stress that diasporas do not exist in a vacuum – they interact with both the ‘homeland’ and other diasporas.

As expected, Hoffnung-Garskof begins his book in the capital of the Dominican Republic – Santo Domingo. Here he explores the twin ideas which would shape Dominican history: progreso and cultura. Progreso, the idea that Dominicans were moving towards an improved life, and cultura, the notion that Dominicans had to exhibit certain cultural traits to achieve progreso, would shape both Santo Domingo and New York. A recurrent theme throughout the book is how progreso and cultura evolved in the context of migration. Rural Dominicans saw Santo Domingo as one of the most important contributors to cultura, but New York was seen as its pinnacle. These ideas were also in flux thanks to the turbulent politics of the republic – the genocidal rule of Rafael Trujillo lasted until his assassination in 1961, followed by the dictatorship of Joaquín Balaguer, US occupation, and a turbulent revolution. In Santo Domingo, Hoffnung-Garskof relies heavily on oral testimony: the emerging barrios (which became shantytowns) saw an explosion of grassroots culture and political activism, giving ample opportunity to hear subaltern voices. For example, Hoffnung-Garskof shows how cultura was defined as being Catholic, speaking Spanish, and, unfortunately, racialised against Haitians – yet those in the barrios turned cultura on its head. Political radicals would hold their meetings at church services, and young men would play loud music in Spanish as a way to rebel without being attacked by the police.

Moving away from Santo Domingo, Hoffnung-Garskof then takes us to Washington Heights, Manhattan, where the Dominican diaspora emerged. Originally the diaspora was made up of radicals exiled by either Trujillo or Balaguer, but as air travel became cheaper, more and more Dominicans moved to the land of ‘progreso y cultura.’ In what is perhaps the most interesting section of the book, Hoffnung-Garskof looks at how the newly arrived Dominicans became racialised in Manhattan. These Dominicans were from middle-class backgrounds in the Dominican Republic but found themselves in working-class positions; this caused a paradoxical situation when returning home to visit family. Dominicans would engage in American consumerism, which their families took as a sign of wealth, but the dominicanos de Nueva York had to try to explain that they were not wealthy. Meanwhile, they were forced into the racialised world of American society. For generations Dominicans had considered themselves ‘white’ against ‘black’ Haitians – a belief which drove Trujillo to massacre thousands of Haitians to ‘whiten’ the country – but they were not seen this way in Washington Heights. The area had large Irish, Jewish, African-American, and Puerto Rican communities, so Dominicans were forced to reinvent their identity based on the ever-changing categories of class, race, and culture in Manhattan. Hoffnung-Garskof shows this effectively with his wide range of oral testimony from community members in Manhattan – easily the strongest aspect of the book is his ample use of first-hand testimony. However, he could have expanded on Manhattan’s history of immigration here a lot more. Jewish and Irish communities are mentioned but somewhat overlooked, and the city’s vibrant East Asian, Cuban, Arabic, South Asian, and African diasporas are entirely ignored. It would have been interesting to see how they factored into the shaping of Dominican identity.

In the early 1990s Hoffnung-Garskof worked as a social worker for Dominican families in the Washington Heights schools, and his lengthy discussion of diasporas in schools is his most detailed section. Again using interviews, he manages to recreate in detail the various lives of Dominican students and how they forged their own paths. We see some using their wealth to become doctors, others joining African-American rights groups like Umoja to fight for rights, or clashing with African-Americans and Puerto Ricans over racial animosities. Reading it, you can tell this has been a passion of his for a long time, and how deeply he cares about the community. This is especially visible when he discusses the crack epidemic of the 1990s, when Washington Heights became synonymous with drug crime in the US media. He rebukes many of the common stereotypes associated with Dominicans during the period, showing the epidemic was a crisis of capital rather than a moral failing. My favourite point was his criticism of the leading attorney, and later New York mayor and Donald Trump’s personal lawyer, Rudy Giuliani for targeting Dominican youths in his exposé on crack while entirely ignoring the crack use of the Wall Street elite. However, because Hoffnung-Garskof is so invested in the lives of the people of Washington Heights, the flow of the narrative sometimes breaks down. He was so eager to show us the entirety of Washington Heights that we read biography after biography in just two chapters, making it at times hard to read. If anything, and hopefully he might do this in the future, these narratives could become their own piece of historical writing.

Finally, I want to quickly discuss how Hoffnung-Garskof links diasporas to the ‘homeland.’ As mentioned earlier, the diaspora was not cut off from the Dominican Republic – connections ranged from family visits ‘home’ at Christmas to exiled leftists waiting for the fall of the US-backed regime. Here the twin ideas of cultura and progreso come into play. On the one hand, the New York-based community was viewed with a sense of pride back in Santo Domingo. The regular Dominican Day parades, the growing affluence of the community, and even Dominicans partaking in beauty pageants were seen as Dominicans achieving progreso – they had become the immigrant community to be emulated. On the other hand, they were simultaneously degraded as going against cultura. Women going out of the home, children engaging in American consumerism, and the adoption of American fashions were viewed as Dominicans becoming too Americanised. The term ‘dominicanyork’ was coined to lambast a diaspora deemed too American. Nevertheless, American-based Dominicans still viewed themselves as ‘Dominican’ and not ‘Dominican-American.’ Newspapers like Ahora! reported on events in both New York and Santo Domingo, and the right to vote in Dominican elections was eventually granted to the diaspora. Hoffnung-Garskof ensures that the themes of cultura and progreso are never forgotten in the narrative.

For anyone interested in the histories of immigration, the formation of identity, and diasporas, A Tale of Two Cities is a must-read. Though the narrative could at times be smoother, Hoffnung-Garskof’s investment in the diaspora makes it an engaging read, and the abundance of oral testimony turns the names on the pages into living, breathing people. He has recently released a book about Cubans and Puerto Ricans in New York, so hopefully we can see more of his writing soon.

Fear and Collective Memory: Remembering the HIV/AIDS Crisis

Written by: Rosie Byrne

The AIDS crisis has caused over 35 million deaths worldwide since its outbreak in the 1980s; it produced widespread fear because, as an unknown disease, it appeared to threaten all of society. It seemed most prevalent amongst the homosexual community, primarily men, intravenous drug users, haemophiliacs, and Haitians, due to its spread via sexual contact and via blood. This led the Centers for Disease Control (CDC) to conceive of the ‘4H’ group of primarily affected individuals. In the name of preventing transmission, these social groups often became the victims of misinformed assumptions and, in some cases, ostracisation. This was a period of crisis mainly because of the overwhelming fear that pervaded society: preconceived social attitudes merged with misconceptions and were heightened by mass media coverage that explicitly warned the public about the threat the disease posed to the individual.

Fear arose primarily because of the unknown nature of the disease: the public largely became aware of HIV/AIDS in the 1980s as doctors were presented with increasing numbers of cases that were inexplicable in both cause and treatment. It was recognised as a disease that weakened the immune system, indicated by illnesses like pneumonia or Kaposi’s sarcoma, a rare cancer causing skin lesions that became associated with AIDS. Nevertheless, it was unclear how the disease was transmitted, and individuals were not aware they had it until symptoms presented, by which time it was largely too late. It therefore appeared to pose a threat to society because no one knew how it could be contracted, or how to recognise someone who had the disease without a diagnosis.

Furthermore, social attitudes towards individuals perceived to be carriers of the infection encouraged their exclusion from society. The high rates of transmission amongst homosexuals, haemophiliacs, intravenous drug users, and Haitians led to their association with the disease, whether they were infected or not. This was particularly evident in the refusal to readmit Ryan White to school in the United States after he contracted HIV through contaminated Factor 8, a product used to treat haemophilia. Moreover, AIDS was originally known as GRID, or Gay-Related Immune Deficiency, as it was primarily identified with the homosexual community and thought to affect this social group alone. Its further description as a ‘gay plague’ indicates the way in which fear was employed during this period; the allusion to previous pandemics such as the Black Death, and to illnesses such as cholera and typhus, only exacerbated societal fears.

High rates of transmission also considerably impacted perceptions of how the disease was spread and gave rise to many misconceptions about the contraction of HIV/AIDS. The social groups seen as primary sources of contagion were stigmatised to the point of avoidance because of public beliefs about AIDS, and this appears to continue: in a 2014 report, portions of the British public thought that HIV was spread through kissing (16%), sharing a glass (5%), spitting (16%), a public toilet seat (4%), and coughing or sneezing (5%). These fears were not seen as completely unfounded at the time; whilst the risk of transmitting HIV via saliva is extremely small, it was still treated as a public health concern. Particular attention in Britain was also drawn to Princess Diana’s insistence on shaking the hands of patients with AIDS without gloves when she opened the newly established HIV/AIDS ward at London’s Middlesex Hospital in 1987.

This widespread fear was arguably advanced by the media and the mass advertising campaigns intended to raise awareness and develop public knowledge of AIDS. The British government spent around £5 million on a mass media campaign that used promotional videos, films, and posters to prevent the spread of HIV/AIDS. The most prominent imagery associated with the campaign comes from the ‘iceberg’ and the ‘monolith’, two films produced in 1987 and narrated by John Hurt, whose voiceover warned of the threat of AIDS to ‘man or woman’, reinforcing the campaign’s overwhelming tagline: ‘don’t die of ignorance’. The use of symbolism relating to death, such as tombstones and the white lilies that might be placed upon a grave, encouraged widespread fear of an HIV diagnosis as ultimately a death sentence. This was further emphasised by the soundtrack, which implied doom through church organs and melancholic music. Notably, leaflets bearing the phrase ‘don’t die of ignorance’ were sent to every house in the UK. Whilst the extent of the promotional material arguably escalated public consciousness of the epidemic, it also fostered public fear of the threat of AIDS. Indeed, testimony from the 1980s relating to the AIDS crisis largely references these public health campaigns and the fear people felt in reaction to such material.

Improvements in medical research and subsequent treatments have meant that an HIV diagnosis is no longer a death sentence; around 100,000 people in the UK live with HIV today. Antiretroviral treatment reduces the viral load of HIV in the blood to undetectable levels, meaning that it does not develop into AIDS as it did in the 1980s, when the disease was untreatable. Nevertheless, social stigma continues as a result of surviving misconceptions about HIV/AIDS. This was brought to light in several documentaries, especially that of Gareth Thomas, the Welsh professional rugby player who announced in a BBC documentary in September 2019 that he was HIV positive with an undetectable viral load. As such, it is clear that while the HIV/AIDS crisis has concluded, its sociological effects still remain thirty years on.

Image: A 1985 protest in New York City, the hub of the AIDS epidemic and the corresponding art movement.

The 19th-Century California Genocide

Written by: Prim Phoolsombat

The definition of genocide by the United Nations Genocide Convention is as follows: any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such: killing members of the group; causing serious bodily or mental harm to members of the group; deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part; imposing measures intended to prevent births within the group; forcibly transferring children of the group to another group.

On 18 June 2019, California governor Gavin Newsom officially recognized and apologized for the systematic genocide of California’s Native Americans. Since Europeans first arrived in North America, the near-total decline in indigenous populations has often been attributed to Old World diseases spreading beyond the control or responsibility of settlers. Epidemics seemed to conveniently destroy thousands of diverse tribes from coast to coast, feeding the popular image of unpopulated American land and providing a justification for the philosophy of manifest destiny. The term “manifest destiny” was coined in 1845 by a newspaper editor to summarize the aggressive and supposedly divinely ordained expansion of colonizers westward and their duty to spread democracy and capitalism as they went. It was in the name of “God’s will” that the first California government ordered ethnic war on Native Americans — a calculated campaign comparable to the genocides of the Tutsis in Rwanda, the Dzungars in Central Asia, and other well-known cases.

Crimes against California’s Native Americans began in 1769 with the Spanish occupation. Father Junípero Serra marched from Mexico to San Diego with an army to construct a chain of missions that would eventually reach San Francisco. Prior to his arrival, the Pacific coast was a well-populated, well-resourced, and culturally diverse area, with total population estimates ranging between 100,000 and 700,000. The various tribes spoke 80 different languages and had such ample access to wild foods that farming was not necessary. Under the mission campaign, native people were immediately raped, kidnapped, and enslaved to work for the missions. From Santa Barbara to San Diego, natives attempted to revolt, and each revolt failed against armed Spanish forces. Executions and abuse were routine, in addition to thousands of native deaths from smallpox and other Old World diseases. Their bodies filled mass graves — no natives were given burials.

Mexico won independence from Spain in 1821, the coastal missions were secularized in 1834, and Spain officially recognized Mexico as an independent state in 1836 after a decade of war. A decade later, however, the Mexican-American War began, and California became a state in 1850. The Gold Rush began simultaneously, and the Californian population swelled by 300,000 new residents. In 1851 the first governor of California, Peter Hardeman Burnett, declared “that a war of extermination will continue to be waged between the two races until the Indian race becomes extinct.” He instituted a lasting policy that paid anyone who could provide proof of a dead Native American.

Coupled with friction over land and mining, workers began shooting Native Americans on a daily basis. Whole villages were pillaged over minor conflicts — a Pomo community of 800 was slaughtered by federal cavalry in retaliation for the killing of two slave owners against whom tribe members had rebelled. Not only did California pay for murder, but the federal government did as well, in addition to funding over twenty California militia campaigns against Native Americans. Gangs were formed among miners for the purpose of killing Native Americans to protect their interests. It is estimated that 80% of the Native American population was destroyed during this time. For the period between the 1850s and 1870s, estimates of deaths begin at a conservative 4,500 and range up to 100,000.

The Anti-Vagrancy Act of 1855, also known as the Greaser Act, legalized the arrest of those subjectively believed to be “vagrants.” Though this act mostly targeted Mexicans, it was also used against Native Americans and Asian immigrants. The California Act for the Government and Protection of Indians was enacted in 1851. With its remarkably misleading name, the legislation reflects the fifth act of genocide in the UN definition — “Forcibly transferring children of the group to another group.” Although the state of California was admitted into the union as a “non-slave” state, this act allowed for the indenturing of “vagrant” native children to Whites in order to “save them from their savage upbringing”, convert them religiously, and exploit them for labor. They could also be sold as punishment, and their only way out was through bond or bail.

Attempts at reparations have been made, though very poorly. Congress established the Indian Claims Commission in 1946 amid growing sentiment that Native Americans deserved reparations after their service in World War II, in which 13% of Native American men enlisted and provided military advantages such as communicating in native languages — a code Axis intelligence could never break. The Commission ultimately gave out an average of $1,000 to each Native American across 176 tribes, but most of the money was deposited in government-managed, and consequently mismanaged, trust accounts.

Governor Newsom’s apology is the state’s first acknowledgment of its actions in the genocide. His executive order establishes a council that will produce a report detailing tribal narratives and tribe-state relations by 2025. While tribal leaders have appreciated these steps, no further action has been taken on their critical asks, such as land reparations or water and fishing rights. The full gravity of the genocide, its consequences for Native Americans today, and the actions that should be underway to properly compensate them are scarcely part of the public consciousness, much as has been the case since the Spaniards first occupied California.

Photo credit: California Gov. Newsom’s Office.

Seneca Revisited

Written by: Justin Biggi.

Content Warning: This post contains graphic discussions of violence, gore, and self-harm.

Seneca’s tragic works are known for being, at the very least, polarising. Making liberal use of gory violence, they have often been dismissed as ‘sensationalised’ versions of the Greek plays they draw from. In recent years, however, a number of scholars have begun to rehabilitate Seneca’s tragic violence and to read it through the lens of his own Stoic philosophy. As such, violence becomes at times a cautionary tale on the excess of emotion, an example of the ways in which furor rules one’s life, or an aid when confronting one’s own mortality. I believe a further dimension can be added to how we read Seneca’s use of violence if we read it through the lens of modern-day horror theory.

The sharp contrast between Seneca’s frequent use of violence in his works and his Stoic philosophy can often be puzzling. Tragedy is where we find some of Seneca’s bloodiest, most violent, and most intense images. In Hercules Furens (54 CE), Amphitryon describes in great, gory detail the way in which Hercules is driven to madness by Juno and kills his children: ‘the arrow, piercing the middle of the neck, flies through the wound’ (Herc. Fur. 994 – 995), ‘[t]he room is covered in his scattered brains’ (Herc. Fur. 1007). Another example is Medea (50 CE), where the titular character performs a gruesome sacrifice to Hekate using her own blood: ‘[m]ay [it] drip down onto the altar. Stricken, I have gifted the sacred liquid’ (Medea 811). Additionally, in stark contrast to Sophocles’ Oedipus Rex, Seneca’s Oedipus rips out his own eyes rather than blinding himself with his mother’s brooch. The details given by Seneca are both realistic and absolutely chilling: ‘[w]ith curved, greedy fingers he finds his eyes and … yanks both eyeballs from the depths of his sockets, by the roots … he breaks away the last of the filaments from his sockets, so inexpertly torn’ (Oedipus 965 – 976).

Amy Olberding argues that Seneca’s use of violence ‘invites empathetic apprehension of the felt personal quality of death’, which pushes readers to reflect deeply on death and dying. In her interpretation, she focuses primarily on Stoicism’s relationship to death, and on Seneca’s own discussions of what constitutes a ‘good’ death and what does not. According to Olberding, the ‘particular’ of specific examples of violent deaths allows Seneca to prevent his readers from engaging with death as a mere abstraction. Rather, it becomes an ‘event’ that cannot be ignored or denied, helping them to ‘meet death well’ by reflecting on it at length. His graphic depiction of violence serves an important educational role, as it does not allow the reader the comfort of simple self-reflection. Instead, it complicates the issue of death and invites a deeper level of understanding of it. Violence, in the form of specific examples, becomes a physical space that the spectator is forced to inhabit. Of course, this visualisation is taken a step further when we bring theatre into the fold, since performance carries the violence beyond the imaginative powers of the reader.

In modern-day horror, we see a similar pattern of violence and gore. Not only is violence often excessive, it also serves a similar purpose: through it, the genre allows larger questions of mortality, death and the body to come into play. This process occurs in two phases. On the one hand, we have the recognition of the dead body as a fundamental aspect of the recognition of the self. On the other, we have the inherently dehumanising nature of violence. Julia Kristeva identifies the witnessing of a dead body as an act which causes a violent recognition of the self. She calls this process ‘abjection’: the recognition of the self through an understanding of one’s physical presence. The human mind, however, is ill-equipped to fully understand what this means and recoils in terror: ‘refuse and corpses show me what I permanently thrust aside in order to live’. What should be a moment of recognition causes instead a depersonalisation. The self, brought to the forefront by witnessing a dead body and being therefore made fully aware of death, rejects itself and its newfound physicality. In the horror genre, this depersonalisation is emphasised further through the frequent use of violent deaths. Horror, Adriana Cavarero argues, ‘has to do with repugnance’, a repugnance which lies in one being unable to justify ‘violence for violence’s sake’, which signifies the loss of individuality and personhood.

Seneca employs the depersonalisation and self-recognition typical of the horror genre in a strikingly similar manner. In his plays, the violence is so physically present that it becomes inevitable. It must be confronted. Through this confrontation, the audience must come face-to-face with their fear of death. The process presented by Olberding bears a striking similarity to Kristeva’s description of abjection. The audience, in witnessing the violent act, is forced to come to terms with their own mortality, and then to modify their own approach to death – similarly to how, when confronted with death in the form of a body, Kristeva describes feelings of terror which culminate in a rejection of death. Furthermore, the violence is, in and of itself, a cause of horror and repulsion. This contributes to the forming of an empathetic bond between the audience and the subject matter. This empathy, I argue, is born of the audience’s awareness that violence is inherently dehumanising, as Cavarero describes.

In conclusion, horror theory can help us build on other scholars’ interpretations of the use of violence in Seneca. By applying concepts such as abjection and dehumanising violence, we are able to see not only the ways in which Seneca’s own approach to violence was strikingly modern, but also how it has continued to this day in other forms of media, such as the horror film or novel. Seneca uses violence as an educational tool, and yet this educational aspect is what also contributes to its excessiveness. In the context of horror studies, it is clear how he makes use of violence as a dehumanising tool to further strengthen the audience’s empathy towards the characters, as viewing a person violated and eventually dead pushes the audience to re-examine not only their relationship to their own body as a physical object, but also to death as an inevitable element of their life. Unlike Kristeva, however, Seneca hopes that this confrontation will lead to a better, healthier relationship with death, rather than one of sheer terror. 


Cavarero, A., Horrorism: Naming Contemporary Violence (New York: Columbia University Press, 2008).

Kristeva, J., Powers of Horror: An Essay on Abjection (New York: Columbia University Press, 1982).

Olberding, A., ‘A Little Throat Cutting in the Meantime: Seneca’s Violent Imagery’, Philosophy and Literature, 32.1 (2008), p. 133.

Image: Alamy

‘Tipu’s Tiger’ and the Importance of Visual Language

Written by: Laila Ghaffar

In the narrative of the British colonisation of India, it would be easy to see Indians as passive and helpless in the face of rapid British expansion. After all, history is written by the winners. However, one look at ‘Tipu’s Tiger’ conveys an entirely different story.

The statue, on display at the Victoria and Albert Museum in London, depicts a life-size tiger mauling a European soldier lying on his back. The tiger entirely overwhelms the soldier beneath it. But the wooden statue is not just a visual display of might and ferocity. Hidden behind a hinged flap within the tiger is an organ, which can be exposed by turning the handle next to it. Upon doing so, the soldier’s arm moves up and down, and the automaton produces noises intended to resemble screams. The statue thus invokes a multi-sensory experience of terror and alarm.

It comes as no surprise that the patron of the work, Tipu Sultan (1750-1799), who ruled the Kingdom of Mysore, was a fierce enemy of the British. After taking part in his first Anglo-Mysore war at the mere age of seventeen, he dedicated his life to relentlessly opposing the expansion of the East India Company, and engaged them in four separate rounds of fighting from 1767 to 1799. He was quick to recognise the British threat to the independence of India and urged the rulers of neighbouring kingdoms not to align themselves with them. His letter to the Nizam of Hyderabad in 1796 states: 

Know you not the custom of the English? Wherever they fix their talons they contrive little by little to work themselves into the whole management of affairs.

And, indeed, he was right. The British system relied upon dismantling the kingdoms and princely states and draining them of their resources, rendering them entirely dependent on the Company. Equally, the British relied upon inflaming religious sentiments to better facilitate their expansion in India. Here too, Tipu recognised the importance of conserving the Indo-Islamic tradition which had endured for centuries in India. As a Muslim ruler of a Hindu-majority kingdom, he ensured that the Hindu temples within Mysore were protected as state property. Moreover, his personal library comprised over 2,000 books written in Arabic, Persian, Sanskrit and Urdu, representing the linguistic traditions of the major religions active in the Indian Subcontinent. Both publicly and personally, Tipu was tolerant of diversity and treated all religious groups residing in his kingdom with respect.

Yet perhaps the biggest surprise and challenge for the English was Tipu’s fascination with modern technologies. He engaged with technological developments from outside South Asia, looking to French military advancements for inspiration. Thus, his army was supplied with sepoy flintlocks, which were far more effective and advanced than the British matchlocks. He also experimented with the use of water power to drive machinery, another example of adapting French technology. Furthermore, Tipu turned his gaze eastward and sent envoys to southern China to bring back silkworm eggs, with the aim of establishing sericulture in Mysore. This tradition has endured and shaped the cultural and economic landscape of contemporary Mysore, and is just one part of Tipu’s significant legacy in the region. Tipu frightened the British because, as the British historian William Dalrymple suggests, ‘he was frighteningly familiar’. Hardly resigned to the inevitability of British colonialism, he possessed a powerful imagination and turned Western technologies against their creators.

Tipu’s imagination also influenced the way he manipulated visual language. He adopted the tiger as his symbol and covered his possessions in tiger imagery. Jewelled tiger heads adorned the finials of his throne, and the coinage of Mysore was stamped with tiger stripes. Moreover, his soldiers wore uniforms with tiger-stripe patterns sewn into them, leaving no doubt as to their allegiance, or their ferocity on the battlefield. Tipu’s personal possessions, such as his swords and guns, were likewise embossed with tiger stripes. Hence, Tipu closely and calculatedly intertwined his rule with the symbol of the tiger, an animal which has traditionally been understood to represent the entirety of India. The effect of this visual association on the British was profound. Upon Tipu’s defeat and death in his capital of Seringapatam in 1799, each British soldier involved in the victory was presented with a medal: one side depicted a lion wrestling a tiger to the ground, while the other bore the Arabic words ‘Assadullah al-Ghaleb’, meaning ‘the conquering lion of God’. The implication is clear: the British lion had emerged victorious over the Indian tiger, Tipu. The desire to assert this notion using Tipu’s own visual language reveals how significantly his branding of himself as the tiger had affected the British psyche and morale.

Tipu’s swords

It is worth noting that examples of Tipu’s possessions are highly prized collector’s items and fetch high prices at auction. On 23 October Sotheby’s will auction one of Tipu’s swords – with a tiger-stripe pattern embossed on the blade and a gold tiger-head handle – with a high estimate of £150,000. This demonstrates that the affiliation of the tiger with Tipu’s memory has withstood the test of time, despite subsequent British efforts to dismantle his memory and legacy.

The story of Tipu is an especially potent example of the importance of visual language in shaping the way history is told and understood. ‘Tipu’s Tiger’ diverges sharply from the traditional account of Indians as submissive, whilst also contributing to our understanding of Tipu as a man and a ruler. While history may be written by the winners, an appreciation of visual language may give rise to a more nuanced and complex awareness of narratives and characters.

Tipu’s Tiger


Victoria and Albert Museum. (2019), V&A, Tipu’s Tiger, (online) available at: ; (accessed 20 October 2019).

ThoughtCo. (2019), Biography of Tipu Sultan, the Tiger of Mysore, (online) available at: ; (accessed 20 October 2019). 

Dalrymple, William, (2019), The Guardian, An Essay in Imperial Villain-Making, (online) available at: ; (accessed 20 October 2019).

Scottish History and Archaeology, (2019), Tipu Sultan and the Siege of Seringapatam, (online) available at: ; (accessed 20 October 2019).

Sotheby’s (2019), Auction Lot 251: A Rare Sword with Burri-Patterned Watered-Steel Blade, from the Palace Armoury of Tipu Sultan, India, Seringapatam, Circa 1782-99, (online) available at: ; (accessed 20 October 2019).

Remembering the legacy of Kowloon Walled City

Written by: Prim Phoolsombat.

Before its demolition in 1994, Kowloon Walled City occupied only six and a half acres in Kowloon, Hong Kong, yet had the world’s highest population density. With a chaotic reputation for opium dens, brothels, and crime syndicates, its complex history as a political no-man’s-land between Chinese and British authorities throughout the twentieth century has rendered it a famed, almost fantastical site of cultural memory. It is highly romanticised as a stand-alone phenomenon of anarchy, despite the city’s tight-knit community being very much integrated with British Hong Kong.

In fact, the dark, modern perception of Kowloon residents is based on stereotypes encouraged by British colonial authorities who wanted the city destroyed. Even though colonial conflicts caused the city’s poor conditions, and crime rates eventually fell, the city’s reputation is still blamed on its residents’ choices. Seen by some as an extreme example of what is to come in cities globally (whether from overpopulation, late capitalism, or organically developing “anarchies”), enclaves like the Walled City teach us the consequences of colonialism and of demonizing communities for political power.

The humble beginnings of Kowloon Walled City can be traced back to a customs station established during the Song Dynasty (960-1279). During the Qing Dynasty (1644-1912), the British occupied Hong Kong in 1841, and the Treaty of Nanking was signed the following year. The first construction of a recognizable Walled City came in 1846, when the Qing raised funds to construct a small fort with walls and cannons. The city became valuable to the Qing as a good site for strengthening coastal defense technology. In 1898, Hong Kong’s New Territories were leased to the British, placing the city in British territory.

From then onwards, a back-and-forth ensued for the next fifty years: China wanted to retain control over Kowloon City, while the British wanted to destroy it. After strong resistance from both parties, the city entered a state of political purgatory in which neither authority controlled it. After World War II, British authorities banned opium dens, brothels, and other vices in Hong Kong. These businesses moved swiftly into Kowloon Walled City, as it was seldom patrolled, sparsely populated, and under ambiguous jurisdiction.

The city’s reputation, population, and physical form developed rapidly between the 1950s and 1980s, especially as refugees from the Cultural Revolution fled to Hong Kong. Unregulated construction led to compact, topsy-turvy high-rises. However, buildings were limited to thirteen or fourteen stories so that airplanes could land at nearby Kai Tak International Airport, which was partly built with stone from Kowloon’s walls. Rent was cheap and untaxed, and buildings were connected by dark, narrow alleys. Within them lay a maze of schools, illegal factories, charities, illegal food and butcher shops, family homes, and criminal headquarters.

From an ivory tower, the city seemed a blemish on British Hong Kong’s modernising, Western excellence. However, such unfavourable coverage ignored how much the city’s businesses depended on demand from Hong Kong to sustain themselves. The customers who sought Kowloon’s prostitutes, the drug suppliers who perpetuated Kowloon’s opium dens, the punters who played in the gambling dens — they came from Hong Kong. There were no consequences for the taxi companies that openly advertised these vices and transported Hong Kongers to and from the city. The residents were also victims of police brutality and corruption. A landmark ruling by a Hong Kong judge in 1959 declared that criminals captured in Kowloon City were subject to Hong Kong law. Police activity increased, and officers would blackmail residents with demands under threat of arrest.

Additionally, the city’s lack of hygienic infrastructure became associated with its residents as further proof of inherent “dirtiness” — more justification, for the British, for its eradication. Because the city did not belong to Hong Kong, it was not connected to the water supply. Kaifong (街坊) associations (local councils formed by residents) formed as more residents and refugees arrived, and along with charity groups they routed water into the city. However, the city was constantly dripping. Indeed, it was described as having its own micro-climate, where visitors and factory workers entering the hot, humid bottom floors always used umbrellas to shield themselves from the leaky, makeshift pipes above. The precious rooftop spaces became cool gathering places at night, as well as play areas for children, trash dumps, and places to raise pigeons.

In 1984, Britain and China agreed that Hong Kong would be returned to China in 1997. Despite evidence by the 1980s that crime rates in Kowloon Walled City were no higher than elsewhere in Hong Kong, it was declared in 1987 that the city would be destroyed. It had to be demolished before the handover — one government official working on the project explained that, otherwise, the Chinese government’s media could easily portray Kowloon Walled City as a “nice” result of British colonialism.

The quirks and darker aspects of the city drew swathes of curious tourists before demolition. Already the city was turning into cultural memorabilia for outsiders, while residents scrambled to secure compensation before eviction, especially in the face of Hong Kong’s soaring property values. After demolition, a park was built on the site with Qing architecture and drainage systems, revising the space to represent clean, pre-colonial Chinese cultural greatness and ignoring the reality of the residents. The park was praised by British and Chinese authorities alike, avoiding the recognition the Walled City deserved as a product of their conflicts.

Today, the memory of Kowloon is artificially reconstructed in themed casinos, arcades, video games, manga, cyberpunk fantasies, and more. Those seeking an authentic experience of anarchy or a hedonistic community will find only exaggerated features of a Kowloon that was subject to power struggles outside its residents’ control. The ugly truth of the Walled City is not simply its criminal spaces and lack of infrastructure, but the transfer of responsibility for those features from colonial governments to the people who called Kowloon home.

The Significance of the Media in the Provocation and Resolution of the Conflict between Bosnian Serbs and Bosnian Muslims (1992-1995): An Analysis

Written by: Kvitka Perehinets

The media has always had significant political influence in communist societies such as Yugoslavia, often serving ‘as a conveyor belt for the views of authority’. As long as that authority worked toward bringing Yugoslavia’s diverse society together ‘in the Titoist spirit of “brotherhood and unity”’, this was not a problem. However, it soon became clear that as Yugoslavia fell apart, the media of the individual republics served not as an informational platform for its peoples, but rather as a tool for boosting support ‘for the stances taken by their leaderships’.

After establishing himself as the leader of the Communist Party in 1986 and later as the leader of Serbia in 1989, Slobodan Milosevic quickly proved skilful in using national media as a loudspeaker for his ideas, aware of its capacity to penetrate and manipulate society’s mindset. When opposition groups started claiming that Radio Television Belgrade (RTB) ‘…was biased in favor of the socialists’, the Socialist Party of Serbia responded by initiating the Radio and Television Act of 1991, resulting in the dismissal of radio and television management and the unification of media into a single body, Radio Television Serbia (RTS). The law made it easier for socialists to remove reporters who were unwilling to cooperate with the party and to bolster ‘…the official message of hatred and fear towards the other Yugoslav peoples’, highlighting not only how much control the regime had over mass media, but how important it was for the party to sustain that control. Having a legislative grip on the media allowed for the manipulation of reporting and the unnoticed integration of propaganda campaigns into respected news sources. Consequently, because there were very few alternative sources of information, what an average Yugoslav believed depended on their media intake and what that media was telling them. The Milosevic regime succeeded in making official state media the main outlet for information: while some independent publications and television networks remained intact, they soon lost influence, either due to limited circulation or after being nationalized by the socialists. Access to independent media outlets became even more restricted when the United Nations Security Council imposed economic sanctions upon Yugoslavia in 1992.
As a result of the sanctions, inflation soared, increasing production costs for independent publications and meaning ‘only 8 per cent of Serbian families could afford a daily paper’ (Gagnon Jr., 2004). Consequently, it is estimated that ‘69 per cent of the population relied on state television as their primary source of information, and that over 60 per cent watched the news program of state-owned RTS (Dnevnik)’. With no other news sources available or affordable, the vast majority of Yugoslavs were left with no choice but to rely on state TV.

In a report issued by the Institute of War and Peace Reporting (IWPR) titled Milosevic’s Propaganda War, it is noted that a parallel may be drawn between Milosevic’s propaganda campaigns – broadcast through government-funded TV networks, such as RTS, and newspapers – and techniques used by Adolf Hitler, with the exception that Milosevic had the additional power of television. The report returns to the idea, exploited by the Nazi party, of ‘myths binding the masses together tightly’ and of fear of the unknown as a tool for stirring violence between groups. Professor Renaud de la Brosse of the University of Reims commented that the Serbs, similarly, used a technique of ‘drawing on the sources of Serbian mystique, that of a people who were mistreated victims and martyrs of history, and that of Greater Serbia, indissolubly linked to the Orthodox religion’. Indeed, after the death of Tito, the Serbian Orthodox Church endorsed the violent tactics of the Milosevic regime in hopes of encouraging ‘a shift from secular to religious approaches’ in public affairs. Priests and church officials were shown blessing Serbian soldiers before they went off to war in the 1990s, and public and private radio stations were used to release public proclamations of the Church’s support for the wars in Bosnia, Croatia and Kosovo. The IWPR analysis additionally highlighted the repetitive use of derogatory descriptions on Serbian television and radio, such as ‘Ustase hordes’, ‘Vatican fascists’, ‘fundamental warriors of Jihad’ and ‘Albanian terrorists’, which soon became part of the common vocabulary of the media. Unverified stories were presented as facts and became common knowledge, such as a segment about ‘Bosnian Muslims feeding Serb children to animals in the Sarajevo zoo’ featured by the Serbian television network RTS. Such stories turned neighbours, friends and co-workers into ‘others’, further enforcing a concept of ‘us versus them’ to dehumanize familiar faces.
Similar stories came from television networks within Bosnia – the pro-Bosnian-Serb broadcaster Pale Television once claimed on the evening news: ‘NATO forces used low-intensity nuclear weapons when they conducted airstrikes on Serb positions around Sarajevo, Gorazde and Majevica’. The announcer referred to Serbian examiners who had reportedly found signs of ‘contamination by radiation’ in Serbian residents of the areas. The statement was unreliable, as it was not corroborated by any other news outlet, yet it was still impactful, as it provided further grounds for mutual fear and hatred.

Another objective of the Serbian-run media was keeping the arguments for war intact. Therefore, when the story of Maja Djokic, a 17-year-old girl of Serbian descent shot dead by a Serbian sniper in 1995, emerged, it quickly became the story of a Serbian girl who was caught, raped and then killed by Muslims ‘as she attempted to escape to the Serb part of Sarajevo’. Djokic’s was only one of many ‘rearranged’ stories created to reinforce the rhetoric behind Radovan Karadzic’s argument for war: that life with ‘the Muslim enemy’ and ‘the fundamentalists’ was impossible. To Karadzic and his followers, the Serbs who chose to stay in Sarajevo despite the siege were even worse than Muslims, as they were ‘a living rebuttal’ of this argument.

As the conflict progressed, international media responded to the atrocities in what became known as the CNN effect: the ‘use of shocking images of humanitarian crisis’ around the world to compel US policy makers to intervene in humanitarian situations they may not otherwise have an interest in. Indeed, the coverage of the war by international news outlets contributed, to a large extent, to the eventual resolution of the war by putting pressure on the international community to react. A Newsweek poll on American public opinion regarding airstrikes noted a dramatic shift from 35 per cent to 53 per cent support for intervention after images of a Serbian concentration camp were shown by the British television network ITN. While polls are an imperfect measure of opinion, this one illustrates how powerfully the media can sway the public. However, Nik Gowing, a British journalist, argues that ‘media influence upon strategic decisions to intervene during a humanitarian crisis was comparatively rare, whilst tactical and cosmetic impact was more frequent.’ He found that media reporting had the power to influence tactical decisions – like the creation of ‘safe areas’ such as Srebrenica or Goražde – or ‘limited airstrikes against Bosnian Serb nationalist artillery positions’. Gowing’s argument is the more compelling, as it takes into consideration the nature of policy-making: decisions of government are not dictated by public opinion alone, and those decisions are not as straightforward as they may seem.

Throughout the 1992-1995 conflict between Bosnian Serbs and Bosniaks, the different sides of the war employed a number of resources with the goal of swaying public opinion in their favour and exerting pressure on local and international communities. In Milosevic’s Serbia, mass media was nationalized in an effort to promote fear and hatred towards other Yugoslav peoples, while international media repeatedly broadcast violent images of the war as a means of persuading the international community to work towards a resolution. The media has therefore demonstrated that it holds the power equally to provoke and to resolve conflict. But have we learned our lesson?


Televizija Srbija (RTS), Srpsku Decu Bacaju Lavovima, 2007.

Cohen, Roger, “For Sarajevo Serbs, Grief Upon Grief”, NYTimes.com, 1995.

Gilboa, Eytan, “The CNN Effect: The Search for a Communication Theory of International Relations”, 2005.

“Dr Mark Thompson – UEA”, UEA.ac.uk, 2018, accessed 29 Apr 2018.

Ricchiardi, Sherry, “Confused Images: How the Media Fueled the Balkans War”, Quod.Lib.Umich.Edu, 2018,–confused-images-how-the-media-fueled-the-balkans-war?rgn=main;view=fulltext.

Fogg, Kent, “The Milošević Regime and the Manipulation of the Serbian Media”, European Studies Conference, 2006, accessed 9 May 2018.

Gagnon, V. P., Jr., The Myth of Ethnic War: Serbia and Croatia in the 1990s (Ithaca, New York: Cornell University Press, 2004), p. 112.

Gordy, Eric D., The Culture of Power in Serbia (Pennsylvania State University Press, 1999), pp. 65-66.

Armatta, Judith, “Milosevic’s Propaganda War”, GlobalPolicy.org, 2003.

Zajović, Staša, and Katie Mahuron, “Challenging the Growing Power of the Serbian Orthodox Church in Public Life: The Case of Women in Black-Serbia”, AWID.org, 2018.


Shadow Wars: Cold War Foreign Policy in Africa

Written by: Jack Bennett

The international political, economic and military landscape was chilled by the ongoing tensions between the USA and USSR during the Cold War. These hostilities contributed to the flaring of ‘hot conflicts’ through ‘proxy wars’ across Africa following decolonisation in the latter half of the twentieth century. These contests of diplomatic and military power created an arena in which the fundamental ideological dichotomy between democracy and communism could be fought out. Within this international climate the United States pursued an exceptionalist foreign policy. This doctrine was based on the notion that the United States was internationally distinctive in upholding the Enlightenment values of liberty, democracy and freedom, defining the nation’s mission to spread these foundational principles. As a result, American intelligence agencies played kingmaker across the African continent during the Cold War, financing and overseeing coups to install biddable rulers in an attempt to ward off the threat of communist encroachment.

At the opening of the decade, in 1960, the Congo Crisis erupted following the country’s declaration of independence. For five years, widespread violence and suppression of political and military opposition ensued under the nationalistic, communist-inspired leadership of Patrice Lumumba. The question of who controlled the southern region of the Congo was of particular diplomatic concern to the United States and the Soviet Union, as the area was rich in uranium deposits. As Lumumba resorted to Soviet military support in the systematic suppression of rebel factions, CIA director Allen Dulles’ declaration that Lumumba was ‘a Castro or worse’ encapsulated the anxieties underlying the United States’ ideological stance of exceptionalism during the Cold War. As a consequence, US money secured the loyalty of Colonel Joseph-Desire Mobutu, whom the CIA believed to be ‘childish’ and easily led. Mobutu used American economic support to finance an army and expel the Soviets. He also detained Lumumba, who was murdered soon afterwards. Even the assassination of Lumumba in 1961, rumoured according to Kalb (1982) to have been planned through the espionage-movie-like use of poisoned toothpaste, serves to highlight the exceptionalist autonomy asserted by the United States through these proxy shadow wars.

Declarations of independence followed decolonisation elsewhere in Africa, leading to further instability and other examples of proxy engagement by the United States and the Soviet Union. One example is the Ogaden War of 1977-1978, which was rooted in the ongoing political and social tensions surrounding the independence and partition of Somalia in 1960. In the context of the Cold War, the United States’ unwillingness to intervene on behalf of the Somali regime, President Carter’s slowness in confronting communist aggression, and the Soviet-backed victory, which emboldened the invasion of Afghanistan in 1979, led to a gradual decline of détente between the United States and the Soviet Union. Additionally, the protracted Angolan Civil War, from 1975 until 2002, further elucidates the interrelationship between domestic ethno-political divisions following decolonisation and the ideological conflict between capitalism and communism underpinning the Cold War. We can therefore see the concept of exceptionalism greatly influencing the United States’ foreign policy, both in the attempt to transplant democratic frameworks onto newly independent nations undergoing conflict-ridden processes of decolonisation, and in the simultaneous effort to prevent the spread of communism into these vulnerable, developing states.

However, both superpowers tended to suborn local strongmen with military backgrounds and authoritarian instincts, whether or not these dictators had any genuine ideological commitment to communism or capitalist democracy. Turse (2015) argues that the actions of the United States during this period produced only chaos and destabilisation in the region, and that it was motivated by the economic advantages of developing diplomatic ties with newly independent African states rather than by an idealist vision of democratisation. Furthermore, following the Soviet Union’s support of General Nasser in Egypt in 1955, the US became convinced that ‘democratic’ Africa was fragile, and by 1958 it was prepared to embrace authoritarian but reliable alternatives. This reveals the limits of explaining the United States’ pursuit and support of proxy conflicts purely through exceptionalist ideology. The US Secretary of State, John Foster Dulles, argued that it was imperative for America “to fill the vacuum of power which the British filled for a century”. The US therefore welcomed the rule of General Ibrahim Abboud, who seized power in recently independent Sudan in November 1958, declaring himself an enemy of communism and of Nasser. Through these foreign policy manoeuvres – supporting anti-communist groups and resistance movements in recently decolonised African states – the United States aimed politically, economically and militarily to ‘roll back’ the encroaching global influence of the Soviet Union in an attempt to end the Cold War, even if that meant adopting a neo-imperialist and hegemonic projection of diplomatic and military power.

During the proxy rivalry in which Africa was embroiled over the next thirty years, the concept of American exceptionalism clearly prevailed in shaping the political, ideological and economic projection of US power during the Cold War. Despite the absence of direct military engagement between the US and the USSR, the two superpowers clashed through their support of opposing regimes. Ultimately, it can be argued that America emerged from these proxy wars victorious, asserting its dominance in the face of Soviet expansionist efforts. With the ideology of exceptionalism shaping its foreign policy actions in Africa during the 1960s, the USA fundamentally aimed to spread American concepts of liberty, freedom and democracy globally at a time of political division and opposition to communism. However, it is important to consider the ramifications for Africa, a continent left to pick up the pieces after decades of political and social turmoil. The development and proliferation of corrupt dictatorships, civil wars, environmental destruction, social upheaval and economic instability clearly define the Cold War’s lasting legacy there.


Ambrose, S. and D. Brinkley, Rise to Globalism: American Foreign Policy Since 1938. London: Penguin, 2012.

Dearborn, J. A. Exceptionalist-in-Chief: Presidents, American Exceptionalism, and U.S. Foreign Policy Since 1897. Mansfield: University of Connecticut, 2013.

Hollington, K. Wolves, Jackals and Foxes: The Assassins Who Changed History. New York: Thomas Dunne Books, 2007.

James, L. ‘Africa’s Proxy Cold War’, BBC World Histories, Issue 3, April/May.

Kalb, M. G. The Congo Cables: The Cold War in Africa – from Eisenhower to Kennedy. Macmillan, 1982.

Madsen, D. L. American Exceptionalism. Edinburgh: Edinburgh University Press, 1998.

Naimark, N. ‘Becoming Global, Becoming National’ in N. Naimark, S. Pons, & S. Quinn-Judge (eds.), The Cambridge History of Communism. Cambridge: Cambridge University Press, 2017.

Turse, N. Tomorrow’s Battlefield: U.S. Proxy Wars and Secret Ops in Africa. Chicago: Haymarket Books, 2015.

Image: Somalian troops.

The Lost Cimabue: Reflections on a Medieval Master

Written by: Tristan Craig

‘Woman discovers Renaissance masterpiece in kitchen,’ declared The Guardian on 24 September 2019, announcing the surfacing of a rare painting by the thirteenth-century Florentine artist Cimabue in the home of an elderly woman in northern France. Christ Mocked, one of only eleven known wood panel paintings attributed to the artist, was found hanging inconspicuously above the stove of the anonymous woman’s home, where it had remained – unassuming and undisturbed – for many years. The artwork’s arrival into public knowledge has garnered much intrigue; however, a great deal of mystery still surrounds the discovery – especially how the painting came to hang on the wall of a kitchen in Compiègne. I hope in this article to shed a little more light on this once-lauded artist, whose fall from celebrity has seen him almost erased from the artistic canon in contemporary scholarship, and to suggest how this exciting discovery might help restore some of his artistic legacy.

Cimabue was born Bencivieni di Pepo in Florence in around 1240. Both a painter and a mosaicist, he was credited by Giorgio Vasari, in his seminal 1550 text The Lives of the Most Excellent Painters, Sculptors, and Architects, as the artist ‘who spread first light upon the art of painting’. This accolade helped cement Cimabue’s reputation as a forerunner in propelling the Italo-Byzantine style forward, itself heralded as a welcome return to the high artistic style of Classical antiquity. He was amongst the first to explore perspective and naturalism in painting, focusing especially on enlivening the largely stylised iconography of his predecessors; examples can be found in the frescoes of the Basilica of Saint Francis of Assisi. Whilst many have been extensively damaged over the centuries, the surviving frescoes reveal his highly developed style and his skill for figurative depiction.

A Changing Fashion

It would be remiss to discuss the life and work of Cimabue without mentioning the artist – of arguably greater renown – believed in some scholarly circles to have been taught by the master: Giotto di Bondone. Giotto achieved great fame both as a painter of frescoes, a number of which adorn the walls of the Basilica of Saint Francis of Assisi alongside Cimabue’s, and as an architect. His artistic talent earned him the title of ‘caput magister’, or ‘head master’, at Florence Cathedral, his accomplishments there including the design of the campanile in 1334. That Giotto has come to overshadow Cimabue is largely due to his enormous renown in life rather than the mastery of his craft – something immortalised by Dante in his epic Divine Comedy. In Purgatorio, Canto XI, Dante introduces the Italian painter Oderisi da Gubbio, who laments the fleeting fame of artists, remarking that ‘Cimabue thought / To lord it over painting’s field; and now / The cry is Giotto’s, and his name eclips’d’. With a declining reputation amongst his contemporaries, and with so few artworks compared to Giotto, it is perhaps little wonder that Cimabue receives such limited recognition today.

It will take some time and several expert opinions before we can say with certainty whether this is indeed the work of the medieval Florentine artist, a task made only more difficult by how few of his works have survived. Whilst there is little doubt that the painting bears an unmistakable resemblance in style and content to Cimabue’s other wood panels, a more nuanced analysis will be required before it can be confirmed as his work rather than that of a follower. How it came to be owned by this family – who believed it to be a substantially less valuable Russian icon painting – is perhaps more difficult to discern, and will invite a great deal of speculation. Mirroring the 2014 discovery of a painting by Caravaggio in the attic of an apartment in Toulouse, there are a number of theories as to how the Cimabue could have come to rest on a kitchen wall. The socio-economic upheaval of revolutionary France, coupled with a growing veneration of classicism as the epitome of ‘high art’, would have seen a medieval artist such as Cimabue fall out of favour, and the lack of a signature on the panel aided its slide into obscurity – and into the hands of an unwitting dealer.

Restoring an Artistic Legacy

If experts can confidently agree that this ‘tatty old artwork’ – as Metro News Online described it – is indeed a Cimabue, then the discovery is an immensely important one. Whilst Cimabue may not be the household name that Caravaggio or Giotto are, the rarity of his surviving works adds great significance to the find. His crucial role in art history, at the transition from the homogeneity of iconography to the elaborate sensibilities of the Renaissance, ought not to be neglected. Yet Cimabue is repeatedly omitted from prominent academic works. Perhaps this is due primarily to a lack of attributable works; perhaps the debate surrounding the validity of Vasari’s account has further diminished his reputation as the teacher of Giotto; or perhaps Giotto simply propelled the Byzantine style further forward than Cimabue ever did, as is often suggested by comparing Cimabue’s Santa Trinita Madonna with Giotto’s Ognissanti Madonna. Let us set aside the writings of Vasari and Dante, and the questionable manner in which the press has chosen to report this find (erroneously referring to the late medieval artist as ‘Renaissance’ – but therein lies another article). Let us focus instead on restoring Cimabue’s reputation within the artistic canon, and on how the discovery of an unassuming icon painting in a small French kitchen might just help us do that.

Image: Christ Mocked, Cimabue.

Ancient Myths Retold

Written by: Lisa Doyle

Myths from the past permeate modern society and culture to an extent that most people do not realise. When using the word ‘mythology’, I am, in fact, referring to stories: stories that have been told and retold across generations. The mythological stories of Ancient Greece are the ultimate examples. Of course, many people are aware of Zeus, the god of thunder, and of heroes such as Heracles and Jason. What may be less apparent, however, is that these narratives are still being told in some shape or form today. This is the purpose of mythology: that tales are retold again and again, as part of an ongoing process of reiteration.

Certain issues become evident when one reads Greek mythology, and they are chiefly ethical: the misogyny in these stories is plentiful. Modern retellings have changed this, altering the way women are treated in these narratives. Literary examples include Colm Tóibín’s House of Names, which recounts the myths of the House of Atreus and, in particular, allows us to read Clytemnestra’s perspective. Madeline Miller’s Circe also foregrounds a female mythological character, as we come to understand the legitimate reason for Circe’s eccentric behaviour (if that is the appropriate word for someone who enjoys turning men into animals). The explanation provided by Miller – that Circe is a victim of sexual assault – is nowhere alluded to in the Odyssey, the epic poem which tells her story.

Examples of mythological influence also abound in film. They include O Brother, Where Art Thou?, the Coen Brothers’ film based on the myth of Odysseus from Homer’s Odyssey, and of course Thanos, the memorable villain of the Marvel universe: a character fixated on death, based on Thanatos, the Greek personification of death. Finally, the Spike Lee film Chi-Raq is based on a fifth-century BC play – Aristophanes’ Lysistrata – in which the all-female protagonists decide to withhold sex from the men of the city in an effort to end their participation in an ongoing war.

All of this leads me to Stephen Fry, who has published two volumes on Greek mythology in recent years, Mythos and Heroes. Fry’s work is both entertaining and important for a number of reasons. Throughout, he reminds the reader that it is perfectly acceptable to be confused by the horde of Greek names these myths present – as Danny DeVito’s character says in Hercules, ‘Odysseus, Perseus, Theseus – a lot of ‘euses’’. In doing so, Fry makes Greek mythology more accessible, and less intimidating, to the reader. He narrates these myths in a casual, conversational manner, and the characters in his books adopt the same tone. Alongside helpful footnotes that orient the reader and explain etymology, a healthy injection of humour makes these books all the more enjoyable: notable examples include the goddess Hera telling Zeus that he has a string of drool dribbling from his chin onto his lap, and Heracles referring to his failed attempt to retrieve the golden apples of the Hesperides as ‘fruitless’. In Heroes, Fry does an excellent job of rendering the personalities of heroes like Jason, Heracles and Bellerophon with such characteristics as to make them far more relatable and familiar to the modern reader; as we recognise qualities in the obstinate youths on the page, we become more connected with the story.

It is a remarkable achievement to produce a piece of work that is both accessible to those reading myths for the first time and enjoyable to those who study Classics. Classics can be quite a ‘stuffy’ subject, so it is of paramount importance that we continue to relate these stories, which form the very foundations of Western culture, in an approachable and inspiring manner. As Stephen Fry writes in his introduction to Mythos, his focus is not on explaining these stories but on telling them.

Image: Photograph of Stephen Fry.