The Pianist from Syria by Aeham Ahmad

Written by: Kvitka Perehinets.

Written by a second-generation Palestinian refugee, The Pianist from Syria offers a detailed account of the life of a musician growing up in Yarmouk, an unofficial refugee camp, before and after the outbreak of the Syrian war.

The first half of the book takes the reader on a journey through Ahmad’s childhood in a Palestinian refugee camp of 160,000. With a father who was a blind violinist, carpenter, and craftsman of musical instruments, much of the author’s youth was strongly influenced by music. Aeham’s recollections of his rebelliousness, often demonstrated in his tendency to skip school only to lock himself away in the back of his father’s shop to practise piano, are intricately intertwined with a narrative of Yarmouk life. Ahmad successfully paints a colourful picture of the neighbourhood, its residents, and the culture and traditions that make Yarmouk feel like a bustling, tight-knit community. Throughout the book, it is referred to as a “camp,” despite having long been part of Damascus: many settled there purely because they had nowhere else to go, and many were never granted Syrian citizenship. The description Ahmad provides of his life before the war makes the second part of the book all the more tragic. Picturing the siege of Yarmouk – families living off water with cinnamon, children being shot in the streets – against the backdrop of the happy, relatively untroubled childhood described several pages before leaves the reader with a sense of hopelessness.

Ahmad does not spare details: when Yarmouk becomes a pivotal location for fighting between the Syrian government and rebel forces, the reader is fully immersed in the despair and anguish of the situation. Descriptive accounts of people lining up to receive aid packages, of life under the constant danger of sniper fire, and of the anxiety of passing through checkpoints, where young men could be picked out at random and arrested at any time, deepen this immersion. Yet, despite the complicated nature of the politics at hand, Ahmad does a brilliant job of making them understandable to the reader, while effectively communicating the sheer brutality of the Assad regime and the historical background of the developing conflict.

The Pianist from Syria is a story of heartbreak, survivors’ guilt and anger, but it is also a story of hope, strength and faith. It is a reminder of how quickly daily life can change: buildings to rubble, feasts to cinnamon water, families to individuals – all reduced and changed within a couple of months, regardless of whether you are rich or poor. The ease with which everything changed stands in stark contrast to how complicated the situation had become by the end.

Aeham Ahmad’s voice provides a sobering read for those who seek a more personal, intimate account of one of the world’s most devastating conflicts, while also offering a measure of its historical background.

Railways, Race, and Lions – The Tale of the Tsavo Man-Eaters

Written by: Lewis Twiby.

The Uganda Railway appeared to be one of the best examples of imperial negligence by the British Empire. Contemporaries quickly dubbed it the ‘Lunatic Express’ for its high construction cost (over £5.5 million) and for apparently leading ‘nowhere’. British imperialists claimed that the railway was required to secure the East African Protectorate, now modern Kenya, as it would prevent other European empires from moving into the area and constructing dam projects which would threaten Britain’s access to Egypt and consequently India. So, from Mombasa on the coast to the Kingdom of Buganda along the shores of Lake Victoria, construction began in 1896 on a railway stretching 700 miles. Despite successive disasters it was finally completed in 1901, but the cost of running it meant it was mostly abandoned by 1929. One of the biggest disasters to strike the railway occurred at the Tsavo River, where two lions killed around thirty workers. From March to December 1898 the infamous ‘Tsavo man-eaters’ preyed on the workers, and their story has inspired countless narratives and films – most famously The Ghost and the Darkness (1996), starring Val Kilmer and Michael Douglas. The story of the man-eaters offers an insight into labour and colonialism in East Africa.

     The construction of the railway offers three different accounts, largely depending on race. The first is the African viewpoint. The railway cut through the land of various ethnic groups including the Kikuyu, Maasai, Kamba, and Luo. Kenya would later be known as the ‘White Man’s Country’ thanks to its white settler population, whose settlement in the Rift Valley was first opened up by the railway. The key imperialist Frederick Lugard recorded the fertility of the soil in 1893, ‘with excellent and luxurious pasture throughout the year’, which offered prime farming land for a settler population. During the construction of the railway the local communities were forcibly evicted from their land, which later allowed farmers to claim these ‘empty’ lands. The forcible arrival of British industrialism created a new economic system for Africans to become part of. Some communities became labourers to help build the railway for the British; however, as they were few in number, the Uganda Railway Company had to rely on alternative sources of labour. This brings us to the second account, that of South Asians.

     Even though Britain abolished slavery in its empire in 1833, this did not end slavery. Instead, it was recast as a new system called ‘indentured servitude’. Indians were hired on contracts – in East Africa these lasted five years – under which they worked for the allotted time and would only be paid at the end. However, this was an excuse to utilise slave labour: employees could not leave the contract, corporal punishment was allowed, and many people were worked to death. Indians, primarily from poorer regions, were put onto these contracts and sent to work across Britain’s far-flung empire. An Indian diaspora was formed across the world, ranging from the Caribbean, to Fiji, to South Africa, and to Mauritius. Colonial administrators became frustrated at Africans resisting work, and, although Indians would also resist the backbreaking work, they were used because they lacked ties to local communities. Over 19,000 people from the Punjab, Sind, and the North Western Provinces (today’s Uttar Pradesh) were sent to work on the railway – Hugh Tinker estimates that 7 per cent of them died and a fifth were declared ‘invalid’ upon returning. Ironically, British abolitionists had championed the railway as a way, in the words of The Anti-Slavery Reporter, to engage in the ‘suppression…of the slave trade,’ despite the project actively relying on unfree labour.

     Finally, we have the view of the colonisers. Alongside the desire to secure Britain’s imperial holdings, the paternalistic view towards the colonised was very much in evidence. As already mentioned, abolitionists viewed British expansion into the region as a way to stop ‘petty tribes’ from exploiting their ‘weaker neighbours’. The ‘White Man’s Burden’ was regularly used to justify colonial expansion – colonised peoples had to be ‘civilised’ by the guiding hands of Europeans. However, there is a stark hypocrisy in this narrative – colonisation regularly entrenched the very ‘regressive’ traditions which colonisers argued they were combatting. The Anti-Slavery Reporter is a prime example of this. While stating that the Uganda Railway could be used to end slavery in East Africa, and admitting that indentured servitude could lead to ‘very grave evils’, it argued that the system ‘affords an inducement to the men to do their best’. Kenya soon became a colony where white Europeans could lead a life of aristocratic pleasure at the expense of the non-white population. In particular, big game hunting became a popular pastime, and famous hunters, including Theodore Roosevelt, visited to hunt animals. John Henry Patterson, hired to oversee the construction of the railway at Tsavo, was an avid hunter, and his account of the man-eaters shows this. He gives paternalistic descriptions of Africans on one page, and on the next boasts how he ‘was especially anxious to bag a hippopotamus.’ This brings us to the man-eaters.

     Two male lions hunted workers along the Tsavo River, and Patterson’s account would greatly mythologise them. Originally, he claimed that ‘they had devoured between them no less than twenty-eight Indian coolies, in addition to scores of unfortunate African natives’, but later he would claim that they killed over a hundred people. The man-eaters became part of Kenya’s legend – it was the land where lions ate men, so skilled hunters could prove their worth. Research by zoologists has shown that the lions started eating humans because one of them had a damaged tooth preventing it from hunting its traditional prey – Patterson later said that one of his bullets caused this damage. Patterson’s credibility as the fearless white hunter would have been dented if the infamous ‘man-eaters’ only ate humans because they were injured. Exploitation is also a key reason why they started eating humans. Slave trading in the region regularly left corpses behind due to its brutal conditions, and the lions’ regular prey started to dwindle. Railway construction bisected habitats, cutting the lions off from their traditional prey, and a rinderpest outbreak wiped out local populations of buffalo, warthogs, and antelope. This outbreak occurred because cattle were imported from India in the belief that they could help ‘acclimatise’ the South Asian labourers, with the unintended consequence of spreading disease to the local wildlife. Humans became an alternative food source as a result.

     Race and labour practices were a key reason why the lions managed to kill as many people as they did. The Uganda Railway Company wanted to maximise profits regardless of the human cost, and, buoyed by the idea that it was protecting the empire and preventing slavery, it ignored labour protection. The Indian government had been trying since 1890 to protect labourers abroad, but underhand negotiation allowed over 1,300 people to leave Karachi before anyone in India could check if they were even healthy enough to work. Africans and Indians were forced to work long hours with the promise of pay in the future, and in the heat of East Africa this raised mortality rates. Samuel Ruchman has emphasised that papers reported tales of clashes between ‘African tribes’ and the lions, but ignored the thousands who died from hard labour and disease. In 1899 Doctor John Brock tried to convince the company to introduce vegetables to food rations due to an outbreak of scurvy, but his report was rebuffed as it would cost too much. Most workers also had to sleep in tents, which offered flimsy protection against the claws of a lion. Meanwhile, overseers and managers had the luxury of medical aid, food, shade, and the protection of actual buildings or guards – it is no coincidence that these figures were white Europeans. Lion attacks soon fell along racial lines.

     Patterson arrived with the workers at the Tsavo River in March 1898, and the impoverished and overworked Africans and Indians already faced poor working conditions. Just as yellow fever and malaria had decimated the workers constructing the Panama and Suez canals, they struck down many people here. The tsetse fly also spread sleeping sickness, something made worse as rinderpest had killed off most of the fly’s regular hosts. During the night the lions would sneak into camps and drag people from their tents – as the months went on the lions grew braver, and both would venture into camp to claim a victim each. Desperate, workers lit campfires to scare off the lions and constructed fences of thorn bushes in the hope of deterring them – both failed. These attempts made disease worse – fires attracted malaria-carrying mosquitoes, and tsetse flies made their home in the thorn bushes. The racial hierarchy of work meant that only non-white individuals were killed – safe in fortified areas, Europeans escaped the lions. The district officer was nearly killed, but only because he almost ran into one of the lions at the train station, not because the lion was stalking him. Attitudes to the attacks also differed based on race.

     As argued by Harriet Ritvo, hunting lions became a metaphor for the domination of Africa – hunting the ‘King of the Jungle’ showed mastery over the land, and therefore the people. Patterson chastised the ‘coolies’ for being fearful of the wildlife as ‘they were sure it was a lion’ – in this context a perfectly valid fear. However, as Patterson did not face being eaten, valid fears were seen as evidence of Indians being ‘never remarkable for bravery’. In a callous remark he even mocked the attempts they made to avoid lions at night, including trying to erect tents on water-tanks, trees, and roofs. In the end, it was the workers who forced Patterson to act properly. Small-scale strikes regularly occurred on the railway – one in 1900 was reported by The Times of India over poor work conditions and a lack of access to medicine. Conspicuously absent from Patterson’s account, labourers brought work to a standstill until the lions were dealt with at the end of 1898 – keen to keep his image intact, he claimed instead that they gave him a bowl engraved with ‘Hindustani’ words to show their thanks.

     Two lions driven to hunting humans became legends in Kenya’s history. The promotion of the colony in later years as one for the ‘white man’ turned them into a daring tale of man conquering nature. However, this narrative obscures the multifaceted nature of colonialism. A railway built for imperial prestige changed the landscape, brought a weakened and enslaved population to East Africa, and created animal attacks which fell along racial lines. It also reveals another overlooked aspect. African and Indian labourers suffered from the lion attacks, but they were also the ones who put pressure on Patterson to make a concerted effort to hunt down the lions. These workers managed to halt imperial expansion – a rare occurrence, often overlooked in the history of the British Empire.

Bibliography

‘The Uganda Railway’, The Anti-Slavery Reporter, 19:3, (1899), 137-139

‘The Uganda Railway’, The Times of India, (23/01/1899), 4

‘Uganda Railway Coolies: Some Alleged Grievances, “The Uganda Railway Strike”’, The Times of India, (20/09/1900), 6

Hill, M.F., Permanent Way: The Story of the Kenya and Uganda Railway, (Nairobi: 1949)

Meredith, M., The Fortunes of Africa: A 5,000 Year History of Wealth, Greed, and Endeavour, (London: 2014)

Patterson, J.H., The Man-Eaters of Tsavo, and other East African Adventures, (London: 1907)

Ruchman, S., ‘Colonial Construction: Labor Practices and Precedents Along the Uganda Railway, 1893-1903’, International Journal of African Historical Studies, 50:2, (2017), 251-273

Tamasula, G.A., ‘The Lions of Tsavo: Man-Made Man-Eaters’, Western Humanities Review, 68:1, (2014), 195-200

Tinker, H., A New System of Slavery: The Export of Indian Labour Overseas, 1830-1920, (Oxford: 1974)

‘Out of the Barbershop and into the Future’: Modern Medicine of New York City in 1900

Written by: Jack Bennett.

The Knick, a period medical drama from Academy Award-winning director Steven Soderbergh, is set during the Gilded Age of American history and encompasses the era’s medical advancements, the racial tensions found in both medical treatment and wider society, and the tumultuous political climate of the United States. The series provides a window through which the harsh reality of illness and incurability on the wards of the Knickerbocker is revealed, mirroring the trichotomous nature of corruption, consumption and capitalism in the tension-ridden socio-political environment of New York City and the United States at the turn of the twentieth century. This article will explore the show’s historical depiction of medicine and the socio-political landscape of the USA in 1900 through a synthesis of historical criticism.

Set within a fictionalised Knickerbocker Hospital in Lower Manhattan in 1900, the show’s main protagonist, Dr John Thackery (Clive Owen), becomes a tragic hero, plagued by his own cocaine addiction amidst the noble pursuit of developing medical practices to save lives; he is loosely inspired by the drug-addicted medical pioneer Dr William Stewart Halsted. Dr Algernon Edwards (André Holland), meanwhile, is another fictional character, possibly an amalgam of two notable doctors – Daniel Hale Williams and Louis T. Wright. Williams set up Chicago’s first non-segregated hospital and was the first African American to be admitted to the American College of Surgeons. Wright, a Harvard graduate, was the first African American doctor to work as a surgeon in a non-segregated hospital, at Harlem Hospital in New York City. Critically, the show conveys the sense of immediacy and rapidity of medical development at this time. For example, between 1880 and 1890, approximately 100 new types of operations were conceived, made possible by progress in anaesthetics and antisepsis in the latter part of the nineteenth century. Meanwhile, through its creation of 1900 Manhattan, combining the narratives of corrupt political officials, criminals, academics, and exploited immigrants, women and African Americans, The Knick weaves a rich tapestry of characters, placing agency in the individuals and communities ordinarily subjugated during this period.

The dimly lit interiors provide the backdrop to the domestic tensions and relations which reflect the wider developments within New York at the dawn of a new century. Foremost amongst these central themes is the show’s depiction of racial injustices and relations at the nadir of racial policies in America. Like its television counterpart, the real Knickerbocker Hospital had a policy of refusing to treat African Americans at the beginning of the twentieth century. The series reveals both implicit and explicit forms of discrimination, along with the dichotomy between the boundaries to African American social mobility and the increasing degrees of interracial communication, collaboration and acceptance throughout society. Moreover, the position of the varied immigrant populations in the United States during this period is expertly handled by Soderbergh throughout the two seasons of the show. Major shifts in the sources of immigration occurred during the 1890s, with 3.5 million newcomers entering the USA during the decade despite the onset of economic depression from 1892. Over half of these came from new sources of immigration in Southern and Eastern Europe, such as Catholics from Italy and Jewish populations from Poland, residing alongside nativist communities of Irish, German and British descent. This produced an inherently global nation and melting pot of culture, along with the resulting conflicts. Amid this influx, The Knick illuminates the reality of fin de siècle New York as one in which immigrants arrive believing the promises, but far too many do not long survive the realities. Constructing this reality was a political machine system beset by corruption and contests for authority between established sources of wealth and power and an emerging Progressive era, with greater Democratic political support and an increase in the pace of social development. For example, The Knick constructs a range of female characters, such as the hospital’s reform-minded patron, Cornelia Robertson (Juliet Rylance), illustrating the increasing socio-political mobility of women at the turn of the century, assuming roles as medical practitioners, nurses and socio-political reformers within New York City.

Medical historians such as Howard Markel and Peter Kernahan, however, have raised the issue of historical inaccuracies in The Knick. Markel, for instance, argues that the series conflated medical developments which took place from the 1870s into the early decades of the twentieth century, akin to ‘conflating the Middle Ages to colonial America to the Civil War’. Kernahan, meanwhile, identified a multitude of historical inaccuracies, in particular the trade of cadavers for medical experimentation within the hospital, which would have been unlikely in New York in 1900: Anatomy Acts had been passed in the mid-nineteenth century, preventing the practice of grave robbing for dissection. Despite these historical flaws, The Knick achieves an intricately written narrative of historical change through Soderbergh’s uniquely active and engaging cinematographic form.

Nevertheless, The Knick reveals the perils of progress, straddling both medical modernity and static tradition. In depicting the vices and iniquities of early twentieth-century society, it demonstrates a sense of unforgiving anti-romanticism, creating a graphic, vividly detailed depiction of a shocking house of horrors in the pursuit of modern ascendancy, despite the obvious side effects and inequalities created along the way. This is particularly evident in the drama’s elevation of the surgeons to almost deified figures, balancing innocence with suspenseful and graphically portrayed progressive and experimental medical procedures. But these individuals must experience their peripeteia, whether through hubris or dualism, pursuing unsustainable double lives and moral compromises. This relates to Charles Rosenberg’s 1971 call for a ‘new emphasis’ in the history of medicine, moving beyond the focus on the intellectual life of physicians to their activities as healers and as members of a profession. Medical historians, Rosenberg attested, were required to place medicine in its socio-cultural context and to explore the ways in which socio-economic factors might have influenced medical developments. This brought into the historical framework the increased authoritative roles of both African Americans and women at a time of widespread disenfranchisement, segregation and the propagation of second-class citizenship across the United States. By delving into an often-overlooked period of history, the medical drama experiences a fresh revitalisation. Arguably, however, the show is instilled with a historically revisionist approach and agenda, exploring a field shaped and dominated by white men and adding the contributions of African Americans and women.

The Knick, by blending seemingly gothic elements with unflinching medical realism, attempts to achieve dramatic historical accuracy, but is best appreciated as a pastiche of fact and fiction. This allows the reality of progressiveness, and the sense of a new age dawning, to be explored through the development of intriguing and complicated individuals and communities. It reveals the direct and intimate interplay between medical science and its social environment, which became co-dependent during this transformative period. What The Knick fundamentally captures in its exquisite visual form, aesthetic and historical context is the fraught, unequal and transformative birth of modern medicine in the United States at the turn of the twentieth century.

Bibliography

Image Source: 

Stanley B. Burns, MD/Burns Archive, found in ‘The Cocaine, the Blood, the Body Count’, The New York Times, August 1, 2014, https://www.nytimes.com/2014/08/03/arts/television/modern-medicine-circa-1900-in-soderberghs-the-knick.html?auth=login-google. Accessed on 11 January 2020. 

The Knick (Series 1 and 2). Directed by Steven Soderbergh. 2014–15. HBO Cinemax

Adams, Peter. ‘Modern Medicine Had to Start Somewhere’. Health and History 18, no. 1 (2016): 174-79.

Deng, Boer. ‘How Accurate Is The Knick’s Take on Medical History?’, Slate, August 08, 2014, https://slate.com/culture/2014/08/the-knick-true-story-fact-checking-medical-history-on-the-cinemax-show-from-steven-soderbergh.html. Accessed on 11 January 2020. 

James, Nick. ‘Further notes on The Knick’, Sight and Sound Magazine, 29 January 2015, https://www.bfi.org.uk/news-opinion/sight-sound-magazine/reviews-recommendations/further-notes-knick. Accessed on 14 January 2020

Kernahan, Peter J. ‘“A Condition of Development”: Muckrakers, Surgeons, and Hospitals, 1890−1920’, Journal of the American College of Surgeons 206, no. 2 (2008): 376-384.

Labuza, Peter. ‘Shock treatment: The Knick’, Sight and Sound Magazine, 29 January 2015, https://www.bfi.org.uk/news-opinion/sight-sound-magazine/features/shock-treatment-knick. Accessed on 14 January 2020

Schuessler, Jennifer. ‘The Cocaine, the Blood, the Body Count’, The New York Times, August 1, 2014, https://www.nytimes.com/2014/08/03/arts/television/modern-medicine-circa-1900-in-soderberghs-the-knick.html. Accessed on 11 January 2020.

Stanley, Alessandra. ‘No Leeches, No Rusty Saw, But Hell Nonetheless’, The New York Times, August 7, 2014, https://www.nytimes.com/2014/08/08/arts/television/the-knick-a-cinemax-medical-drama-set-in-1900.html. Accessed on 11 January 2020.

The Writing on the Wall: The Perilous Future of Historical Sites and Monuments

Written by: Tristan Craig.

In March 2014, officials in the Huairou District of Beijing announced their intention to designate part of the Great Wall of China a ‘graffiti zone’, formally allowing individuals to freely etch their names into the millennia-old fortification. Located within the Mutianyu section – one of the best preserved areas of the wall and particularly popular with tourists – the controversial decision was made in an effort to abate increasing destruction by overzealous visitors. Authorities claimed this drastic step was a necessary one, arguing that patrolling officers and warning signs across the 5,500-mile-long structure had already proven ineffective, and that granting people permission to inscribe their messages on a small portion of the wall would lure them away from defacing it elsewhere.

The Great Wall of China, however, is not unique in its suffering. Just three months after the dedication of the graffiti zone in Beijing, part of the parapet on the historic Pont des Arts bridge in Paris (originally constructed during the reign of Napoleon Bonaparte in the nineteenth century before being rebuilt in the 1980s) collapsed under the weight of padlocks attached to its railings. In a tradition in which amorous couples seek to immortalise their devotion by marking their initials on a ‘love lock’ before attaching it to a bridge or other railing, the literal weight of this act proved costly. Subsequently, some forty-five tonnes of padlocks were removed from the Pont des Arts and destroyed. Locals in Verona, Italy are similarly bearing the burden of sentimental tradition. The courtyard of the Casa di Giulietta, believed to have belonged to William Shakespeare’s eponymous – and fictitious – heroine in Romeo and Juliet, has become awash with scribblings, engravings and paper notes tacked to the walls with chewing gum (despite the infamous balcony from which she delivers her soliloquy being a twentieth-century addition). The response from local authorities in this instance was to provide removable boards to satiate the desire of visitors to commemorate their love and to prevent further damage to neighbouring properties.

Human defilement is not the only factor at work in the destruction of outdoor landmarks, however. Erosion caused by weather conditions and natural degradation over time is a constant threat to their conservation, and how best to preserve them whilst maintaining the integrity of the original structure has often proven a contentious issue. The Pictish standing stones of medieval Scotland exemplify this struggle, as their pictorial and ogham inscriptions are gradually worn by the harsh climate. In Forres, on the Moray coast, the monolithic ninth-century Sueno’s Stone now stands interred behind glass panels to prevent further damage, although it remains on the site where it was first erected. The Anglo-Saxon Ruthwell Cross, which once stood in the yard of the church after which it is named, is now housed inside the building in an apse specially constructed for it in 1887. Whilst sheltering the cross – which dates to the eighth century – from the elements has ensured its survival, it no longer serves the purpose for which it was originally created. What was once an object to be observed by passing laity – an ecclesiastical monument to the omnipotence of God and a beacon for travelling pilgrims – is now an artefact of historical religious significance. Whilst the artistry of the cross can still be admired by those who enter Ruthwell Church today, its context in the annals of medieval Christianity ought not to be disregarded.

Preserving and restoring structures subject to elemental deterioration presents a plethora of issues to conservationists, something only exacerbated at sites which benefit greatly from the tourist trade. Drawing new swathes of visitors to an area on occasion serves as the driving force in restoring ancient monuments, but becomes problematic when the work is done to an inadequate standard. Addressing cosmetic deterioration on a merely superficial level often fails to fully address the source of the decay, or does so in a manner unsympathetic to the original architecture. In 2016, the Great Wall of China once again hit the headlines due to what individuals, including the chairman of the China Great Wall Society, viewed as an incredibly unsympathetic repair to a mile-long stretch of it; official plans were announced in 2019, however, to reinstate the wall and prevent further irreversible damage.

Needless to say, preserving a monument as grand in scale as the Great Wall of China is not quite as simple as installing plywood panels along its ramparts or placing it behind armoured glass sheets. The dedication of the graffiti zone may be seen as the best possible course of action in what has become an increasingly dire situation. The most drastic course of action – preventing tourist access to the well-travelled sections of wall – would prove a gravely arduous and immensely costly task given the economic boost provided by tourist commerce, although graffiti is but one problem threatening the future of the structure. An estimated 30% of the Great Wall built by the ruling Ming dynasty has been lost to natural erosion and the theft of bricks by tourists and locals alike. Although patrols continue along the route, maintaining 5,500 miles of fortification, visited annually by an estimated eleven million people in the most popular sections alone, is a complex issue.

On occasion, however, the inscriptions left behind by figures eminent in their own right may actually serve to draw visitors to an area. A signature purported to belong to the Romantic poet Lord Byron on a column of the Temple of Poseidon at Cape Sounion in Attica draws as many literary admirers as it does classicists. It would be malapropos to suggest, however, that making an allowance for this particular piece of history serves to encourage the public debasement of ancient monuments. The desire of people to leave their mark on sites of great provenance is all too common an issue facing conservationists, and whilst individuals may act in ignorance of the consequences of their vandalism, it is a problem that needs to be addressed on a global scale before historical sites are irreversibly damaged – or lost entirely.

BIBLIOGRAPHY

Aitken-Burt, Laura. “Rewriting History.” History Today, January, 2020.

Chance, Stephen. “The Politics of Restoration.” The Architectural Review 196, no. 1172 (October 1994): 80-4.

Coldwell, Will. “Great Wall of China to Establish Graffiti Area for Tourists.” The Guardian, 4 March, 2014. https://www.theguardian.com/travel/2014/mar/04/great-wall-of-china-graffiti-area.

Image: https://www.discoverchina.com/article/reach-great-wall-china-from-shanghai

Kinloch Castle, Isle of Rum.

Written by: Mhairi Ferrier.

Majestic, intriguing, remarkable, captivating…

These are just some of the words that come to mind when describing the Isle of Rum, located in the Scottish Highlands. The largest of Scotland’s Small Isles, accessed by ferry from Mallaig, Rum is these days maintained by a combination of Scottish Natural Heritage (SNH) and the Isle of Rum Community Trust. In 2009 and 2010 land and assets in Kinloch Village and the surrounding area were transferred from SNH to the Community Trust – a landmark change for the island that marked a new phase in its history. Kinloch Castle, located in Kinloch, the island’s main village, is still under the control of SNH.

             The Isle of Rum has a deeply rich history, spanning from the Ice Age to interactions with Vikings before falling victim to the Highland Clearances. A piece of this length could not begin to do justice to the comprehensive history of the island, although there are some points in this history which hold the key to the island’s economic future. At its height the island community numbered nearly 450 inhabitants; that was, however, before the Highland Clearances removed these people from their homes. Never again has the island hosted such a population – today’s community is made up of fewer than 30 people. Post-Clearances, the island passed through different owners before becoming the possession of the Bullough family.

            Between 1897 and 1900, Kinloch Castle was constructed on the island, commissioned by George Bullough. Bullough had inherited a large fortune and the island from his father, John, who had made his money in Lancashire’s textile industry. Extravagantly built, the castle cost £250,000 (equivalent to millions of pounds today) and was decorated in the Victorian fashions of the age. The Bulloughs hosted guests on the island, offering a wealth of activities within the castle and across the rest of the island: everything from a spot of dancing in the ballroom to a game of golf or tennis, though perhaps guests of this station would be more interested in stalking. Yet life on the island was a tale of two halves during the Bullough years: on one side the extravagant lifestyle of the Bulloughs and their guests, and on the other those working for the Bulloughs, for whom island life was most likely a struggle.

            The First World War put an end to this extravagance, with George Bullough gaining a military position and workers enlisting in the army. After the war drew to a close, fewer than a handful of the workers returned from the conflict, and the Bullough family frequented their island paradise less and less. There was little appetite for the lavish pastimes and dinner parties that had been the norm for the privileged during the Victorian period and the beginning of the Edwardian epoch. George Bullough’s death in 1939 led to the castle being frequented even less, and his widow, Lady Monica, sold the island (including the castle) in 1957 for a sum of £23,000. Sir John Betjeman was largely correct when he predicted that:

In time to come the Castle will be a place of pilgrimage for all those who want to see how people lived in good King Edward’s days.

Kinloch Castle quickly became a staple for any tourist visiting the island and for many years hosted hostel accommodation and a pub. There were, and remain, regular tours through which visitors can gain an understanding of the lifestyle led within the castle, in stark contrast to the pursuits of an ordinary island man or woman. As most know, the upkeep of any historic building is costly, and this struggle led to the closure of the castle’s hostel and pub in 2015. The campaign to keep the castle in its prime continues, supported by the Kinloch Castle Friends Association (KCFA).

            KCFA have great plans going forward, which include re-opening the hostel with brand new accommodation as well as restoring the museum rooms of the castle to their former glory. This would not only boost the amount of accommodation available for visitors on the island but also help boost the economy for locals. With new housing being built on the island, things would appear to be going from strength to strength. However, there is one snag in this plan: KCFA’s asset transfer bid was rejected by SNH, which did not believe the association’s plans to be financially viable. With each such delay, the castle deteriorates further, increasing the sum required to complete the restoration.

Well, what now?

            KCFA are appealing for further financial support in order to make their plans a reality. Should further funding not materialise, demolition is a stark possibility that SNH are considering. While the demolition of historic buildings is not unusual, it would be a disaster in this case. Kinloch Castle is a unique part of Rum’s fascinating history: a building which tells the story of the decadence and wealth of the most privileged in the early 1900s, in true contrast to the life of the working-class islanders. The castle is a real symbol of Highland history; it demonstrates what happened on many of the region’s estates after the Clearances. As such, the castle must remain as a reminder for all. Furthermore, the castle would provide a vital element of the local economy should it remain. With the repopulation of the vast Scottish Highlands now taking shape thanks to various initiatives, any available boost to local economies is vital to making it a success.

            This new decade could be remembered as the one in which Kinloch Castle is demolished, or it could be the one in which a revitalised and restored castle is able to make a real impact on the community. The combination of new housing, new opportunities under community ownership and a restored castle no longer controlled by SNH would be a monumental development for Rum.

For now, we’ll have to wait and see.  

Image Source: http://www.isleofrum.com/isleofrumheritag.php

The Quagga and Colonialism

Written by: Lewis Twiby.

On 12 August 1883 the last known quagga died in captivity in Amsterdam Zoo; surveys could find no traces of quagga in the wild, confirming its extinction. Long thought to be a separate species of zebra, the quagga was shown by DNA tests in the 1980s to be a subspecies, once common across the plains of what would become South Africa. Unlike other infamous cases of animals being driven to extinction by human activity – most notably the moa of New Zealand and the dodo of Mauritius – the quagga had lived alongside humans for millennia. In fact, the name ‘quagga’ partially comes from the local Khoikhoi name. Instead, the extinction of the quagga was deeply entwined with imperial culture and the formation of settler rule in South Africa.

From the early 1600s Dutch settlers created colonies on the southern coast of what would become South Africa. From 1795 the British took over the colony to secure shipping routes to India, and clashes began between the Dutch and British settlers. To avoid British rule the Dutch farmers began what has since been known as the ‘Great Trek’ after 1836; these ‘voortrekkers’ would later become a key part of Afrikaner national identity, especially as British rule tried to reassert itself over them. The white settlers claimed they were pushing into ‘free’ land where they could make a new start; however, this claim came at the expense of Africans. Although there was no intensive sedentary farming, that did not mean that the land was unclaimed. Various African peoples laid claim to the lands, which hosted a range of different states and economic structures, from pastoralism to small-scale farming to the expansionist Zulu Empire. The voortrekkers enslaved or displaced Africans from their land and helped destabilise the Zulu Empire to prevent it from being a threat.

The arrival of Europeans changed how the environment was treated. Although it is important not to fetishise pre-colonial land usage – wide-scale pastoralism had caused increased pressure on the land in Zulu and Xhosa communities – it is important to stress how dramatically land usage shifted. Just as in the American West, the southern African land was divided between individual farms (of varying sizes), which limited where wild animals could move. Herding animals like the quagga need wide areas so they have plenty of food to eat without destroying the local environment – millions of zebra and wildebeest make the trek from the Serengeti in Tanzania to the Masai Mara in Kenya for this reason every year. Herds of quagga therefore tried to reach their regular grazing grounds but were instead faced with Boer farms. To prevent the quagga from competing with their own grazing herds, or from eating their crops, farmers resorted to shooting stray herds. Quagga meat was also a good way to get quick food without killing off a possibly prized animal, and their skins could be sold for extra funds.

At the same time, the quagga became a prized animal for menageries back in the metropole. The quagga’s unique skin made it an interesting addition to any wealthy elite’s personal collection – Cusworth Hall in my own town of Doncaster even had quagga grazing on its grounds in the 1700s. When the first zoological gardens started emerging in the 1820s, such as London Zoo, quaggas were in high demand for their appearance and for colonial experiments. Naturalists hoped to breed quaggas with horses to create a new species that could be used in both Europe and Africa. There is also an underlying colonial ideology behind why exotic animals were in demand for zoos and menageries. As argued by Harriet Ritvo, having a seemingly rare, unique, or exotic animal was part of a wider imperial power dynamic – to own an animal from a colonised region showed both the power of empire and your own wealth. It demonstrated Britain’s ability to move an animal across the world, and the owner’s importance in being able to engage in this power play.

However, many zoos were initially unequipped to look after exotic animals, and it was not uncommon for new arrivals to die within a year. London Zoo’s A.D. Bartlett, who oversaw the animal population during the late nineteenth century, wrote that the zoo had to invest heavily in looking after elephants and rhinos because they were hard to obtain, but as monkeys were cheap, there was little worry if they died. Initially the quagga was viewed in this way. Their large herd sizes and apparent abundance meant that they were seen as dispensable, but still sought after, animals. Furthermore, the brutal capture and transport of animals meant that many more had to be caught than were needed, due to high mortality.

These factors drove the quagga to extinction. Demand to fill zoos in Europe, and policies of extermination to preserve farms in Africa, meant that quagga numbers quickly dwindled. As they were only found in southern Africa, the population rapidly collapsed – although common, they were only common in one area. London Zoo’s single mare was photographed five times between the 1860s and 1870s, before she died, by the zoo’s chief photographer Frederick York. The rapid extinction of the quagga meant that these are the only photographs of a living quagga. The last known wild quagga was shot in 1878, and when the last one died in captivity in 1883 the zoo requested hunters find another, not realising how quickly the animal had gone extinct. Admittedly, all zebras were locally referred to as ‘quaggas’, which may have caused the confusion. Thanks to colonial settlement and exploitation the quagga had gone extinct.

Studying the quagga shows the various ways colonialism impacted colonised societies. Unfortunately, the quagga was not the only case of settler colonialism driving animals to extinction – passenger pigeons and thylacines suffered the same fate, and the bison almost did. The quagga offers a warning for the future. Neo-colonialism means the natural world is being destroyed in order to fund the economies of the global north, threatening both humans and nature. Colonialism may very well drive orang-utans, macaws, and caimans, to name just a few, to extinction.

Image source: https://www.britannica.com/animal/quagga

A Letter To My Students

Written by: Dr Jake Blanc.

A letter to my students:

I do not want to be on strike. None of your lecturers do. We would rather be inside our classrooms giving a lecture, or in a seminar room discussing a reading, or holding office hours to talk through an essay assignment. And given that the outside temperatures have been hovering in the low single digits, coming out to the picket line every morning is far from an easy or cheerful decision.

But we cannot come back in, at least not yet. And please believe me here when I give the reason for why we have to stay outside a little longer. We are on strike for you, our students.

You probably hear that a lot around universities these days. Touch-screen panels in every classroom: for the students! A new survey every week: for the students! Two-for-one Dominos pizza: for the students!

But when I say that my colleagues and I are on strike for you—for the students—it reflects something much more important. Choosing to leave our classrooms, to forego our salary, and to hold up signs on a frozen sidewalk in your name: that is a deeply sincere statement.

Nobody goes into academia for fame or fortune. Unless you study celebrity culture or business history, you are unlikely to experience either of those two in your daily academic life. Instead, the overwhelming majority of our time is spent thinking about, planning, and delivering pedagogy and mentorship to our students. And I would say that for almost every academic I know, that is precisely why we love our jobs.

But over the past many years (and decades!) universities have changed in ways that make it increasingly difficult, if not outright impossible, for us to give you the education you deserve. You have likely heard that our current strike has four core demands, relating to issues of casualisation, fair pay, equity, and pensions. Like any job that aims to be both part and a model of an inclusive society, ours relies on the foundation of steady employment, adequate compensation, equality amongst all employees, and the security of a dignified livelihood once we stop working. Each of the four demands relates to a vital thread of what allows us the personal, financial, and mental wellbeing to come to work every day and help create the type of learning environment in which all members of a university community can thrive.

I won’t go into detail here on the four demands. That information is available elsewhere and, moreover, as a relative newcomer to the UK, I do not want to presume to have the cultural and institutional knowledge to properly talk through each item. (Though let’s not kid ourselves: our struggle here in Britain is part of the same struggle I would have faced if I had stayed in the U.S. or gone to teach anywhere else in the world.)

Instead, I want to reiterate that I see you, that we see you. All of us, your lecturers, your tutors, your supervisors, your support staff, everyone. We all see you. We know that our decision to strike makes you stressed and worried. We know that our choice to keep you from your usual class routine makes you nervous about essays and exams. I’m sure it might even feel like we’re doing this in spite of you—or even worse, against you. Nothing could be further from the truth.

We’re doing this because we are frustrated, and tired, and overworked, and, to be honest, pissed off. We are angry that the university has let our conditions, and our workloads, and our hiring practices degrade to such a point that we have to abandon our classrooms just to have our demands taken seriously. A strike is not a strategy to be used lightly; it is a last-resort, break-glass-in-case-of-emergency type of option. And we are currently in that sort of moment.

Personally, I am three years into what I hope will be a long career. I’d love nothing more than to devote my professional life to working with several generations of students, with my history courses serving as a platform for students to make sense of the past, to learn to think critically, to write well, and to engage one another with empathy. If I’m lucky, many of you might even follow suit and become my colleagues one day, and then you’ll get to share in the joys of what, when supported properly, is the best job in the world.

But those hopes are contingent on something changing. And for us, that something can only come about by going on strike. We’ve exhausted all other options. Believe me, we don’t want to strike. But we care too much about doing our job well, and we care too much about you and your future, to not see this through.

So thank you for your support. And if not your support, then hopefully at least your trust that when we say we’re doing this for our students, we mean it.

Dr Jake Blanc

Lecturer in Latin American History

Flora MacDonald – Heroine or Traitor?

Written by: Isabelle Sher.

On the 16th of April 1746, the Jacobite rebels were defeated at Culloden by Government troops under the command of William Augustus, Duke of Cumberland. Following the catastrophic defeat of Charles Stuart (better known as ‘Bonnie Prince Charlie’), those who remained loyal to the Prince’s cause sought to help him secure a means of passage to France, all the while braving the possibility that his escape would be discovered by the ‘Redcoats’. In June 1746, Charles and his few remaining loyal supporters arrived at Benbecula, where they enlisted the help of Flora MacDonald.

Time does little to alter the vivid images of those two short weeks. I have nothing else to occupy me here. The heat in London rages on; the stench of death and fear is overpowering. I pray that he, at least, is safe; to regret my own actions is a lost cause. Each fresh sensation of regret I intend to displace with a determination to rid from my mind what has passed, and what cannot be altered.

I ought not complain, I repeatedly tell myself. I have not been imprisoned in The Tower but placed under the close watch of one of the King’s own messengers. I think of my dear brother Angus and the sheep, the little white specks just visible in the distance of the never-ending twilight of June in Benbecula. I close my eyes and dream of peace. I wonder at the hearts of those Government soldiers as they rampage across the land like savages, burning all they find. I think back to that fateful night when Hugh O’Neill and Charles Stuart arrived to speak with me. I imagine one could never forget a sight such as the one summoned to stand before me. The prince reeked overpoweringly of alcohol; an abhorrence I could forgive under the circumstances. His deathly pallor contrasted unnaturally with his peeling sunburnt skin. Upon his head was his filthy, louse-ridden periwig, Lord knows why. The men’s voices rasped in a silence disturbed only by gently breaking waves and the odd sheep. They had little regard for how my assistance might ruin the reputation of my chieftain, Sir Alexander MacDonald. That wretched O’Neill. I cannot fault my dealings with him, sharp as they were, for inherent in his manner was a detestable cheek. I repeatedly asserted that I would have no part in assisting the Prince, and yet against my better judgement I allowed my mind to be turned. My family are no sympathisers with Charles’s cause, and were it not for his pitiful condition, and the merciless way in which the Government forces treated our Highlanders, I am certain that even fewer would have come to his aid.

Even then the wretch hardly helped himself; dressed as my maidservant in women’s garb, he did nothing to keep up appearances, though doubtless his drunkenness and pain from the scurvy did little to help. Though I did not think much of it at the time, it is curious to think that the rightful heir to the throne was acting as my own maidservant, under the name of Betty Burke.

I shift uneasily in my cramped confinement. The sun burns brighter still, hot on my face. I long for a breeze. I can hear the men laughing. They can make merry. I can only be melancholy.

I was aghast at the conduct of the Prince, who behaved in such a manner as to arouse the suspicion of everyone on the island following our crossing to Skye. I suspect he was too proud to play the part of a woman, for he would insist on being armed, despite my protestations and those of the party that a thorough search by Government forces would give him away.

My arrest came several days after the Prince and I parted ways. God willing, my life will be spared. From Fort Augustus I was moved to Edinburgh Castle, and then to London. I pray that nothing more terrifying befalls me, for what has become of me is, I believe, undeserved. I only did as I believe any kind-hearted soul would have done when confronted by so pitiful a man.

The summers are too warm here. I long for the roaring winds and treacherous seas of my homeland. There at least I can witness the liveliness of the world. If it were not for the sheer number of people residing in and amongst this city, London would stand quite still. They say it will begin to cool in a month, for it is nearing September. I hope that they are right.

I wish the prince Godspeed, I tell myself, but I cannot ever wish for his return. 

Image: The Field of Prestonpans, Coloured lithograph by Mouilleron after Sir William Allan, printed by Lemercier, published by E Gambert and Company, 25 Berners Street, London, 1 September 1852. https://collection.nam.ac.uk/detail.php?acc=1971-02-33-303-1

The Ideological Barriers faced by Renaissance Women Humanists

Written by: Joshua Al-Najar.

On a preliminary reading, humanism appears to be fraught with misogynistic tendencies, providing little space for women’s engagement. Joan Kelly-Gadol points to male humanists such as Juan Luis Vives, whose misogynistic writings were informed by Aristotelian biology and the hyper-masculine nature of classical humanism. Women’s apparent biological, religious and historical inferiority implied that ‘few see her, and none at all hear her.’ Thus, Kelly-Gadol ponders whether the presence of such exclusionary thought renders the term ‘renaissance’ incompatible with the female experience.

Despite these castigations, women humanists contributed considerably to the Querelle des Femmes, an academic movement which sought to define women’s abilities, capacities and function within society. Though scholars such as Kelly-Gadol have imagined the renaissance as a masculine endeavour, W. Caferro refers to the Querelle as the ‘aspect of women’s experience that fits most comfortably under the “Renaissance” label’. In entering the debate, women utilised the same general humanist framework that their male counterparts had developed, drawing on themes such as antiquity, religion and the value of education. For some, this successfully defended and enhanced women’s status, though contradictions could arise from using a male-oriented paradigm.

The Querelle is thought to have emerged in earnest with the formative works of Christine de Pizan (1364-c.1430), a Venetian-born author who lived in France. De Pizan began writing in response to the deeply misogynistic academic climate of her contemporaries, which advocated women’s inferiority. She wrote:

Judging from the treatises of all the philosophers and poets and from all orators […] it seems that they speak from one and the same mouth […] that the behaviour of women is inclined to and full of every vice.

De Pizan contended with the prevailing view of women in her work Le Livre de la cité des dames (c. 1405). Here, she gave herself the intellectual space to promote women’s learning and qualities, creating three allegorical female figures representing reason, rectitude and justice. They command the creation of a separate city for women, drawing inspiration from the mythological Amazon warriors. Furthermore, she launches into an attack on the sexist views of ancient philosophers, whose writings legitimised her sexist contemporaries. De Pizan’s use of myth and her criticism of ancient scholars demonstrate her classical learning, and serve as a rebuff to the idea that women’s education existed only to promote the chaste, Christian ideal. Her work built upon earlier pieces, such as Boccaccio’s De Mulieribus Claris (1362), but their approaches are distinct: where men such as Boccaccio praised women who had overcome their feminine traits, de Pizan lauded women on their own terms. In addition, many of the male humanists who highlighted the virtuous nature of some women did so under the patronage of wealthy, powerful women – Boccaccio dedicated his work to Andrea Acciaioli, Countess of Altavilla.

Reference to classical antiquity was a common aspect of male humanism, but it could be deployed to aggrandise women’s status too. This is apparent in the letters of Laura Cereta (1469-1499), of which some eighty are known. In one such example, Cereta addresses ‘Bibolo’ – a play on words, linking men to drunkenness – and uses classical examples of women’s achievement to present the worthiness of their education. She writes of Zenobia’s mastery of Greek and of Saba’s (the Queen of Sheba’s) testing of Solomon, among others. Cereta’s skilled deployment of these classical examples reinforces her own learnedness and justifies her belief that education was crucial for ‘all human beings equally.’ She re-works antiquity’s legacy – which had so often been the source of renaissance scholars’ misogyny – into something which can extol the mental capabilities of women. Cereta wrote at a time when women were institutionally barred from higher learning, and even from certain administrative buildings, such as the Florentine Podestà. She writes of herself as a ‘Medusa, who will not be blinded by a few drops of olive oil.’ Her letters therefore serve as a powerful reminder that women’s intellectual output will weather male anxiety.

At times, women humanists engaged in open critique of such anxieties. The Venetian Lucrezia Marinella’s The Nobility and Excellence of Women and the Defects and Vices of Men (1601) arose in direct response to the deeply misogynistic Dei donneschi difetti (1599) of Giuseppe Passi. Passi’s work derived much of its basis from the Aristotelian concept of the ‘imperfect female’ and the inferiority of Eve to Adam. Marinella lambasts her opponent’s views, dividing her work into two parts: the first extols the virtues of women, whilst the second catalogues the defects of men. She cleverly warps many of Passi’s arguments to her own advantage. In one example, she reverses Passi’s critique of women’s beauty and vanity by utilising neo-Platonic theory to explain that ‘what is beautiful outwardly is beautiful inwardly.’ In another, she rejects Aristotle’s theory of female imperfection by claiming that women are perfect realisations of God’s creation. By arguing in such a manner, Marinella is able to use traditionally masculine arguments to advance women’s case. She not only advocates equality, but asserts women’s superiority to men, boldly claiming:

If women […] wake themselves from the long sleep that oppresses them, how meek and humble will those proud men become.

However, in later life, Marinella would come to contradict her earlier ferocity. In the Essortationi alle donne e agli altri (1645), a late work in which her religious devotion, strengthened with age, is evident, she renounced much of her earlier writing. Instead, she presented a life of piety and domestic dedication as the pinnacle of womanhood.

Integrating religion could be an issue for women humanists who sought to strengthen their standing in society. The ideal Christian woman was thought to be moral, chaste and somewhat submissive to her husband – her inferior place having been earned by Eve’s folly. In Of the Equal or Unequal Sins of Adam and Eve (1451-3), Isotta Nogarola sparked a literary debate with Ludovico Foscarini in which she attempted to justify Eve’s fault. Somewhat ironically, Nogarola defends Eve – and thus, womankind – by conforming to the societal understanding of women’s inferiority. She argues that Eve’s apparent weakness was the cause of her error, as opposed to a pride-based motivation, which would have carried greater guilt. She also reinforces the imperfection of women by claiming that Adam, as the fully realised man, should have exerted greater control over Eve’s deficiencies. Though Nogarola displays great skill in her argument, her loss is inevitable because she argues within parameters which consider Eve (and all women) as untrustworthy and weak. Nogarola herself was a devout Christian, and was reportedly celibate until her death in 1466. Her compliance with the prevailing Christian attitude to women somewhat hinders her argument.

Historiographically, there is some debate. The women humanists discussed above clearly present an obstacle to the theory put forward by Kelly-Gadol: their efforts at defending and enhancing their status display clear engagement with general ‘renaissance’ themes. Yet Kelly-Gadol rightly identifies the adverse effect that some aspects of humanism had on women’s standing – a point which has, at times, gone unconsidered by scholars such as Jacob Burckhardt, who in 1860 mistakenly supported the idea that so-called ‘renaissance individualism’ had led to ‘both sexes existing on an equal footing.’

The reality appears to lie somewhere in-between. When defending or enhancing their status, women humanists utilised many of the same arguments as their male counterparts: religion, antiquity and the virtuous nature of education. At times, however, the use of a male-oriented structure forced contradictions into their arguments.

Bibliography

Caferro, William. Contesting the Renaissance. Malden, MA: Wiley-Blackwell, 2011.

Cereta, Laura. “Two ‘Familiar’ Letters,” in K. Gouwens (ed.), The Italian Renaissance: The Essential Sources (2004).

Crum, Roger J., and John T. Paoletti. Renaissance Florence: A Social History. Cambridge: Cambridge University Press, 2006.

Kelly, Joan. “Did Women Have a Renaissance?” in Renate Bridenthal and Claudia Koonz (eds.), Becoming Visible: Women in European History (Boston, 1977), 137-165.

Panizza, Letizia. “Introduction to the Translation,” in Lucrezia Marinella, The Nobility and Excellence of Women and the Defects and Vices of Men, 1-34.

Weaver, Elissa B. “Gender,” in Guido Ruggiero (ed.), A Companion to the Worlds of the Renaissance (Blackwell, 2007).

Image Source: https://www.bbc.co.uk/programmes/b08sksb4

Review: A Tale of Two Cities by Jesse Hoffnung-Garskof

Written by: Lewis Twiby.

New York City remains one of the most culturally diverse cities in the United States, having seen immigration from across the world for centuries. One of the many communities to call New York home is the Dominican community, which Jesse Hoffnung-Garskof examines in his 2008 book A Tale of Two Cities: Santo Domingo and New York after 1950. Hoffnung-Garskof offers an interesting insight into how diasporas and culture are formed. He is also keen to stress that diasporas do not exist in a vacuum – they interact with both the ‘homeland’ and other diasporas.

As expected, Hoffnung-Garskof begins his book in the capital of the Dominican Republic – Santo Domingo. Here he explores the twin ideas which would shape Dominican history: progreso and cultura. Progreso, the idea that Dominicans were moving towards an improved life, and cultura, the notion that Dominicans had to exhibit certain cultural traits to achieve progreso, would shape both Santo Domingo and New York. A recurrent theme throughout the book is how progreso and cultura evolved in the context of migration. Rural Dominicans saw Santo Domingo as one of the most important sites of cultura, but New York was seen as its pinnacle. These ideas were also in flux thanks to the turbulent politics of the republic – the genocidal rule of Rafael Trujillo lasted until his assassination in 1961, followed by the dictatorship of Joaquín Balaguer, US occupation, and a turbulent revolution. In Santo Domingo, Hoffnung-Garskof relies heavily on oral testimony: the emerging barrios (which became shantytowns) saw an explosion of grassroots culture and political activism, giving ample opportunity to hear subaltern voices. For example, he shows how cultura was understood as being Catholic, speaking Spanish, and, unfortunately, as racialised against Haitians – and how those in the barrios turned cultura on its head. Political radicals would hold their meetings at church services, and young men would play loud music in Spanish as a way to rebel without being attacked by the police.

Moving away from Santo Domingo, Hoffnung-Garskof then takes us to Washington Heights, Manhattan, where the Dominican diaspora emerged. Originally, the diaspora was made up of radicals exiled by either Trujillo or Balaguer, but as air fares became cheaper, more and more Dominicans moved to the land of ‘progreso y cultura.’ In what is perhaps the most interesting section of the book, Hoffnung-Garskof looks at how the newly arrived Dominicans became racialised in Manhattan. Many came from middle-class backgrounds in the Dominican Republic but found themselves in working-class jobs, which created a paradox when they returned home to visit family. Dominicans would engage in American consumerism, which their families took as a sign of wealth, and the dominicanos de Nueva York had to try to explain that they were not in fact wealthy. Meanwhile, they were forced into the racialised world of American society. For generations, Dominicans had considered themselves ‘white’ against ‘black’ Haitians – a view which had led Trujillo to massacre thousands of Haitians to ‘whiten’ the country – but they were not seen this way in Washington Heights. The area had large Irish, Jewish, African-American, and Puerto Rican communities, so Dominicans were forced to reinvent their identity amid the ever-changing categories of class, race, and culture in Manhattan. Hoffnung-Garskof shows this effectively with his wide range of oral testimony from community members in Manhattan – easily the strongest aspect of the book is his ample use of first-hand testimony. However, he could have done far more with Manhattan’s history of immigration here. The Jewish and Irish communities are mentioned but somewhat overlooked, and the city’s vibrant East Asian, Cuban, Arab, South Asian, and African diasporas are entirely ignored. It would have been interesting to see how they factored into the shaping of Dominican identity.

In the early 1990s Hoffnung-Garskof worked as a social worker with Dominican families in the Washington Heights schools, and his lengthy discussion of diasporas in schools is his most detailed section. Again using interviews, he manages to recreate, in detail, the lives of Dominican students and how they forged their own paths: some used their wealth to become doctors, others joined African-American rights groups like Umoja to fight for rights, and others clashed with African-Americans and Puerto Ricans over racial animosities. Reading it, you can tell that this has long been a passion of his and how deeply he cares about the community. This is especially visible when he discusses the crack epidemic of the 1990s, when Washington Heights became synonymous with drug crime in the US media. He rebuts many of the common stereotypes attached to Dominicans at the time, showing that the epidemic was a crisis of capital rather than a moral failing. My favourite point was his criticism of Rudy Giuliani – leading attorney, later New York mayor, and later still Donald Trump’s personal lawyer – for targeting Dominican youths in his exposé on crack while entirely ignoring the crack epidemic among the Wall Street elite. However, Hoffnung-Garskof is so invested in the lives of the people of Washington Heights that it can break the flow of the narrative. He is so eager to show us the entirety of Washington Heights that we read biography after biography in just two chapters, making the book, at times, hard to read. If anything, these narratives could become their very own piece of historical writing, and hopefully he might do this in the future.

Finally, I want to quickly discuss how Hoffnung-Garskof links diasporas to the ‘homeland.’ As mentioned earlier, the diaspora was not cut off from the Dominican Republic – its ties ranged from family visits ‘home’ at Christmas to exiled leftists waiting for the fall of the US-backed regime. Here the twin ideas of cultura and progreso come into play. On the one hand, the New York-based community was seen with a sense of pride back in Santo Domingo. The regular Dominican Day parades, the growing affluence of the community, and even Dominicans partaking in beauty pageants were viewed as Dominicans achieving progreso – they had become the immigrant community to be emulated. On the other hand, they were simultaneously degraded for going against cultura. Women going out of the home, children engaging in American consumerism, and the adoption of American fashions were all viewed as Dominicans becoming too Americanised. The term dominicanyork was coined to lambast a diaspora deemed too American. Nevertheless, American-based Dominicans still viewed themselves as ‘Dominican’ and not ‘Dominican-American.’ Newspapers like Ahora! reported on events in both New York and Santo Domingo, and the right to vote in Dominican elections was eventually granted to the diaspora. Hoffnung-Garskof ensures that the themes of cultura and progreso are never forgotten in the narrative.

For anyone interested in the histories of immigration, the formation of identity, and diasporas, A Tale of Two Cities is a must-read. Though the narrative could at times be smoother, Hoffnung-Garskof’s investment in the diaspora makes it an engaging read, and the abundance of oral testimony turns the names on the pages into living, breathing people. He has recently released a book about Cubans and Puerto Ricans in New York, so hopefully we can see more of his writing soon.

Total Military Politics: The Rise of Japanese Fascism

Written by: Jack Bennett.

The execution in 1937 of Kita Ikki, the so-called ‘Father of Japanese Fascism,’ whose ‘Outline Plan for the Reorganization of Japan’ (1919) emphasised principles of nationalistic socialism, reveals the violent descent of Japan into a totalised political and economic system of control from 1925 through to 1945.

Rising ultranationalism, militarism, and state capitalism under the early reign of the Showa Emperor Hirohito defined Japanese politics and society as ‘statist’ from the 1920s through to the 1940s. The reverberations of global events and shifting economic and political dynamics during the 1920s and 1930s directly influenced the domestic character of Japan. The 1920s in Japan were a decade of transition and contradiction, with limited democracy and freedom. Although the General Election Law of 1925 introduced universal suffrage for all men, the Peace Preservation Law of the same year simultaneously placed strict parameters on political speech and behaviour, enforced by the Special Higher Police, who rooted out communists and, later, anti-war elements throughout Japan.

Following the 1929 Wall Street Crash and the onset of the global depression, Japan slumped into the Showa Depression of 1930-32, with the Wholesale Price Index (WPI) falling by 30 per cent, agricultural prices by 40 per cent, and textile prices by nearly 50 per cent. From 1931, rural impoverishment became severe and widespread, and exploitative regimes of landlordism throughout Japan contributed to the 1934 Tohoku famine. The response was rural radicalisation in pursuit of economic justice, with popular criticism of government and industry. This economic and social dissolution produced an upsurge in military and right-wing political power, as the public grew disillusioned with traditional party government, and the era developed into a period known as the ‘Politics of Assassination.’ Most notable were the May 15 Incident of 1932, in which young naval officers assassinated Prime Minister Inukai Tsuyoshi, and the February 26 Incident of 1936, a failed coup by Imperial Japanese Army officers against the government of Prime Minister Keisuke Okada. In the aftermath of these events, military influence over the civilian government grew significantly, allowing a transition away from economic liberalism and towards greater state economic control and management. These increasingly fascist modes of organisation and thought during the 1930s emphasised the virtues of cooperation and the suppression of individual needs or wants to further the goals of the collective – in particular, economic development and industrialisation within all areas of Japanese society.

Consequently, ever more totalitarian, militarist, and aggressively expansionist ideals were espoused by the imperial politico-military leadership of Japan. From the 1920s onwards, the Japanese became increasingly entrenched on mainland Asia through control of former Russian railroads in Manchuria and Inner Mongolia. This was economically vital and underpinned Japan’s strategic influence in northern China. Through a constant military presence in Chinese lands and a weapons trade with local warlords, the Japanese functioned as an imperial power in the region. However, the Chinese nationalist threat continued to grow during this period, weakening Japan’s influence over the Manchurian warlords. This resulted in the Jinan Incident of 1928, a clash between Chiang Kai-shek’s Northern Expedition forces and the Imperial Japanese Army, which underscored Chiang’s proclamation of Japan as the main threat to China at this time. Within the political-military dual government of Japan, therefore, there developed vehement criticism of ‘Shidehara Diplomacy’ for being too soft on China amidst an expanding sphere of Japanese imperial influence.

The culmination of these rising military tensions was the 1931 Manchurian Incident and the formation of the Japanese-controlled satellite state of Manchukuo, motivated by total-war economics and the forward national defence of the Japanese archipelago. In response, the Chinese military attempted to utilise international law, upheld by the League of Nations, contributing to the Lytton Commission investigation of 1932, which concluded that Japan’s invasion was unjustified. Japan then left the League of Nations, effectively becoming a ‘rogue state’ in opposition to Anglo-American interests. This proved a major turning point in Japan’s progression into a fully-fledged fascist state. Military engagements in China ramped up in the same year, including an unsuccessful invasion of Shanghai, before the Tanggu Ceasefire the following year. Then, in 1936, Japan signed the Anti-Comintern Pact, providing an early link with Nazi Germany and the fulfilment of globally connected fascist ideological frameworks.

Mirroring these increasingly aggressive and totalitarian expansionist foreign policy objectives, and driven by the Second Sino-Japanese War of 1937-41, a fascist economic model was introduced in Japan: total military mobilisation under the National Mobilisation Law of 1938 allowed for control of the zaibatsu to fuel Japan’s ever-growing war machine. This included extensive direct investment in the new state of Manchukuo, with the construction of new cities, mines, and railroads. Interestingly, these autocratic socio-economic modes of development were both imported and exported between Japan and mainland Asia during this period. Alongside this, the development of both a Central Price Committee and a Central Planning Board directed national and private enterprise towards the war effort. Then, in September 1940, Japan signed the Tripartite Pact with the Axis powers of Nazi Germany and Italy, establishing closer connections with Germany in order to stop it from assisting China, to prevent Soviet expansion eastwards, and to dissuade the United States from entering the war.

Under Prime Minister Konoe Fumimaro, between June 1937 and January 1939 and again from July 1940 to October 1941, there was an increased focus on the mollification of military men, alongside the expansion of Japanese influence across Asia and preparation for a future race war. This is elucidated by the creation, in August 1940, of the Greater East Asia Co-Prosperity Sphere, emphasising Japan’s imperial unification and pursuit of Pan-Asianism. Meanwhile, totalitarian political control was extended into the depths of Japanese society. Crucially, Japan became a single-party state following the dissolution of political parties and the creation of the Imperial Rule Assistance Association (IRAA). Numerous state-controlled social organisations were created, including the Neighbourhood Association, to police people’s behaviour at a local community level; the Great Japanese Women’s Association, which cooperated with the IRAA to provide grass-roots, family-oriented management; and the Youth Association, controlling education and military development. Between December 1937 and February 1938, the Imperial Japanese government violently suppressed subversive members of society, in what became known as the Popular Front Incident. Additionally, in March 1940, the politician Saito Takao was expelled from the Diet for his criticism of the Japanese war effort. Thus, through authoritarian methods of manipulation, violence and control, the Japanese state became increasingly invasive throughout the late 1930s, and more so still during World War II.

It is intriguing to compare the fascism of Japan with that of its Nazi German and Italian counterparts during this period. Distinctively, Japan experienced neither a mass movement nor a cult of the supreme leader, but instead a heavy stress on agrarianism and a central role for military officers in national control. Further, a high degree of consistency across the political culture was tacitly assumed; yet despite the apparently homogeneous nature of Japanese society, there were wide variations in values and behaviour, founded in geographical and class differences. Ultimately, from balancing limited democratic values with elements of control and regulation in the 1920s, through external expansion in the 1930s, to the protracted war across the Asia-Pacific region until 1945, Japan progressively developed ever-greater fascist modes of social, economic, and military authoritarianism.

Bibliography

Berger, Gordon M., “Politics and Mobilization in Japan, 1931–1945,” in The Cambridge History of Japan, ed. Peter Duus (Cambridge, 1989).

Duus, Peter; Okimoto, Daniel I., “Fascism and the History of Pre-War Japan: The Failure of a Concept,” The Journal of Asian Studies, 1 (1979): 65-76.

McCormack, Gavan, “1930s Japan: Fascist?” Social Analysis: The International Journal of Social and Cultural Practice, 5/6 (1980): 125-43.

Tansman, Alan, The Culture of Japanese Fascism (Durham, 2009).

Yoshimi, Yoshiaki, Grassroots Fascism: The War Experience of the Japanese People (New York, 2015).

Image source: https://theuniversalspectator.wordpress.com/2016/07/16/fascism-whats-old-will-be-new-again-in-japan/

The Arnolfini Portrait and the Limits of Interpretation

Written by: Tristan Craig.

Hung in the fifteenth-century Netherlandish painting room of the National Gallery, Jan van Eyck’s 1434 Arnolfini Portrait has been a source of intrigue, mystery and vastly differing readings since its purchase by the gallery in 1842. Measuring just under one metre in height, this oil panel – commonly understood to depict a member of the prominent Italian Arnolfini family and his wife – is replete with detail. Such a wealth of imagery has invited a great deal of scholarly debate concerning how to interpret the artwork. However, when even the identity of the painting’s subjects cannot be confirmed with absolute certainty, uncovering the true intention of van Eyck’s masterpiece is no small task.

A pivotal work in the study of the painting is a 1934 thesis published in the Burlington Magazine by art historian Erwin Panofsky. An expert in analysing iconography and symbolism in the art of the Northern Renaissance, Panofsky took as his impetus an earlier theory presented by Louis Dimier, who believed the couple to be Jan van Eyck and his wife. Panofsky disagreed with this interpretation of ‘Johannes de Eyck fuit hic’ (an inscription on the wall behind the couple which translates as ‘Jan van Eyck was here’), believing it to signify that the artist was present as a witness rather than as the bridegroom. In the inventories of Margaret of Austria, meanwhile, the male figure in the panel was recorded as one ‘Arnoult fin’; adopting what he termed the ‘orthodox theory,’ Panofsky asserted that the couple must therefore be Giovanni Arnolfini and his wife, Jeanne de Cename. He then argued, with convincing reasoning, that the painting not only represented a nuptial scene but served as a pictorial wedding contract, since the clandestine nature of the wedding would have necessitated such a unique testimony from van Eyck.

Several counter-arguments to Panofsky’s theory were presented in subsequent years. In 1994, art historian Edwin Hall published The Arnolfini Betrothal, in which he argued that the painting was not a wedding scene but rather commemorated an engagement. Both theories, however, rested on the supposition that the gentleman in the painting was the same man. Unchallenged for decades, this assumption would later be exposed as tremendously flawed, shattering the foundations upon which both theories rested. Whilst Panofsky refers in his text simply to a ‘Giovanni Arnolfini,’ the name was shared by two members of the family, both of whom lived in Bruges when van Eyck was active: Giovanni di Arrigo Arnolfini – who was married to Jeanne de Cename, and whom Panofsky believed to be the man depicted – and his cousin, Giovanni di Nicolao Arnolfini. The discovery of another inventory would be Panofsky’s undoing: ducal accounts confirmed that Giovanni di Arrigo and Jeanne were not wed until 1447, thirteen years after the ‘matrimonial’ artwork was completed. If the man was Giovanni di Arrigo Arnolfini, the painting could not depict his wedding. Lorne Campbell, former Beaumont Senior Research Curator at the National Gallery and responsible for the gallery’s catalogue of fifteenth-century Netherlandish paintings, claimed instead that the couple were Giovanni di Nicolao Arnolfini and his wife, Costanza Trenta. But again, a critical problem arose with this attribution – Costanza Trenta died in 1433. It could be that, as Campbell suggests, di Nicolao wed a second time very soon after the death of his first wife, but no evidence has been uncovered to support this notion.

However, whilst the majority of scholarship supports the idea that the painting depicts a conjugal scene – an idea the uncertain identity of the couple throws into serious question – it may convey a dramatically different tale. The majority of Panofsky’s argument centres on his interpretation of the many objects interspersed throughout the scene, such as the solitary candle burning in the chandelier directly above the male Arnolfini’s head. For Panofsky, it can have only one meaning: serving no practical purpose of illumination, it must, he states emphatically, relate to the matrimonial oath. But in fifteenth-century artistic convention, the burning candle served other purposes of a much more sombre ilk. Presenting her own findings in 2003, art historian Margaret Koster likewise reads the burning candle as a symbol of life, but suggests that the extinguished candle over Costanza’s head may in actuality signify her death. The symbolic counter-arguments do not end there: the small dog at the feet of the couple, which for Panofsky represents marital fidelity, was also a common trope in female tomb effigies, as dogs were believed to accompany their mistresses into the afterlife. Incidentally, Panofsky notes the similarities between the stance of the couple and figures on Roman sarcophagi, but cites this reference as little more than a possible ‘influence.’ The mirror hung on the wall beneath van Eyck’s signature has also prompted speculation; whilst Panofsky does not probe its inclusion beyond its reflection of the couple, Koster notes that it is a device often used in vanitas paintings. Like the memento mori, such compositions denote the fragility of life and the inevitability of death. In such paintings, mirrors represent both vanity and truth, and the mirror’s convex form in the Arnolfini Portrait may signify a distorted perception of the world – the married couple portrayed being, in actuality, a melancholic figment of imagination.

It also ought to be stated that what Panofsky considers deliberate or ‘disguised symbolism,’ purposely included by Jan van Eyck, may in fact be incidental. He goes to great lengths in elaborating the purpose of the small dog at the feet of the Arnolfinis as a representation of fidelity; whilst it may have a more sorrowful purpose, it may also simply be a beloved lapdog, another common feature of artwork of the period. Indeed, the marital vow that Panofsky attests the couple are taking, based on their hand gestures, could signify any of a plethora of oaths (note that the examples he cites show couples joining their right hands; in the Arnolfini Portrait, the man’s left hand takes the woman’s right, which has itself prompted problematic interpretations). With the crux of his argument resting on an understanding that has since been proven false, Panofsky’s reading of what he considers to be deliberate symbols quickly begins to unravel.

With such conflicting scholarship, it seems the true ‘meaning’ of the Arnolfini Portrait may never be indubitably uncovered. What Erwin Panofsky and the scholars who both preceded and followed him show is how interpretation is severely limited by how much it can be substantiated. Despite plausible theories and cogent argumentation, Panofsky’s thesis unfortunately fell foul of the passing of time and subsequent findings. That is not to say that his thesis ought to be completely disregarded; rather, his understanding of iconography and Netherlandish painting provides an interesting insight into a masterpiece. The mystery of the Arnolfini Portrait, however, continues.

Bibliography

Hall, Edwin, The Arnolfini Betrothal: Medieval Marriage and the Enigma of Van Eyck’s Double Portrait (London, 1994).

Hicks, Carola, The Girl in a Green Gown: The History and Mystery of the Arnolfini Portrait (London, 2012).

Janson, Anthony F., “The Convex Mirror as Vanitas Symbol,” Source: Notes in the History of Art, 2/3 (1985): 51-54.

Koster, Margaret L., “The Arnolfini Double Portrait: A Simple Solution,” Apollo, 499 (2003): 3-14.

Panofsky, Erwin, “Jan van Eyck’s Arnolfini Portrait,” The Burlington Magazine for Connoisseurs, 372 (1934): 117-127.

Image: The Arnolfini Portrait by Jan van Eyck, 1434. Photograph: Thames & Hudson