Sunday, 31 August 2014

Personal Statement

A/N: Personal statement written for my UCAS application way back in 2006.

My Personal Statement 2006

History has always fascinated me, ever since I was first able to grasp the concept of a world before I was a part of it. The study of history has enabled me to make better sense of the world around me: understanding the roots of current practices and beliefs helps when passing judgements on their continued relevance. For this reason I am keen to continue studying history. The past constantly impacts on the present and the future; being able to interpret past events can change the future as our preconceptions are questioned and, ultimately, changed as new evidence comes to light and existing evidence is interpreted in new ways. I believe that such an approach is a firm foundation for good citizenship.

I have enjoyed my A-level history course, in particular the way in which it interlinked with my other courses. For example, the reshaping of the British political system in the 1830s, which we studied in British history, helped make clear the workings of the 21st-century parliament we were studying in politics. Another example of this overlap is how sociology helped me to comprehend how Russian serfs could accept their position in society whilst I was studying Tsars Alexander II and Nicholas II. This has made me realise that a multi-disciplinary approach is most beneficial when studying history. I look forward to being able to apply skills learnt throughout my school career at university.

My current courses have prepared me well for university, amongst other things helping me to become accustomed to carrying out research independently, as well as working as part of a group for presentations. A summer school I attended at Bristol University further reassured me that university life would suit me and that a relatively broad course such as modern history would provide an intellectual challenge as well as offering a great deal of variety.

Outside school I enjoy attending historical lectures and events within the local community. My particular historical passion is for the history of fashion and the changing social attitudes that surround this topic. I especially enjoy visiting exhibitions and re-enactments to see the fashions up close; for example, I recently enjoyed a day out at the Llandrindod Wells Victorian festival, where many people were dressed in period costume. In addition I have long been interested in WW2, in particular the British home front and the German youth movements. I found the book ‘Hitler’s Children’ by Guido Knopp very enlightening for its information on people’s happy memories of the Hitler Youth and the BDM, as too often the focus is on the minority who fought against the movements.

I am an avid reader, on topics historical and otherwise, and so relished the opportunity to get involved with the ‘Buddy Reading’ scheme, which involved working closely with a younger pupil to improve their reading skills. In addition, work placements at a school and a nursery have convinced me that after completing my degree I want to enter the teaching profession.

My other interests include languages, something I hope to continue with at university, as well as music, miniatures, folklore and socialising with friends. In my spare time I also have a part-time job at a local hospital, which has greatly increased my confidence and improved my communication skills.

In conclusion, I am looking forward to having the opportunity to widen my historical knowledge, to improve my interpretative skills and to get the most out of my time at university.

As examples of my school-level essays, check out my coursework on the Great Reform Act of 1832, and a piece on the History of the Hitler Youth. There are also my essays from my time at Cambridge University.

Saturday, 30 August 2014

Middle Class Philanthropy

What was the social function of philanthropy for the upper and middle classes in nineteenth-century Britain?

Philanthropy can be defined as the act of giving to improve human welfare, or simply as a ‘love of mankind’, a phrase which sounds suitably worthy. Yet, in contrast to contemporary opinion like that of Lecky, who claimed that the anti-slavery campaign was ‘perfectly virtuous’, few today would credit nineteenth-century philanthropists with purely altruistic motives. Modern historiography favours more pragmatic explanations for the rise of philanthropy: a means of advertising wealth and status, a way to assuage the collective conscience of the prosperous classes in the face of the desperate poverty of their countrymen, a means of social control, and so on. Yet were nineteenth-century philanthropists really just hypocrites, seeking to use charity as a front for their own gain? Some historians in recent times have ascribed more worthy functions to nineteenth-century philanthropy: Davidoff and Hall have pointed to the way in which it enabled middle class women to break out of the confines of the domestic sphere, and Haskell has argued that a growing understanding of causality led the upper and middle classes genuinely to feel for those who needed their help. This essay shall seek to show that no single social function of philanthropy can explain its existence, the picture being more complex than many analyses have allowed. In addition, it shall attempt to show how the social functions modern historians have observed in nineteenth-century philanthropy may not be the ones contemporaries felt it was performing.

Perhaps the most cold-hearted social function nineteenth-century philanthropy has been credited with is advertising and bolstering social status. It was for this reason, the sceptical have argued, that the upper and middle classes were willing to devote large amounts of time and energy to philanthropic causes. These philanthropists then advertised their good works; one example of this is the vogue for wearing the Wedgwood-designed cameo of the Society for the Abolition of the Slave Trade. Not only did this involvement, perhaps subscribing to a society or sitting on a committee, imply economic status, in that one had sufficient funds and leisure to undertake such activities, it also advertised one’s moral status; something which was especially attractive in the face of the strict religious mores in place in the early and mid-nineteenth century. With this in mind it is unsurprising that, as Oldfield points out, many of the early philanthropists belonged to particularly charity-minded religious orders; the Quakers, for example, had petitioned parliament to abolish slavery as early as 1783. In a society where heaven and hell were still very real concepts many saw philanthropy and charitable endeavour as the best way to safeguard their own souls as well as reform others. Yet, as logical as this social function seems to be, there are fundamental problems with such an interpretation of the role philanthropy played in the lives of the upper and middle classes. Most striking is the fact that much charity was kept anonymous; Dr. Barnardo, for example, only printed the initials of donors, and in any case much charity remained informal in spite of the ever-growing number of societies. Evidence of this is ample in the records of the Charity Organisation Society, founded in 1869 as a means of organising the administration of charity; COS officials despaired of ever convincing some that it was better to give money to the COS than directly to the poor.
Another important fact to note is that not all charity originated with the upper and middle classes. Prochaska points to the old tradition of kinship and neighbourly charity amongst what we would consider the working classes. They also contributed to formal charity; the Christian Mothers Magazine commented that the donations of the poor, whether in terms of the aggregate amount or in proportion to their total wealth, were ‘beyond all comparison the most important’. Whilst, then, the display of social status, both economic and moral, was undoubtedly an important social function of philanthropy for some, the anonymity of much charity and the recognition that any status it conferred must be shared with the lower orders meant that it was not the dominant social function.

Almost as widespread is the belief that the main social function of nineteenth-century philanthropy was to alleviate the guilt of the wealthy over their good fortune. Even contemporaries felt this was often the driving force behind philanthropy; Engels, in his ‘Condition of the Working Class in England’, claimed that the middle classes were vampiric, sucking ‘the wretched workers dry’ then afterwards throwing ‘miserable crumbs’ of charity to quieten their consciences. Certainly one of the major contemporary criticisms of philanthropy recognised by historians such as Harrison is that it was insufficient; statistics show, for example, that late Victorian colliery disaster-funds could provide for the families of only an eighth of the men killed in the coal-mining industry. Even when the money was forthcoming some felt it was to the detriment of the working classes, whether through the demeaning process by which the recipient had to prove they were a member of the ‘deserving’ poor, or, as the Northern Star felt in 1842, because charity from employers was funded by ‘deliberately underpaying their employees’. Such a sceptical view of the motives of the upper and middle classes of the nineteenth century has only been encouraged by much historical research of the period; the growth of women’s history in the 1960s and 1970s drew attention to the hypocrisy of the Victorian ‘double standard’, epitomised perhaps most strikingly in the 1857 Matrimonial Causes Act, which made adultery on the part of a wife, but not of a husband, grounds for divorce. It is understandable that modern historians should assume the same hypocrisy in other areas of nineteenth-century society. Yet we should be cautious in accepting the situation as black and white; Haskell has convincingly, although with needless complexity, argued against the idea of mass self-deception during this period.
This suggests not that philanthropy was a purely altruistic undertaking, but that the motives behind it were far more complex than is often assumed. For some, philanthropy was almost all-consuming; Thomas Clarkson, for example, has been described as ‘single-minded and obsessive’ over his involvement with the anti-slavery movement, driven by a sense of the unjustness of the institution. For most, however, philanthropy offered a complex web of social functions, of which appeasing a sense of guilt can have been only a minor thread.

If, then, the advertising of social status and alleviating the guilty burden of prosperity were not the main functions of nineteenth-century philanthropy, perhaps an equally pragmatic, but less morally dubious, function was its true attraction. I allude to its fostering of a sense of community, something which, although often overlooked in the historiography of the topic, must have had increased appeal given the urban expansion witnessed in this century. Regardless of the intricacies of demographic arguments it is nevertheless safe to say that urban populations were growing; between 1750 and 1801, for example, the population of Manchester alone more than quadrupled from 18,000 to 89,000. Urbanisation has often been considered for its role in creating the need for philanthropy, in eroding the close paternalistic relationship between lord and tenants, and kinship bonds between families. But it was not just the working classes who were affected; the middle classes in particular were forced to carve out an identity for themselves, something which has been much documented in relation to nineteenth-century class formation by historians such as Wahrman. The formal structure of philanthropic societies could provide a network of people of similar status and interests; Davidoff and Hall have drawn attention to how such societies provided communities along the gendered ‘separate spheres’ model, something which will be considered more fully later in this essay. For now it suffices to point out that societies provided men with useful experience for the political world and influential contacts; many societies invited members of the aristocracy to act as patrons or chairmen, for example. Women could use philanthropy as a front for making friends or even finding a husband: charity balls and dinners provided opportunities for the sexes to mix, as did more informal means; door-to-door fundraising, for example, was often an excellent method of meeting new people.
Similarly, criticisms levelled at nineteenth-century philanthropists, such as the advertising of their involvement in charity, may well have had more to do with advertising their belonging to the society. The women who wore anti-slavery cameos and refused to buy West Indies sugar in the 1780s, regardless of their true sympathies with the cause, were explicitly showing their involvement in a wider community. This focus on community is equally clear in the remit of the Charity Organisation Society, set up to artificially recreate the community of older times, where charity had not been from ‘strangers to strangers’. The point here is not that philanthropy had no other social function than fostering a sense of community, but that this was undoubtedly a factor that had some bearing on contemporaries, and one which provides evidence of the complex nature of the nineteenth-century public’s relationship to philanthropic activity.

Returning to the arguments of Davidoff and Hall, there is the interesting idea that one of the major social functions of nineteenth-century philanthropy was the way in which it reinforced the dominant gender ideology of ‘separate spheres’ in upper and middle class rhetoric. In the early nineteenth century most philanthropic societies were strictly male only; they were seen as a new way of ‘asserting manliness’, enabling men to gain experience in the public, political, sphere which could later be put to use at municipal, or even parliamentary, level. Even when permitted into a society women often needed to be guaranteed by a man, as in the Botanical and Horticultural Society, and even in the late Victorian period most societies still barred women from positions of power, such as on their committees. This position may seem strange when the high levels of female involvement in philanthropic causes are remembered; the Bible Society alone had over 10,000 women agents by 1820. Yet in some ways women’s involvement resulted in little conflict with the ‘separate spheres’ ideology; it was widely believed, for example, that women were inherently more pious and given to charity than men. Elizabeth Fry justified her own philanthropic activities by claiming that women had a vocation in helping their own sex; similarly, women volunteers were seen as essential in dealing with children (in orphanages, or the increasing number of voluntary schools) and women. It was perhaps this focus on women that tempered the shocking sight of Josephine Butler speaking publicly on the problems of prostitution and the evils of the Contagious Diseases Act of 1864. It cannot be ignored, however, that philanthropy in many ways explicitly subverted the ‘separate spheres’ ideology; it provided employment, in some cases paid, for women.
As already discussed, these societies were providing upper and middle class women with a sense of community outside of the family; this must have been particularly gratifying for middle class women from strict religious backgrounds who were simultaneously barred from paid employment and the frivolous amusements of the upper classes. The way in which philanthropic work could provide women’s lives with a sense of purpose is shown in the case of the invalid daughter of a Birmingham banker who was allowed to oversee a day school for working class girls because, in the words of her sister, ‘she could enter into so few amusements.’ Philanthropy, then, undoubtedly played a part in the rise of first-wave feminism in the late Victorian period as women put into practice the skills, such as fund-raising and propaganda, they had learnt through their involvement in philanthropic societies. But for most of the nineteenth century the most prominent social function philanthropy fulfilled in respect of the gender divide was to provide an important outlet for educated middle and upper class women, frustrated with the domestic sphere, to become involved, at least on a small scale, in the public sphere.

Following on, then, from this idea of women finding a way to exert their influence through philanthropy we come to perhaps the most important, in hindsight, social function of nineteenth-century philanthropy. The idea that philanthropy offered a means for the middle classes to exert social control over the lower orders has become increasingly popular in recent years, not least because it was a means of exerting power without needing access to the formal mechanisms of politics, which were barred to the middle classes until 1832. Advocates of this view such as Fido point to the contemporary distinctions made between the ‘deserving’ and ‘undeserving’ poor; the Charity Organisation Society sought to consider applicants for charity along the lines recommended for official poor relief under the 1834 Poor Law Amendment Act. Through casework – the Society was handling 25,000 cases per annum in London alone by 1886 – those involved with the COS hoped to weed out false claimants. The Society was notoriously heavy-handed; assistance was routinely refused, for example, if applicants failed to comply fully with the COS investigation into their background. Even so, the COS and like-minded philanthropists saw their work as worthy of the gratitude and deference of the lower orders. Certainly it created an atmosphere in which members of the working classes were forced to at least be seen to emulate the dominant middle class values if they were to have any hope of receiving charity; in practice this meant that families attempted to look ‘respectable’ in the face of impoverishment and adopted the male breadwinner ideology. Before the nineteenth century working class men sought to work for themselves, to rely on their own labour for survival; by the mid-nineteenth century they had appropriated the concept of the ‘family wage’ earned in the employment of another.
It would seem, then, that the middle classes in particular were using philanthropy as a means of social control, a way of imposing their values down the social scale. Yet, as Prochaska has argued, the working classes did not passively accept these values; anecdotal evidence suggests that looking respectable was something that only needed to matter on the day the voluntary caseworker paid their home visit. More importantly, far from creating a grateful and docile labouring class, the harsh ‘cruel to be kind’ approach of nineteenth-century philanthropists engendered a sense of class tension and resentment. The working classes were suspicious of philanthropists’ ‘meddling’ and suspected less than pure motives behind middle class philanthropy; the Northern Star, as quoted earlier, claimed employers funded their philanthropic activity by underpaying workers – in turn forcing many of them to rely on the sympathy of those unscrupulous employers, taking us back to philanthropy as a means of social control.

Before making a decision either way as to whether philanthropy was a means of social control, the question must be asked: were nineteenth-century philanthropists actively seeking to use it for this purpose? Haskell has argued that philanthropy emerged as a result of a more sophisticated understanding of cause and effect; that is to say that in the eighteenth and nineteenth centuries the expansion of the economic market enabled people to realise that their actions had profound consequences, that the two were explicitly linked and could be controlled by their restraint or lack thereof. This argument would suggest that as well as realising that by choosing not to introduce safeguards on machinery, for example, they were responsible for any accidents occurring as a result, nineteenth-century people were equally capable of recognising the advantages of social control they could gain by closely controlling the allocation of charity to those who fulfilled their own criteria – those who had the middle class values of thrift and temperance, for example. Woolman, an eighteenth-century abolitionist Quaker minister, had claimed that slaveholders were only immoral if they had recognised that what they were doing was wrong; I think that in this idea we might find some salvation for the memory of nineteenth-century philanthropists. That their charity was often prejudiced cannot be denied; Dickens despaired in the 1850s that the wealthy would sooner give money to those in far-flung Borrioboola-Gha than the needy on their own doorstep. But it is doubtful that the majority saw the situation in the same light; the inhabitants of Borrioboola-Gha were open to reform, could be converted to Christianity through the work of missionaries funded by the Bible Society. The domestic pauper population by comparison were regarded in many ways as being beyond redemption.
The ‘undeserving’ poor, it was felt in accordance with prevailing religious opinion, could not benefit from charity; they had to learn the lessons for themselves. Philanthropy did provide a means of social control, but this was not its conscious aim; what seems harsh to modern eyes was seen as a necessary – if unfortunate – evil, at least until the later years of the nineteenth century.

In conclusion, then, there was no single social function of philanthropy for the upper and middle classes of nineteenth-century Britain. Doubtless it provided a means of advertising wealth and status although, as I have argued, this was its primary function for only a very small minority. Similarly, whilst it may well have fulfilled a need for some prosperous members of society to lighten their consciences over the disproportionate spread of wealth in nineteenth-century Britain, philanthropy provided a wide range of other functions which were more significant to its growth. It provided a sense of community in an ever-expanding world and a means for both men and women to achieve a sense of purpose. For women it provided an outlet for expression in the public sphere which might otherwise have been entirely closed to them. The most striking social function of nineteenth-century philanthropy in retrospect is the way it provided a means of social control; as the traditional social structure based on paternalistic landlords and dependent tenants broke down, philanthropy secured a way in which it remained in the interests of the lower orders to be deferential to the upper and middle classes. By appropriating elements of the middle class value system the working classes were able first to engage the sympathies of the philanthropists and later to acquire power in their own right, respectability being one of the pivotal factors in the fight for working class suffrage. However, this essay has sought to show that this was not a function contemporaries saw clearly; what to modern eyes is ‘social control’ was to these upper and middle class philanthropists a way of improving the lives of the lower orders. The social functions of philanthropy were many and complex, but they were underpinned by the idea that philanthropists were doing some good for society, that the working classes could be reformed and so a more perfect society achieved.


  • F. K. Prochaska, ‘Philanthropy’, in F.M.L. Thompson (ed.), The Cambridge Social History of Britain (1990), vol.3. 
  • T.L. Haskell, ‘Capitalism and the origins of humanitarian sensibility’, parts I and II, American Historical Review 90 (1985). 
  • R. J. Morris, ‘Voluntary societies and British urban elites 1780-1850: an analysis’, in P. Borsay (ed.), The Eighteenth-Century Town (1990) or in Historical Journal 26 (1983). 
  • B. Harrison, ‘Philanthropy and the Victorians’, in his Peaceable Kingdom: Stability and Change in Modern Britain (1982). 
  • J. Fido, ‘The charity organisation society and social casework in London 1869-1900’, in A. P. Donajgrodzki (ed.), Social Control in Nineteenth-Century Britain (1977). 
  • T. W. Laqueur, ‘Bodies, details, and the humanitarian narrative’, in L. Hunt (ed.), The New Cultural History (1989). 
  • L. Davidoff and C. Hall, Family Fortunes: Men and Women of the English Middle Class 1780-1850 (1987), chapter 10. 
  • J. R. Oldfield, Popular Politics and British Anti-Slavery: The Mobilisation of Public Opinion against the Slave Trade 1787-1807 (1995).

Cambridge Sanctuary Article, 2009

[A/N: This was in the 2009 Lent term edition of the Cambridge Sanctuary (a satirical student paper). One of my friends was editor at the time and so we were all forced to write something; I suppose that makes this as close to journalism as I've ever got...] 

Women’s colleges: what is the truth? Are they the bastions of militant feminism we all fear, or the communal shower fun we all dream about?

Luckily we – M.M.A.F. (Manly Men Against Feminism) – were in a position this month to answer this question once and for all. Invited to conduct an impartial study we arrived with a list of carefully pre-prepared questions for consideration. What is the real story behind shared rooms? How many female Cambridge undergraduates does it take to change a light bulb? And, are there any plans to consummate college marriages on a pay-per-view basis?

First impressions seemed only to confirm our fears. A whiteboard behind the head porter’s desk implored us to warm ourselves through ‘the vigour of hard work’ rather than by standing in front of an oven. Things went from bad to worse when we discovered the four-foot-high saucepan on display was, in fact, a symbol of feminine liberty and not what we had assumed. Apparently there is no requirement for girls to learn en masse how to have our tea ready on the table by 5pm. We at M.M.A.F. by no means condone the doctrines of the B.N.P. (we abhor all left-wing political groups); we were, however, rather forcefully reminded of their comments on politically-correct indoctrination and the resultant degeneration of the British way of life.

However this, as is so often the case, was a situation where first impressions were simply misleading. On closer inspection it soon became clear that far from creating a breed of bra-averting, jackboot-wearing feminists, female-only colleges are in fact doing more than any other institution (exempting, perhaps, Sharia law) to keep the rightful hierarchy of nature intact.

After a nervous wait in the college’s austere confines for our interviewees we were quickly put at ease. The delay had not, as we had feared, been the result of a misguided mission to prove female superiority by reducing us to anxious husks of our former selves. Rather, a last minute flurry of cosmetic touch-ups and wardrobe changes had prevented their punctual arrival. Our visit, it would seem, was something of an event for the girls; eligible males under the age of forty-five are, after all, something of a rarity in such an environment. Amidst much giggling and hair twirling we were able to uncover something that would shock the dour old bags who fought tooth and nail for continued female segregation: all they have achieved is to make their students that much more eager to please. 

When asked whether Naomi Wolf was on the right track with her critique of the media – an institution in the chokehold of male tyranny – X told us: ‘I totally admire her, she always looks amazing!’ Y nodded in agreement before adding: ‘There’s a women’s section in the library, just dedicated to anorexia and stuff. It’s so useful; before I came here I didn’t even know it was possible to survive on less than 500 calories a day.’ The girls laugh, before Z adds, in a more serious tone: ‘Of course we’re not saying everyone should try and starve themselves. It’s just that we have so much more competition than the girls at other colleges. I swear, if it wasn’t for the gag reflex I wouldn’t have had a date since Freshers’ week, not with these thighs!’

As the interview continues it becomes clear that attracting a man remains the number one priority for these twenty-first-century girls. Their work is put on the back burner as time and energy are instead lavished on creating colour co-ordinated outfits and Facebook-stalking boys from lectures. Z told us: ‘Cambridge is the perfect place to find a husband. But we have to work so much harder to attract the guys’ attention. When you live on the same corridor you’re just available. Ending up in bed with each other at some point is almost inevitable. Murray Edwards is so far away from the clubs that even if you do pull, you’re both sober by the time you get back to your room.’ Y agreed: ‘It’s true. Some of these guys are dating the most hideous girls. All mono-brows and hairy legs. It’s just by merit of their being around them all the time that they even stand a chance.’

M.M.A.F. asked the girls if this – intense – level of competitiveness ever led to problems amongst themselves. Our interviewees looked awkwardly at each other before X finally admitted that it did. ‘At the end of the day, it’s a dog-eat-dog world out there. Friendship might last forever, but if one girl is dragging the others down, well, it’s time to say sayonara.’ Y continued: ‘We used to hang round with this other girl, U, but she had a gammy leg and bad hair and put all the guys off. We just “forgot” to tell her when we were going out until she got the hint. I mean, what else could we do?’

It would seem, then, that just as an all-male environment has the power to turn us into salivating wrecks at the sight of a skirt (or a kilt, we have all been there), so an all-female college reduces its occupants to frenzied over-excitement at the faintest whiff of testosterone. We spoke to Mr. B. Cockburn, one of the city’s most seasoned flashers, who told us that he has been forced to drop both Murray Edwards and Newnham from his nightly circuit. ‘They get worse every year; back in the day they were used to not having the male company, you know? Now it’s a different story. They go stir crazy in there! I was just minding my own business, dangling my tackle in the ornamental pond, when a group of them set upon me. Like bloody vultures. I shouldn’t be surprised if I’ll need therapy.’

Our investigation drew similar conclusions. In under an hour we managed to collect over 30 mobile numbers and promises of seven formal swaps; try getting that kind of result in a co-ed institution! And finally, time for the question you have all been bringing yourselves to ‘la petite mort’ to have answered, namely: in the absence of the real thing, are the girls forced to turn to each other in desperation? We spoke to the college LGBT rep for the low-down. M.M.A.F. were given a mouthful, sadly of the verbal variety, telling us that lesbian identification had nothing to do with an inability to get a man. As some wise guy once wrote, ‘The lady doth protest too much, methinks.’ We invite you to draw your own conclusions.

Friday, 29 August 2014

A Good Death

A/N: Part II, paper 13 essay written in Michaelmas term of the 2009/10 academic year. 

What Constituted a 'Good Death' in the Middle Ages?

If medieval religion was obsessed with death, as historians have long asserted, it was perhaps even more acutely concerned with the ‘good death’. This was a death that conformed to certain expectations, both religious and social. Understandings of what constituted a ‘good death’ did not remain entirely static across the period, reflecting changes in theology and social structure. However, by the fifteenth century there was even a growing demand for ‘handbooks’ on how to die, such as ‘The Art of Dying Well’. Yet even with this apparent codification of the ‘good death’ there remained inconsistencies; the Church and the people were not always in agreement.

The cornerstone of a ‘good death’ in this period was a death in which the dying person’s faith was above suspicion. Temptation bore heavily on the dying, the deathbed becoming the scene of a battle between the forces of good and evil for the soul of the dying. The belief that this battle was happening around them can be seen in the way people interpreted what the dying person said. Ailred of Rievaulx’s fevered rambling in 1167 was interpreted by those around his deathbed as a dialogue with angels, for example. If the temptations planted by the devil, such as impatience and vainglory, could be overcome then, as Richard Rolle’s The Book of the Craft of Dying claimed, ‘The day of a man’s death is better than the day of his birth.’ Yet, as St. John had made clear, ‘He who does not believe is already judged.’ This understanding can be seen in the treatment of the non-Christian dead. Jews, pagans, excommunicates, and even unbaptised infants, were all barred from burial in consecrated ground. In the mid-thirteenth century Pope Alexander IV even went so far as to rule that anyone who knowingly buried a heretic in consecrated ground would themselves be excommunicated. To die convinced of the Christian faith was of the utmost importance.

It was not enough, however, simply to assume that the dying had faith. Instead they had to prove it. A long list of questions and answers, ranging from whether the individual believed in God to their willingness to amend their ways if they should recover, was prescribed. Beyond this the dying were expected to make confession on the deathbed and, in return, receive the sacrament of anointment and the viaticum. The importance of making confession can be seen in the ruling during intense plague years that emergency confessions could be made to laymen, or even laywomen. The anointing of the body too was believed to safeguard the corpse from being used as an instrument of the devil. Deathbed rituals readied the soul for its journey, and provided a time frame for the dying and those attending the deathbed. In spite of ecclesiastical reassurance many lay people continued to believe that those who were administered these last sacraments and survived would go on to lead a kind of half life, prohibited from eating meat or engaging in sexual intercourse. So far we have seen that a ‘good death’ consisted of proving one’s faith and following ritual.

The lengthy obligations tradition demanded of the dying hint at another component of the ‘good death’, namely that it be sedate and somewhat prolonged. Sudden death was feared by Christians, the popularity of St. Christopher being largely the result of his status as a protector against it. It meant that people did not have time to arrange their affairs, or make confession and receive the sacraments. In literary examples of ‘good’ deaths the dying often make long speeches, as Cardinal Wolsey does in George Cavendish’s ‘Life’, and have time to pass on advice and make wills. They are aware of death approaching and might, like monks, ask to be laid on the floor or dressed in a hair shirt as a symbol of piety. Even at the everyday level wills were routinely made on the deathbed; the highly stylised conventions of medieval wills suggest that those attending the deathbed made sure to prompt the dying into remembering what was expected of them in such documents. As Binski argues, a ‘good death’ in this period was ‘domesticated and regulated’.

Equally important were the contents of these vital documents. As the doctrine of purgatory became more clearly defined over the eleventh and twelfth centuries, the importance of provision for alms and charity in wills grew. Now it seemed more likely that everyone would have a chance of eventually reaching heaven, not just the likes of monks and nuns. These final acts of kindness were thought to help ease their souls through the pain of purgatory. For this reason the better off also organised trentals of masses for the anniversary of the death, whilst the poor gave money for votive lights or other small gifts as a way to get their name on the bede roll. In fact such provision was deemed so important that many organised their wills years before their death. Edward the Black Prince established his chantry chapel, to say mass for his soul in perpetuity, in 1363 – 13 years before he died. It was also a time to organise funerary arrangements. Philip Repingdon asked for a pauper’s burial in 1424 as a sign of his piety, whilst John Paston I’s family spent over £200 on his funeral in 1466. If one died a ‘good death’ there would be time to deal with all their remaining worldly affairs, so that they might concentrate solely on what was to come in their final moments.

Acceptance of death can be inferred from all this as a component of a ‘good death’. The Art of Dying Well, a popular pamphlet of the fifteenth century, advised that ‘one should never give a sick person too much hope that he will regain his physical health’. This might prevent the dying from properly preparing for death. Examples of those who accept death calmly, even with welcoming arms, are praised in medieval literature. The death of Abbot William of Fécamp in 1031, for example, was described in glowing terms by Ralph Glaber. The abbot, knowing his time was short, dispensed advice and then retreated to his rooms to wait patiently for death. This correlates with The Book of the Craft of Dying, which advised the dying to pray, cry out and weep (in the heart), commit the soul to the father, and then willingly give up the spirit, just as Christ did on the cross. In this way the process of death could be, to an extent, understood in a time when it was often not even certain if death had occurred.

For Philip Aries this regulation of death represented a ‘tame death’, a death that was accepted and carried out with the minimum of fuss and emotional attachment. However there is much to suggest that the ideal death discussed here, the death of the prescriptive literature, did not always correspond to the reality. For example, in the guidebooks the family and friends at the bedside grieved with restraint, the dying person themselves quietly accepting their fate. Yet evidence shows this was not always the case. Margery Kempe described how she was often invited to attend the sickbeds of the dying; her tendency to exaggerated grief, wailing and howling, was certainly not what the Church advised for such scenes. The dying too could be less than resigned to their fate. Cardinal Wolsey only became accepting in his final moments, having argued with his physicians and servants that he was going to recover. In the Danse Macabre this is exemplified in the peasant’s admission that “I have wished after death full oft, although I would have fled him now.” It is apparent that the ideal, the ‘good’ death, did not always reflect the reality of death.

In some instances there was disagreement between social groups over what constituted a ‘good’ death. For the Church a sudden death, as discussed, was the marker of a ‘bad’ death. Yet for some of the military men of the aristocracy a quick death on the battlefield was preferable to the sedate, domesticated death encouraged by the Church. Even in instances where a complete consensus might be expected there are complications. The death of criminals was likened to the death of Christian martyrs and, indeed, if they confessed and requested the sacrament of the viaticum executed criminals could be buried in consecrated ground. Even if the Church could not absolve their sins there was still hope that God might take mercy upon their souls; they were not beyond help. Again with cases of suicide, the worst death in medieval understanding, the individual would not necessarily be denied a proper burial within the boundaries of the graveyard. Concessions were made for those who were not mentally sound at the time of the act. Even if consensus could be reached over what constituted a ‘good’ death, it was not always clear cut what constituted a ‘bad’ one.

In conclusion the ‘good’ death of the middle ages was, to a large extent, a literary construct; an idealised version of something which was inevitably an individual experience. There was sometimes disagreement over what constituted a good death, as in the case of the militaristic aristocracy. A sudden death, in their opinion, was not always bad. Whilst a majority might have striven to conform to the Church’s prescribed way of death, and the popularity of death ‘manuals’ from the fifteenth century seems to suggest this was the case, the reality was often quite different.


  • P. Aries, Western Attitudes towards Death from the Middle Ages to the Present (1974) 
  • N.L. Beaty, The Craft of Dying: a study of the literary tradition of the ars moriendi in England (New Haven, 1970) 
  • G. Cavendish, Life of Cardinal Wolsey 
  • D. Crouch, ‘The culture of death in the Anglo-Norman world’, in Culture and the Twelfth-Century Renaissance, ed. C. Warren Hollister (Woodbridge, 1997) 
  • C. Daniell, Death and Burial in medieval England (1997) 
  • P. Jupp & C. Gittings (eds), Death in England, an illustrated history (Manchester, 1999), chaps 3-5. 
  • M.R. McVaugh, ‘Bedside Manners in the Middle Ages’, Bulletin of the History of Medicine 71 (1997) 
  • B. Poschmann, transl. F. Courtenay, Penance and Anointing of the Sick (1969) 
  • J. Shinners, ed. and transl., ‘The Art of Dying Well’, in Medieval Popular Religion 1000-1500: a reader 
  • R. Swanson, ed. and transl., ‘The Book of the Craft of Dying’, in Catholic England: faith, religion and observance before the Reformation (Manchester 1994)

Thursday, 28 August 2014

Death and the Reformation

A/N: Part II, paper 13 (i.e. "Death in the Middle Ages") essay written in Michaelmas term of the 2009/10 academic year.

How Fundamentally Did The Reformation Change Society's Response To The Dying and The Dead?

The early sixteenth century saw the beginning of the English Reformation, a process that had succeeded, at least temporarily, in separating England from the Roman Catholic Church by 1536. For historians such as Dickens this represented a victory for the masses; dissatisfied with incompetent and immoral clergy, Dickens claimed the impetus for the English Reformation came from below. If this were the case it would seem to suggest that society’s response to the dying and the dead would change significantly, as the customs and traditions of the old religion were swept away. However it is now apparent that the Reformation was a more piecemeal affair. Beliefs changed only slowly, customs, in many cases, even more so. As a result there is often a discrepancy between the way in which people responded to the dying and the dead in the aftermath of the Reformation, and the reasoning behind it.

Pre-reformation religion has often been accused, in the words of Galpern, of being a “cult of the living in the service of the dead”. Death permeated almost every aspect of the medieval world as a result of the Church’s insistence that one’s own mortality should never be far from one’s mind. To this end the dying were treated in very specific ways. The fifteenth century ‘guidebooks’ on how to die, such as the Ars Moriendi, explained how those present at the death bed should ensure that the dying person made an explicit statement of faith, and received the last rites. The advent of Protestantism might be expected to have changed this in two ways. Firstly, the sacrament of the last rites was abolished, branded as nothing more than popery. There was also a shift away from placing such high levels of importance on making declarations of faith.

The reason for this was the slow demise of the theory of purgatory. Whereas in the later middle ages a person might reasonably expect their soul to go to purgatory when they died, to await the verdict of the last judgement, by the 1530s there were serious doubts about its existence. Tyndale, for example, claimed that as purgatory had no grounding in the bible it was an invention that served only ‘to purge thy purse’. The Chantry Act of 1547 seemed to confirm this, sweeping away over two thousand such foundations. As people could no longer hope to purge their worldly sins after death, emphasis moved away from the deathbed and onto leading a pious and virtuous life. Faith became something to be proven in everyday actions, meaning the deathbed became less of a trial for the dying.

Similarly fundamental shifts can be seen in attitudes towards the corpse itself. Evangelicals, for instance, were adamant that the pomp and ceremony of late medieval burials be done away with; they were a symbol of vanity. In 1588, for example, the girdler Richard Walter refused even to allow the preaching of a sermon at his funeral, claiming that it was just ‘superstition’ and would serve no purpose. Yet lavish burials remained popular amongst the better off, perhaps nowhere better exemplified than in the ostentatious royal funerals of the period. However the justification behind such displays had subtly changed. Instead of principally reminding people to pray for their souls, the extravagant tombs of the wealthy served almost exclusively as markers of worldly status. It was important that the funeral was fitting and ‘proper’ for a person of their status, and pains were taken to ensure this more for the sake of their living friends and relatives than for the dead themselves. Increasingly it was accepted that the dead could not benefit from the actions of the living, a radical and fundamental shift away from medieval understandings.

Again, the issue of where an individual was to be buried changed in emphasis. Before the Reformation people wanted to be buried near the Church altar, so that they might receive the benefit of proximity to the holy wafer during mass. Post-Reformation burial requests, unsurprisingly given the removal of mass from Anglican Church activity, tended to have a different focus. John Veron claimed that ‘wheresoever we are buried, we are buried in the Lord’s earth’. If specific demands were made, they were often for burial near to what had been the deceased’s customary pew in Church. This again suggests a preoccupation with stressing how pious an individual had been in life, rather than with how to ease their suffering after death as had been the case in the later middle ages.

Further examples of changing beliefs can be seen even in the survival of old Catholic customs surrounding death. The tolling of a bell to let the community know somebody had died continued for decades in many places. In the Catholic tradition this was to encourage people to pray for the soul of the deceased. The post-reformation practice, on the other hand, seems to have been more about common expectations of the ‘proper’ procedure to follow on someone’s death. As late as 1623 William Reade, minister at Cropredy, Oxfordshire, offended parishioners by refusing ‘the ringing of a peal’ for a local widow, Margery Winter. There was no theological basis to people’s discontent; rather they felt that Reade was denying Winter the basic rights of the dead. The practice of sitting with the dead body overnight proved similarly difficult to stamp out. For the Church it symbolised superstition and dubious theology, but for those involved it was more of a customary expectation of things that ought to be done for the dead.

Post-Reformation wills might be expected to differ radically from those of the late medieval period. Then the primary concern of testators was often provision for their souls. This could include bequests of money and gifts to the Church to ensure their soul was prayed for, to ease their time in purgatory. With the removal of purgatory from the popular mindset, and the abolition of the chantries, this became both an unnecessary and a difficult request. During the reinstatement of Catholicism under Mary, less than a generation since its original suppression, Sussex schoolmaster Gabriel Fowlle asked in his will for ten priests to sing masses for him. Important, however, was his comment that this should be done ‘if they can be got’, suggesting that resources just could not meet demand. What we see generally post-reformation are requests for sermons instead of for masses. Alderman William Dane provided for a sermon every Sunday for thirty weeks in 1563, for example. This implies a fundamental shift in society’s response to death and the dying: it was understood that the dead could not benefit from prayers, but the living could reap great reward from the opportunity to hear sermons.

A comparable picture is observable when looking at post-reformation almsgiving and charity. A crucial part of medieval wills, charitable bequests ensured your name lived on, allowing the living to pray for your soul. Because of this connection with purgatory some people felt the practice should be forgotten. In 1604, for example, alderman Richard Goddard refused to provide a distribution of alms at his burial, arguing that such an action was ‘but a popish imitation of such as were desirous after their death to have their soul prayed for.’ This was a fairly extreme view however. Far more typical were charitable donations made to help the ‘deserving’ poor. To the medieval mind, the prayers of paupers were powerful because they did not carry sins such as avarice as heavily as the better off. With the removal of concerns over intercession for the soul we see a greater trend towards such distinctions as ‘deserving’ and ‘undeserving’. This reveals the fundamental shift in why charity was being given. Rather than to help the soul of the dead, it was to help the living. The undeserving poor, it was felt, would gain little from charity and so could be excluded.

In conclusion the Reformation resulted in a fairly fundamental shift in society’s response to the dead and the dying, particularly in relation to beliefs. Customs such as bell tolling for the dead, and almsgiving in wills continued but the reasoning behind the actions changed. With the demise of purgatory as official doctrine people ceased to tailor their responses to death to easing the pains of the soul. Instead the focus was on the living. Funeral sermons could offer nothing to the dead, but offered comfort to the living. Similarly requests for obits, trentals and other memorials for the dead were replaced with improving sermons for the good of the living. In a way post-reformation society saw the dead and the dying as providing a service for the living, rather than the other way around. 


  • P. Marshall, Beliefs and the Dead in Reformation England (Oxford, 2002) 
  • D. Cressy, Birth, Marriage and Death: ritual, religion, and the life-cycle in Tudor and Stuart England (Oxford, 1997) 
  • A. Kreider, English Chantries: the road to dissolution (Cambridge, 1998) 
  • N. Tyacke (ed.), England’s Long Reformation, 1500-1800 (1998) 
  • S. Wabuda, Preaching during the English Reformation (2002), Intro.

Tuesday, 26 August 2014

Muslim Identity 1850 - 1920

Part II, Paper 26. Essay Topic 16: Recasting Religion: Muslim Identity 1850 – 1920. Week 5 essay, I was supervised at Churchill by Leigh Denault.

Why and in what ways did Muslim politics emerge as distinctive after the mutiny of 1857?

Muslim politics was to become very important to the history of India, even leading to the creation of separate political states in the twentieth century. However this outcome was by no means seen as inevitable in the years after the first war of independence in 1857. Muslims may have been singled out by the British as the instigators of what they called the Indian Mutiny, but Muslim campaigners were just as likely to work with Hindu political activists, as against them. Indeed Muslim figures played a big role in the early National Congress. This essay will seek to show that, nevertheless, there were some ways in which Muslim politics were becoming distinctive after 1857.

There had been a significant Muslim population in India for centuries by the mid nineteenth century. Much of the subcontinent had been ruled by Muslim emperors since the sixteenth century, as part of the Mughal Empire. Indeed the British method of exerting power from behind a figurehead traditional ruler meant that Muslims still filled many elite positions within Indian society. Bahadur Shah retained the title of King, for example, even as the area he had actual control over contracted massively, although tensions rose as it became clear his privileges would not be passed on to his heirs. The Muslim population were afraid of losing their advantages within society.

During the mutiny of 1857 many Muslims saw an opportunity to restore the Mughal Empire, and so regain the authority that had been eroded by European powers, particularly the British. Muslims rallied behind the figure of Bahadur Shah, determined to see him fully reinstated as emperor. The British were forced out of Delhi and Bahadur Shah held court for the first time in many years. Such actions compared unfavourably in British eyes with, for instance, the Sikhs of the Punjab who remained loyal and helped the British regain control of Delhi. The massacre at Kanpur, where dozens of British women and children were murdered, served to confirm the British view that Muslims had incited the entire rebellion and could not be trusted.

Robinson identifies deliberate division of society by the British as one of the main reasons historians have given for separatism among Indian Muslims. The argument is that the desire for revenge after 1857 led the British to treat Muslims very harshly; by singling the Muslim community out, a sense of identity then began to emerge in response. Certainly in the days after the recapture of Delhi punishments were severe. At Kanpur some Muslim sepoys were sewn into pig skins before being hanged, something which punished both the body and the soul, as pigs are deemed unclean by the Muslim faith. Bahadur Shah was exiled to Burma, and anti-Muslim sentiment remained strong for the rest of the year. Even in 1871 W. W. Hunter felt there was concern enough to make his writing ‘The Indian Musalmans: Are They Bound in Conscience to Rebel Against the Queen?’ worthwhile.

However there is little evidence that this ill-feeling continued to cause substantial problems for Muslims in India. In 1859 Sayyid Ahmad Khan wrote a critique called ‘The Causes of the Indian Mutiny’ which highlighted Muslim fears that British education was attempting to force Christianity upon India. Officially the British resolved to be more conciliatory. They gave financial aid to the Aligarh Muslim University founded by Khan in 1885, for example. In addition large numbers of Muslims continued, as they traditionally had, to serve in the civil service. The 1881 census revealed there were over 50 million Muslims in India; if they were believed to pose a serious threat, surely the British reaction at this time would have been much stronger.

Other historians have argued that Muslim politics began to emerge as a distinctive entity in response to a growing sense of Hindu nationalism. The 1860s saw Ishwar Chandra Vidyasagar attempt to consolidate the prominence of Bengali, for example, and Michael Madhusudan Dutt reframed traditional Indian mythology into something more coherent. Where before there had been many ‘Hinduisms’ there was now a growing cohesion to the faith. To be Indian was to be Hindu, and vice versa. Clearly this was an exclusive viewpoint, something which many Muslims must have felt keenly.

King’s work on the controversy surrounding government language policy in colonial India exemplifies the tensions. The Islamic elites spoke Persian or Urdu, the latter earning greater prominence when Persian was replaced with English for administration purposes in the 1830s. The British generally preferred Urdu to be used at this time, and there was little encouragement given to vernaculars. However as Hindu nationalism grew stronger there was a feeling that Hindi should be recognised. This was particularly evident in published literature. By 1925 Urdu publications had shrunk from half of all published literature to one eighth, the equivalent of one sixth of the Hindi output. Such figures suggest that Muslim groups would attempt to form closer connections as a defence against the deluge of Hindu nationalism.

This could explain the setting up of Muslim dominated schools. There was, as already mentioned, the Aligarh Muslim University, which styled itself as an Indian counterpart to Cambridge University. In doing so it could be said that the Muslim population was attempting to project respectability to the British, and win support from them which might otherwise go to the Hindu nationalist movement. More notable perhaps is the foundation of the Darul Uloom Deoband in 1866, which strove to educate theologians and clerics in order to spread Sunni Islam. Graduates from the school were instrumental in securing conversions to Islam in Bengal. These schools taught loyalty to Islam, and stressed its importance as a way of life. The schools also created their own sense of community. When Sayyid Ahmad Khan distanced himself from the emerging National Congress in the late nineteenth century, so did his pupils.

However the picture is not as straightforward as it might first appear. The aim of many boys at these schools remained admission to posts within the British administration. So, whilst their faith was important to them, their sense of nationalism was in many ways limited. There are other contradictions. At Aligarh Hindu day students were welcomed, especially in the institution’s early years. Sayyid Ahmad Khan had set up an alternative to the National Congress, the Muhammadan Educational Congress, in 1886. But, after his death, his former pupils no longer felt the need to conform to his wishes to keep the two separate. In 1909 the Muslim League and the Muhammadan Educational Congress broke apart, the former moving slowly towards alliance with the National Congress. What is clear is that there were not two polarised ‘nations’; rather there were points on which they disagreed.

This idea is supported by Chris Bayly’s findings on Hindu and Muslim clashes in the eighteenth and early nineteenth centuries. Delhi was mostly free of Hindu / Muslim tension, even though this had been a seat of Muslim power which was being steadily eroded. On the other hand Calcutta saw violence between Hindus and Muslims as early as 1789 at the festivals of the Muhurram and Durga Puja, the Muslim population reacting violently to Hindu prosperity in the region. The strength of separatist feeling continued to differ from place to place into the late nineteenth and early twentieth centuries.

From 1913 there was a new approach visible. The politicians of the Muslim League sought to unite with the National Congress, something which would enable them to work together to obtain a greater say in the way India was governed by the British. In 1916 this arrangement was formalised by the Lucknow Pact. Gandhi became a leading figure in Muslim political policy too, proving that faith was not necessarily completely divisive at this time. Muslim and Hindu cooperation in the Congress was often rocky, but it was not until later that the campaign to truly separate the two faiths into two nations began in earnest.

In conclusion Muslim politics emerged as distinctive in the aftermath of 1857 in a number of ways. Education became very important as Muslims sought both to secure their faith, ensuring there were sufficient teachers available for the future, and to retain their traditional elite status in the law courts and the civil service, giving them the respectability needed in order to be taken seriously by the British. Some policies were exclusive, such as the refusal to join the Indian National Congress, but this later gave way to a more cooperative stance, with the involvement of Congress and the Muslim League. At this time Muslim politics was distinctive in that it worked for Muslim benefit, trying to keep Indian vernacular languages out of official circles for instance, but it was only later that it became truly separatist.


  • Powell, Avril A. Muslims and Missionaries in Pre-Mutiny India, London Studies on South Asia No. 7. Richmond, Surrey: Curzon Press, 1993. 
  • Bayly, C. A. "The Pre-History of 'Communalism'? Religious Conflict in India, 1700-1860." Modern Asian Studies 19, no. 2 (1985): 177-203. 
  • Robinson, Francis. Separatism among Indian Muslims: The Politics of the United Provinces' Muslims, 1860-1923, Cambridge South Asian Studies. London: Cambridge University Press, 1974. 
  • Lelyveld, David. Aligarh's First Generation: Muslim Solidarity in British India. Princeton, N.J.: Princeton University Press, 1978. 
  • Metcalf, Barbara Daly. Islamic Revival in British India: Deoband, 1860-1900. Princeton, New Jersey: Princeton University Press, 1982. 
  • King, Christopher Rolland. One Language, Two Scripts: The Hindi Movement in the Nineteenth Century North India. Bombay: Oxford University Press, 1994.

Monday, 25 August 2014

Gender in Early Colonial India

Part II Paper 26, Week Two. Topic 3: Imperial Transitions: Race, Gender and Culture c. 1757 – 1840.

Why and how was the status of women significant for early colonial reformers?

Studies into women in early colonial India have had a growing place in the historiography since the 1980s. Initially focussing on the European female in a colonial context, today there exist a number of in-depth studies into the importance of gender in the context of early colonial rule. At a time when women’s voices were seldom heard in the public sphere, their status, and the ways in which it was to be protected or enforced, became a key issue for contemporary men. British men could justify their own imposed rule by claiming it was for the protection of women. Native men could take interest in the position of women as a means of protecting traditional identities, or as an act of subversion to British rule. This essay seeks to outline how and why early colonial reformers chose to use women in their discourse, and to show the huge impact this made both in the colonies and back home.

From a British perspective the treatment of women by native societies was coming to be thought of as directly correlating to how advanced and ‘civilised’ that society was. Thus the people of Tahiti, who treated their womenfolk well, were believed to be superior to the Maori, who excessively burdened their women and committed violence against them. By the time of early British rule in India, then, it was established that women, or at least higher class women, ought to be protected from the rigours and harsher side of life, as were their European counterparts.

A good example of how this worked in practice can be seen in British reactions to the practice of ‘sati’, a Hindu custom whereby the widow of a recently deceased man would volunteer to immolate herself on her husband’s funeral pyre. Modern research has shown that before colonial rule it was a relatively rare practice, and confined to the higher castes. However contemporary British journals such as the ‘Missionary Register’ framed their accounts in such a way as could not fail to incite horror. In the early years of the nineteenth century the Missionary Register stated: ‘Let every Christian woman who reads the following statement, pity the wretched thousands of her sex who are sacrificed in India to a cruel superstition…’ Eyewitness accounts from respectable Europeans, such as Francois Bernier, served only to highlight the gruesome nature of sati.

“It is true, however, that I have known some of these unhappy widows shrink at the sight of the piled wood; so as to leave no doubt on my mind that they would willingly have recanted, if recantation had been permitted by the merciless Brahmens; but those demons excite or astound the affrighted victims, and even thrust them into the fire.” – Francois Bernier, 1667. 

The implication here is that India was being held back from modernization by clinging to these outdated ‘superstitions’. Sati provided the British with another reason for insisting that religious law be codified. In principle, British acceptance of native law seemed a sign of fair practice. In reality, however, the formalization of laws which had once been flexible made things more difficult for native peoples in India seeking redress. On the issue of sati, British officials used lesser-known scripture, framed by the likes of Ram Mohan Roy, to try to curtail its use; in 1813, for example, sati’s supposedly voluntary status was cited by the British as justification for outlawing it for girls under the age of 16, and for women who were pregnant or intoxicated, as they were not deemed capable of making such a decision. Here we see a mode of rule, one which appeared to respect custom and tradition while in fact making major changes under their banner, being tried out and refined with the issue of women as its cover.

The status of women was thus an important issue for native men too. Sati was a practice whose fate was discussed and, ultimately, decided by men. There is evidence that as the British began to restrict its use, some widows came under increasing family pressure to go through with it, a defiant act of traditionalism in the face of British ‘modernity’. The number of recorded instances of sati certainly rose: in 1815 there were 378 cases; by 1818 this had risen to 839. Until it was finally outlawed in British India in 1829 it served as a constant source of antagonism, made worse by its close association with tradition and religion. Women were rarely involved in the discussion, unless their words were recounted by European men, often in accounts detailing how they seemed to have been drugged, or were simply too fearful not to go through with it. Alternatively, European women might be encouraged to campaign on behalf of their colonial cousins, as Ward’s ‘Farewell Letters’ put it, for ‘these females doomed to a horrible death’. Primarily, however, women were excluded; this was a discourse about them, not one for them to participate in.

If sati provided a platform on which native and European men could disagree, popular culture provided one on which they had common ground. Sumanta Banerjee describes how, before colonial rule, Bengali women found entertainment in songs such as the agamani and vijaya songs sung in celebration of the goddess Durga during Durga Puja. Other categories of song, such as the kheur, were lively and lighthearted, often somewhat risqué in nature. As the nineteenth century got underway, the rising Bengali elite, the bhadralok, began to take exception to these forms of entertainment. Increasingly, members of the bhadralok received a British-style education and modeled themselves on British elites, wearing western-style clothing for example. In doing so they began to form a self-identity around the same principles of hard work and moral purity as the British middle classes. As a result, more emphasis was placed on women’s moral worth, and opposition to such popular entertainment grew. Here we see British cultural values appropriated by the educated sectors of the Bengali population and applied against what they viewed as their own ‘low’ culture.

This is an example of a shifting awareness of the definition of status. Before colonial intervention, higher-caste women were kept separate from men: Hindu women in a zanana, for example, would cover their faces when a man entered the room, and would not go outside unchaperoned. With the arrival of the Europeans, however, came an increasing emphasis on internalized behaviour, as well as its outer manifestation, as a marker of status. Women’s clothing, for instance, became an important issue. The wearing of a sari represented a continuation of traditional practice, and it was the norm for wives to continue wearing it even when their husbands adopted the western mode of dress. Yet the introduction of more substantial undergarments could be said to reflect the new concern with morality.

Women’s status could attract the attention of early colonial reformers in other ways. Protestant missionaries, for example, focused on the perceived problem of nudity amongst lower-caste converts to Christianity. Under customary law, women from these castes could not cover their upper bodies. Upon conversion, however, the idea of demure clothing spread quickly, and women rebelled against what was expected of them. In Travancore in the 1820s, Christianized Nadar women were beaten and stripped in the streets for wearing the Nair breast cloth. Under pressure, local rulers, such as the maharaja of Madras, made concessions and allowed Nadar women to wear the kuppayam, the traditional tight-fitting jacket worn by Syrian Christians. Whilst many Nadar women continued to flout such rulings and wear the Nair cloth regardless, the episode shows how European men could use women as a vehicle to prove their own influence in early colonial India. Schools were set up to teach Nadar girls how to make European-style lace, the income from which allowed them to buy their freedom from their landlords. An outside religion, under the control of men, could thus subvert the traditional power structure.

Women themselves, however, rarely gained from early colonial intervention. Nirmala Banerjee, in her essay on working women in colonial Bengal, describes how the female workforce was made worse off. British legislation on the cloth industry, for example, saw a huge decline in the number of spinners: in 1812-13 there were 330,000 spinners in the Patna and Gaya districts alone; by the time of the 1881 census there were only 200,000 in the whole of Bengal. Artisans found themselves discriminated against, such as the singers who now fell foul of bhadralok expectations of women, and women workers were pushed into the agricultural sector, or out of work altogether. This again may reflect the new acceptance of British moral culture amongst Bengali men, suggesting that women’s status was an issue over which both groups of men could find common ground.

The status of British women was also significant for early colonial reformers. Upper- and middle-class women arriving in India had to be protected from the natives; the importance of wearing full dress, including flannel undergarments, was stressed, for example. If they did not, it was presumed, they risked sliding backwards in terms of civilized behaviour. They were held up as an example to the increasingly westernized section of Indian men of the way their own women should behave. Their safety and protection also formed a means of justifying harsh and restrictive sanctions against the Indian population, in the aftermath of the Indian Mutiny for example.

Sometimes the status of women could act as justification for a lack of reform. European men, for example, might describe in their accounts the health and vitality of women’s bodies, rather than highlighting the fact that it was their lack of clothing which allowed them to observe this. Cohn points out that Europeans often cited the difference in skin colour as a reason why it was less shocking to see a half-naked Indian than it would be to see a similarly attired fellow European. In this context Indian women were framed as ‘exotic’, and perhaps excused from the reforming zeal of the Europeans around them.

In conclusion, the status of women was very significant for early colonial reformers. By flagging up instances of the oppression of women, they found justification for enforcing their own rule. The reverse is also true: instances where native women were treated well provided justification for allowing the continuation of some traditional and religious practices. Women’s status provided ways in which the British could demonstrate their authority, the ultimate outlawing of sati for instance. At the same time it could give British men and their Indian counterparts common ground; the education of Bengali women was something striven for by missionaries and the bhadralok alike. Overall, the status of women was an issue which gave early colonial reformers a platform from which they could prove the need for, and the success of, their reforms, thus allowing them to widen the scope of their activities.


  • Ghosh, Durba. "Gender and Colonialism: Expansion or Marginalization?" The Historical Journal 47, no. 3 (2004): 737-55. 
  • Sangari, Kumkum, and Sudesh Vaid. Recasting Women: Essays in Indian Colonial History. New Brunswick, N.J.: Rutgers University Press, 1990. 
  • Documents on ‘sati’ at the following website: 
  • Cohn, Bernard S. Colonialism and Its Forms of Knowledge: The British in India, Princeton Studies in Culture/Power/History. Princeton, N.J.: Princeton University Press, 1996. 
  • Mani, Lata. Contentious Traditions: The Debate on Sati in Colonial India. Berkeley: University of California Press, 1998. 

Supervision Comments - Overall, an engaging exploration of the topic that touches on all of the major themes. The conclusion is particularly clear and strong. You might perhaps focus a bit on writing style: some of your important points are lost in slightly confusing syntax. For revision, read the introduction to Malavika Kasturi's book on Rajput lineages, Embattled Identities, which deals with another key colonial intervention around gender, female infanticide. Comparing the anti-infanticide campaigns to those against sati suggests that sati, as a less-followed practice than infanticide, presented a soft target to early reform efforts. You should definitely include more about the Utilitarians and Liberal reformers and the battle between Orientalists and Anglicists over how British India should be ruled.

Sunday, 24 August 2014

Mughal Decline

Part II Paper 26, Week 1. I was supervised by Leigh Denault at Churchill.

Was Mughal decline inevitable? Discuss the view that Indian society and economy was ‘divided but buoyant’ in the eighteenth century.

Traditionally, the great Mughal Empire, which at its height covered over four million square kilometres of land, was thought to have fallen into irreversible decline in the eighteenth century. Early histories, such as that by Sir Jadunath Sarkar, pointed to the breakdown of centralised administration, claiming the result was a descent into political chaos and confusion. This in turn was said to have created economic and social problems as individuals vied for power. In more recent years, however, this view has been strongly challenged. The increasing focus on regional studies has led to a reassessment of the buoyancy of the pre-colonial economy and the stability of social structures. Commercialisation is no longer seen as an innovation of the Europeans, with evidence of a move towards private ownership in many areas. Above all, what has been shown is that there is no easy summary that can be applied to the whole of India; the eighteenth-century experience was far from uniform.

At the very beginning of the eighteenth century the Mughal Empire was widely understood to be as strong as, if not stronger than, it had ever been. Huge swathes of land were under Mughal control, covering most of present-day India, Pakistan and Afghanistan. Early signs of change became evident, however, following the death of Aurangzeb, the sixth Mughal Emperor, in 1707. Bahadur Shah, Aurangzeb’s successor, was not only forced to fight off rival claims to the throne from his brothers Azam Shah and Muhammad Kam Baksh, he also inherited the task of trying to stamp out elite factionalism and of overcoming hostility from the provinces over his father’s strict enforcement of Sharia law. Upon Bahadur Shah’s death in 1712, another bitter succession struggle broke out between his sons. The short and unpopular reign of Jahandar Shah was brought to an end in battle in 1713 by Farrukhsiyar, who then began his own reign, dominated by advisers such as the self-serving Sayyid brothers. This tumultuous sequence of events gives an insight into the sudden instability within the official administration. Without strong leadership from the centre, it is unsurprising that local figures of authority should try to consolidate their own power.

This certainly seemed to be the case in some areas of the empire. Om Prakash highlights the example of Murshid Quli Khan in Bengal. Maintaining the level of revenue collection expected by the central Mughal administration, Khan proved himself capable and was made Subedar in 1716, giving him and Bengal effective autonomy. This kind of arrangement, whilst a sign of weakening central control, could be said to support the idea that Indian society and economy was ‘divided but buoyant’. Clearly, sufficient revenue was still being raised in Bengal, and the official recognition of Khan from the centre suggests that this division was not overly problematic.

Perhaps, however, this is a misleadingly optimistic view. In other parts of India the situation was not so favourable. Muzaffar Alam describes how, in northern India, Mughal forces struggled to defend territory from local competitors. In 1708, for example, the governor of Awadh resigned, citing his insufficient authority to deal with the threat from ‘recalcitrant’ zamindars (officials employed to collect taxes from peasants) as a significant part of his reasoning. In 1709 Daruban Singh and his clansmen invaded and held Ghazipur, defeating the Mughal forces sent to remove him. This hints at the extent of the threat such individuals could pose. They often had charge of large private armies and cavalries; the Gaur Rajput rebels of Sarkar Khairabad, for example, had control over no fewer than 25 fortresses. Aside from being unable to defend territory from rebels, Mughal forces were also faced with fighting between rival factions. In 1715, for example, the zamindars of Samanpur and Pargana Bhagwant fought and killed the zamindar of the ri’aya of Bahramganj. In such conditions it is difficult to believe social and economic ‘buoyancy’ could have been maintained.

Traditional accounts of this period of Indian history assumed that the impact of the late seventeenth-century economic crisis was hugely damaging. Faced with internal succession disputes and unfeasibly high taxation demands from the new local elites, the Indian economy was believed to be failing and therefore in desperate need of European commercialism, such as that imposed by the British from the late eighteenth century. Work by the likes of Alam and Bayly has gone a long way towards disproving this assumption. The scope of the so-called ‘crisis’ is now questioned, and its effects are thought to have been, in many instances, negligible. In the early eighteenth century, far from being a stagnating economy, India actually possessed upwards of a quarter of the world’s total manufacturing capacity and was a major player in the world economy. Accountancy skills, formerly thought to have been introduced to India by Europeans in this period, had in fact been flourishing since the sixteenth century, particularly in western India. In more social terms, there is increasing evidence that the caste system was not as restrictive as previously believed, and that there was in fact a relatively high level of worker mobility.

The ‘modernity’ of the economy of eighteenth-century India can be seen in the high level of market dependence, even amongst the peasantry. Trade was an incredibly important part of the economy, and not just with the Europeans: large amounts of trade were also done with other parts of Asia and, of course, internally for the domestic market. Washbrook notes, for example, that in a single salt season between 70,000 and 120,000 bullocks laden with goods were expected to visit each of the eight major salt-trading centres along the southeast coast of India. Prakash and Chaudhury maintain that the economy of Bengal was prosperous throughout the first half of the eighteenth century and that, in fact, it was only in light of increasing interference from the European trading companies, the British East India Company in particular, that real problems emerged. Sivakumar and Sivakumar argue, for instance, that real earnings from agricultural work in Chingleput were three times higher in 1795 than in 1796, after the land settlement.

A case study demonstrating this model of the Indian economy can be seen in Prasannan Parthasarathi’s work on the cloth industry in South India. In the early eighteenth century cloth was already a massive business interest; in Awadh, trade with the Europeans actually represented only a very small percentage of the entire cloth trade. Weavers held a great deal of power within the workforce, dictating whom they worked for, for example. This situation rested on custom, such as the weavers’ ability to annul contracts for work by giving back the advance payment, but also on newer trade developments: merchants competed in the marketplace to sell cloth to the Europeans at a profit. Oppression of the weavers, then, was not something that had always been there, as was once believed. Instead it was a situation created by ever tighter controls on manufacture from the British. The policy of direct advances from the 1760s, cutting out the merchant middlemen, was unpopular because up to two-thirds of the advance was paid in yarn. In addition, the custom whereby a contract could be annulled by repaying the advance was soon eroded. In this way the weavers, a group of independent workers, were reduced to dependency on the Company because of its growing monopoly. This suggests that, were it not for outside forces, the Indian economy could indeed have been ‘divided but buoyant’.

The main obstacle to this interpretation comes, as does its main support, from regional studies. The contemporary traveller Dean Mahomet devotes page after page of his journal to describing the differences between the regions he passes through on his journey. The Mughal Empire stretched over millions of square kilometres and so it stands to reason that the regional experience was not uniform. Some areas were prosperous: revenue collection in Bengal rose from around £2 million in 1765-6 to £3.33 million in 1770-1, and the fact that it was paid in full suggests its inhabitants were well off enough to do so, even after the transfer of authority away from the central Mughal courts. Other areas struggled, for example those with unscrupulous individuals in charge and a heavy reliance on the jajmani system (whereby families of different castes were expected to perform certain services for each other). In some areas European intervention might be welcomed; in others it was vehemently resisted. The point is that it is hard to generalise when considering such a massive expanse of land, populated by such a wide variety of peoples.

This might seem to suggest that Mughal decline was inevitable in that the Empire had been too centralised, too determined to enforce top-down policies such as Sharia law, even in areas where they would inevitably be resented, such as the Sikh-majority Punjab. This has long been seen as the secret of British success in obtaining power in India: by working with local elites and seeming to be sympathetic to regional conditions, the British could gain support from the grassroots. Stein points to the practice of ‘military fiscalism’ that grew in the eighteenth century, whereby standing armies were maintained by the Company to protect its interests which, at the same time, provided employment, whether in the army itself or in supplying it. Again, though, the success of this policy was often linked to local circumstances.

For a long time, study of this area was swamped by expectations formed from the big overarching theories of history. Washbrook suggests that because India’s story had not terminated in the creation of a modern industrial society, historians were disinclined to look at the eighteenth-century Indian economy through a capitalist or commercial lens. Such developments were assumed to be European impositions, although we now have evidence for the development of an industrialist class structure before the Raj. Some regions, such as Mysore, were already controlling production in the early eighteenth century, and monopolies on goods controlled by local elites were actually fairly common.

In societal terms, again, there is no coherent general overview. Many areas had maintained their own character throughout the period of Mughal rule; the jajmani system, for example, had never been widespread in Bengal, and was close to dying out in Maharashtra and Gujarat by the eighteenth century. In many areas there was continuity: by the 1770s, for example, artisans in Banaras were being supported by the patronage of the new great merchant families, who took over the role traditionally held by the Mughal nobility. In this way Mughal culture and societal expectations were preserved.

In conclusion, Mughal decline was not necessarily inevitable. Had there been more competent successors to Aurangzeb, perhaps the decentralisation of power could have been curbed. Even if it had not, greater co-operation between the rising local elites and the central power could have helped to maintain Mughal cultural dominance at the very least. What was most likely inevitable was continued regional diversity, which is what we see throughout the eighteenth century. This supports the suggestion that the Indian economy and Indian society were, to a certain extent, always ‘divided’. Whether or not they were ‘buoyant’ is a more difficult issue. In some areas, such as the south Indian cloth industry, economic prosperity continued throughout much of the eighteenth century. Yet other areas struggled, for example the older port towns which were losing out to the new European centres of trade. Above all, the question should be considered in a regional context, rather than in the terms of the overarching historiographical theories which have framed the study of Indian history for so long.


  • Alavi, Seema. The Eighteenth Century in India, Oxford in India Readings. Debates in Indian History and Society. New Delhi: Oxford University Press, 2002. 
  • Barrow, Ian J., and Douglas E Haynes. “The Colonial Transition: South Asia, 1780-1840.” Modern Asian Studies 38, no. 3 (2004): 469-78. 
  • Fisher, Michael H. “Review: British and Indian Interactions before the British Raj in India, 1730s-1857.” The Journal of British Studies 36, no. 3 (1997): 363-70. 
  • Parthasarathi, Prasannan. The Transition to a Colonial Economy: Weavers, Merchants and Kings in South India, 1720-1800. Cambridge: Cambridge University Press, 2001. 
  • Travers, Robert. “The Eighteenth Century in Indian History.” Eighteenth-Century Studies 40, no. 3 (2007): 492-508. 
  • Washbrook, D. A. “Progress and Problems: South Asian Economic and Social History C.1720-1860.” Modern Asian Studies 22, no. 1 (1988): 57-96. 
  • Mahomet, Dean. The Travels of Dean Mahomet: An Eighteenth-Century Journey through India. Edited by Michael H. Fisher. Berkeley: University of California Press, 1997. 
  • Jodhaa Akbar. Dir. Ashutosh Gowariker, 2008.

Saturday, 23 August 2014

Indian Nationalism

A/N: Pt II Paper 26. Essay Topic 15: Imagining India: Indian Nationalist Thought. Week 4 essay, I was supervised at Churchill by Leigh Denault. 

Was Indian nationalist thought ‘a derivative discourse’?

In the latter part of the nineteenth century, Indian nationalism began to emerge as a concept. Attempts were made by intellectuals to create a coherent history of India, a history that could unite its people and encourage a unified identity. This spilled over into administration, and politics began to take on an overtly nationalist flavour. In 1986 Partha Chatterjee declared that Indian nationalist thought was, in fact, a ‘derivative discourse’, implying that it was simply modelled on British nationalist thought. This essay argues that whilst aspects of British political movements were taken on board by Indian nationalists, it is overly simplistic and dismissive to claim that Indian nationalist thought was ‘a derivative discourse’.

Indian nationalist thought certainly looked to British examples to establish itself. Nandy, for example, describes how, in the early stages in particular, Indian writers sought to frame traditional Indian stories in ways that would ensure they were taken seriously by the British. Bankimchandra Chatterjee, for instance, reinterpreted the story of Krsna, portraying him as ‘a normal non-pagan god who would not humiliate his devotees in front of the progressive westerners’. Again, in the work of Michael Madhusudan Dutt there is a reframing of Indian mythology to better fit a western world view: Nandy points to his Bengali retelling of the epic Ramayana as being especially westernised. Rather than the three genders apparent in the older stories, purusatva (masculinity), naritva (femininity) and klibatva (hermaphroditism), there is a clear male/female divide. Evidence such as this would seem to add credence to the idea that Indian nationalist thought at this stage was simply ‘a derivative discourse’.

There were attempts to codify Hinduism into a coherent national religion, rather than a disparate network of beliefs bound by common rituals. This has been seen by some historians as an example of the ‘christianization’ of Hinduism. Perhaps, however, this does a disservice to what reformers hoped to achieve. Making religion more easily accessible and understandable may have been restrictive on the one hand; the attempt made by the British to catalogue scriptural doctrine and apply it as law, as in the case of sati, provides an example here. But on the other hand it also provided a rallying point for Indian nationalism. Self-identification is an important part of nationalism, and by installing a more coherent structure of belief, people could feel they had more in common with each other. This suggests that whilst the idea may have come from Christianity and the west, these early reformers were not merely attempting to carve out a religion in the same mould as Christianity.

This use of western methods for Indian ends can again be seen in the work of Ishwar Chandra Vidyasagar. Throughout the 1860s, for example, he worked on simplifying the Bengali alphabet, making it easier to set in type. He embraced modern, western, science and translated biographies of eminent scientists such as Newton and Herschel into Bengali. At times he even seemed to go against the principles of Hinduism by insisting that all Indians, regardless of caste, should receive a good education. However, in taking on board western knowledge and principles, Vidyasagar sought not to ‘westernise’ India as we might understand it today; rather, he sought to create a sense among the Indian people of belonging to India, of being Indian. Vidyasagar supported the Widow Remarriage Act of 1856, something which was believed to go against the fundamental principles of Hinduism. This is not simply an example of agreeing with British thought on the subject; it is an example of how British thought could be appropriated. Vidyasagar believed the act would provide a better quality of life for Hindu women, and thus create a greater sense of pride in, and self-identification with, the faith.

The form which the Indian independence movement was to take might seem to support the idea that Indian nationalist thought was ‘a derivative discourse’. The Indian National Congress, a political party established in 1885, had much in common with western movements. British domestic protest was usually political in nature by this time; the Chartist movement, for example, had attempted to unite the middle and working classes around a political rallying point. The Indian National Congress grew throughout the final years of the nineteenth century to become the leading voice in the struggle for freedom from British rule, and by the early twentieth century people were looking to the party for leadership.

However, the National Congress also provides an example of the way in which Indian nationalist thought was a divided discourse, regardless of whether or not it was derivative. The party split over attitudes towards the British in 1907, for instance, into the Garam Dal and Naram Dal factions. The Khilafat movement, for the protection of the Ottoman Empire, again split the party, resulting in a number of key figures leaving to establish the Swaraj Party. This shows that even in a discourse that took its cue from western models, there was still great variation. Religious divides between Hindu and Muslim beliefs were to become increasingly important.

This can be seen clearly by looking at the great central figures of Indian nationalism like Gandhi and Nehru. Nehru wanted to unify the Indian people behind a discourse of modernity, whereas Gandhi sought to emphasise the past. At first it seems as though this is a simple case of Nehru being more ‘westernised’ in his outlook than Gandhi. Indeed, in many ways Gandhi seems very traditional, for example in his refusal to accept the Smritis as set in stone. This reflects the reality of older Hinduism, which was relatively fluid, and meant that he could stand against the caste system with some justification, arguing that it was ‘contrary to universal truths and morals’. However, Gandhi had been educated in Britain, and was close friends with C.F. Andrews, an English Christian cleric. Gandhi knew how to work within the British system of administration, for example making public appearances and framing his arguments in terms easily understood by a foreign audience.

The idea of peaceful protest was not unique to India, nor to Hinduism. Romila Thapar shows how ‘historic’ Hinduism was a construct of Indian nationalists of this period, who sought to present a structured vision of the past which suited their present circumstances. Saivite persecution of the Sramanic sects is pointed to as proof that Hinduism was not the strictly peaceful religion it later came to be portrayed as. What we see, then, is an appropriation of the western practice of writing history to fit modern preconceptions for nationalist purposes. This means that whilst the nationalist thought apparent in such work is in one sense derivative, it is also distinct, in that it provides a uniquely ‘Indian’ history for nationalism to grow around.

Thapar argues that a Hindu community was an essential requirement for the mobilisation of nationalist sentiment: there needed to be some common cause for people to rally around. However, this could be as divisive as it was cohesive. The focus on Hinduism alienated members of other faiths, as is particularly evident in the schisms between the National Congress and Muslim groups. Later this would again be witnessed over language, for example. Agreement that English should be ousted as the official language was broadly reached, yet in favour of what remained a problematic question: there was no ‘national’ language that could easily replace it, and even ‘majority’ languages such as Hindustani were still spoken only in relatively small areas of India. Here we see that not all nationalist discourse was derivative. Language had never been a major focus for British nationalism, for instance, even in areas where other languages were spoken, such as Wales and parts of Scotland. So a derivative system, the political party, was employed to solve distinctively Indian issues.

That derivative discourse should be a part of the story of Indian nationalist thought is perhaps unremarkable. Many of the intellectuals who took up the cause had been educated by western methods, or even in Britain itself. Aurobindo Ghose, for example, was sent to Britain at a young age to be educated, only returning to India over twenty years later, having made his way through the British education system, including a stint at King’s College, Cambridge. Even when he turned against Britain, joining the fight for Indian independence and, later, concentrating on a more spiritual existence, the impact of this British upbringing would have remained with him. For Aurobindo and many of his contemporaries, using a western discourse to promote nationalist thought would have come naturally. This was a result of the importance placed on western education by the Indian middle classes, such as the Bengali bhadralok, and the permeation of western culture into Indian life.

Shruti Kapila argues, however, that just because a western model is used, it does not necessarily remain foreign; instead the model is transformed as it is applied to its new problem. Chatterjee highlights this in his discussion of Kedourie’s work. Kedourie claims that ‘Nationalism as an ideology is irrational, narrow, hateful and destructive. It is not an authentic product of any of the non-European civilizations…’ Chatterjee suggests that this is not the case. Just because the form nationalism has taken has at times been destructive in Europe, as in Nazi Germany, it does not mean that all instances of nationalist discourse should be looked upon as negatively. In India European ideas of nationalism were utilised, but then changed by the leaders of the Independence Movement. Divides within Indian nationalist thought, for example the splits over religion and language, serve both to reflect its unique Indian origins and to highlight the fact that this was a model that had been applied elsewhere: political activism regularly resulted in schisms over policy, as in the contemporary women’s suffrage movement in Britain, or the early Chartist movement.

In conclusion, Indian nationalist thought was a derivative discourse. It looked to western models of framing nationalist sentiment and sought to recreate them in India, for example through the publication of reworked histories and myths. However, the term ‘derivative discourse’ seems overly dismissive. The tools utilised by Indian nationalists were the obvious ones for them to choose: many had been educated within a westernised system and, in any case, the effectiveness of political parties and ‘propaganda’ literature had already been proven. They did not simply copy what had gone before; instead they altered it to fit their own purposes. The term also ignores the divides within the nationalist movement, for example between Hindus and Muslims. Indian nationalist thought was derivative, but still complex.

  • Partha Chatterjee, Nationalist Thought and the Colonial World: A Derivative Discourse, Delhi, 1986.
  • Romila Thapar, ‘Imagined Religious Communities? Ancient History and the Modern Search for a Hindu Identity’, Modern Asian Studies, 1989. [JSTOR]
  • Sunil Khilnani, The Idea of India, London, 1998.
  • Ashis Nandy, The Intimate Enemy: Loss and Recovery of Self under Colonialism, Delhi, 1983.
  • Shruti Kapila (ed.), ‘An Intellectual History for India’, special issue, Modern Intellectual History, April 2007. [See introductory essay.]
