In this first attempt at philosophical blogging I will address the recent societal phenomenon of “resilience” as presented by Mark Neocleous in “Resisting Resilience.” I find the article itself to be a rather unremarkable piece of writing: it seems to make no attempt at an argument and instead presents a series of absurd obsessions with the term “resilience.” The author relates the current uses of the term to past uses of terms like “security,” and takes a few jabs at the usefulness of resilience as a social framework merely because it has been embraced by a few military organizations.
The problem with resilience alluded to in the article is that it is a new form of mediocrity, one that encourages a stable, constant state of being. I think the quest to be resilient reflects a collective fear of change and a desire for the feeling of safety that comes from believing nothing will change, and that if something ever does, it will be possible to return quickly to the previous state. Unpredictability is a component of life and change happens, so I don’t think fetishizing the status quo is the best idea. Progress is made by taking risks and looking beyond the current state, not by wasting time trying to plan for every potential disaster and staying constantly prepared to avoid or recover from it.
One of the areas where I have practical knowledge outside of philosophy (sociology -> social philosophy -> philosophy) is information systems. Resilience is part of everything from designing a large-scale system to selecting a hard drive. In information technology, much of what counts as resilience is accomplished through redundancy, mostly because it’s easy to do and relatively inexpensive compared to other ways of recovering from a disaster. A lot can go wrong with technology, starting with the end user (the most likely cause of any disaster) and moving to more external things like power outages, hurricanes, or locusts. Resilience (or redundancy) is a major component of the field, with the overall goals of maintaining data integrity and a certain level of “uptime”. A long time ago redundancy and fault tolerance were buzzwords, roughly equivalent to resilience, but now they are traits assumed to be goals of every system and every developer. This works well for information technology, for machines that can’t adapt on their own (at least not yet). Humans and societies are not machines and can adapt without any prior preparation. While some level of preparation is good, dedicating a lot of resources and time to maintaining the current state seems wasteful and, overall, an inhibition to progress (where progress = moving toward personal or societal improvement).
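To make the redundancy point concrete, here is a minimal sketch of what redundant storage looks like in practice: the same data is written to two places, and reads fall back to the mirror if the primary copy is lost. The paths and function names are my own invention for illustration, not any particular system’s design.

```python
from pathlib import Path

# Hypothetical locations for the primary copy and its mirror.
PRIMARY = Path("/data/primary/records.db")
MIRROR = Path("/backup/mirror/records.db")

def redundant_write(data: bytes) -> None:
    """Write the same bytes to both locations so either copy can serve reads."""
    for target in (PRIMARY, MIRROR):
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)

def resilient_read() -> bytes:
    """Try the primary first; fall back to the mirror if the primary is unavailable."""
    try:
        return PRIMARY.read_bytes()
    except OSError:
        return MIRROR.read_bytes()
```

Nothing here adapts or improves anything; the whole effort goes into being able to get back to exactly where you were, which is the point I am making about resilience more broadly.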