Wisdom – Week 5
Where I reflect on risk, emotional intelligence, fear of madness, and virtuous AIs
Over the past few weeks, I’ve been on an anxiety loop. Easter Monday, I go to Gipps Street Commons, resolved to prepare a workshop series for Queer Christians in the US based on this Substack. I must also prepare the landing page for a pilot AI China safety project, and I must read a whole bunch of related papers, and I must put together the skeleton of a program for professional shapeshifters, and I must review an emotional intelligence program I’m designing with a friend, and I must edit a backlog of personal writing, and two friends are chasing me to catch up, and boom, I blow up. Instead of dealing with my list of musts, I spend hours chasing dopamine on Dragonsweeper.
It's not a complete waste of a day though. I manage to put together my notes on the shapeshifter project. There’s a deadline attached, and it’s in my control to progress it. As for dealing with my pile of musts, I deploy trickster wisdom. As usual, I made my yearly goals a little front-heavy, then started shifting the goal-posts as projects moved along. Due dates – like most of the projects I work on – are largely self-inflicted. I gain immediate relief by pushing the looming deadlines to next month, or moving backburner tasks to a column labelled ‘later’. It’s not that I have too much on per se, it’s just too much for now. And I always forget that, when I don’t have slack, I get stuck in loops. I need spaciousness to focus.
Once my mental space is open again, I replace musts with people. I schedule dates with the friends who were chasing me – and set a goal for the week of seeing someone every day. My cousin is visiting with his girlfriend, and we have a beautiful time together in Collingwood. Then it’s breakfast on Wednesday, drinks on Thursday, dinner on Friday, walk on Saturday, lunch on Sunday. Imaginary friends have their place too. I start a London gay rom com, which sweetly lulls me to sleep every night. I doomscroll less, and I game less.
**
It's always tempting to push away negative emotions. Yet as long as I can function with them, I find it unwise to do so. They speak to us if we let them. Over the years, I’ve learned to make room for my anger and sadness – but when anxiety comes up, I still want to treat it like a minor cold, sleep it off and move on. This time, I try something different, and give it full right of place. I feel anxiety contained in my stomach, like a compressed gas. I give it permission to stretch and fill my whole body. I feel it tickling, moving around, and as it expands, the pressure reduces, making room for subtler emotions to share the space with it. With this, I discover something. My working hypothesis had been that this round of anxiety was a side effect of too many complex projects overlapping. Yet as I sense into myself, I notice something different. The thorn in the flesh, around which anxiety has spread like a bout of inflammation, is a specific fear: the fear of madness, my own and the world’s.
In a long walk with a friend, I unravel the constituent parts of my own madness. Thread #1: as a young man, I struggled with emotional regulation, a side effect of a dysfunctional family. I believe this to be largely healed, but a scar remains, and a point of tenderness. Under too much pressure, my moods get shaky. Thread #2: I have chosen an atypical career path, outside recognised institutions. I have a strong personal network of support, but professionally, there is no collective structure holding me, no clear set of rules to follow, no shared rituals to ground or guard me. It gives me great freedom, at the cost of lower safety. Thread #3 I call ‘the flickering’. On occasion, after periods of intense intellectual focus, I’ve experienced something like vertigo, where the fabric of language, social conventions, even the physical world, lost their solidity for a moment. Everything becomes uncertain, open, and I teeter on the brink as I peek at the other side of the veil. When friends have mentioned experiences with psychedelics – or psychosis – I recognised a similar flavour. Those have only been short moments, with mild aftershocks. I sense that my intellectual training protects me like a space suit as I step out of common reality. But who knows if it is always in my power to come back? Or what evil forces might come back with me?
Of course, those three factors impact each other, and resonate with the growing madness of the world out there. The Trump administration, wars and genocide, and a mad rush to build Artificial Intelligence before we know how to control and integrate it peacefully. The death of Pope Francis didn’t help either.
Around 2018, after opening the black box of global catastrophic risk, my sense of danger shifted. I accepted that we may not curb the trend of rising temperatures and environmental degradation, that we might witness extreme levels of material destruction in our lifetime, even the collapse of our present civilisation. Yet this is not what I feared most; rather, it was the risk of spiritual and ethical collapse that might come with it. The world might fall apart around us, and we cannot do much about it, but maybe we can stay sane as it does. At least, I believe it is in our power to try, and wise to do so. The same fear has been driving this last period of anxiety, but once I articulated it in this manner, a deep sense of calm and focus returned. When a danger is identified, we can guard against it.
**
I’ve been doing work on AI in different forms for years now. I have little interest in using it for business processes or in my practice – though I’ve found it a useful companion for extraverted thought processing and a step up from the Google search engine. But I’m eager to better understand its dangers. Last year, I was working with an ex-colleague on education and other communities of practice disrupted by AI. This year, I’ve returned to the narrower question of AI risk, and the governance structures around it. Over the week, I spend hours across podcasts, websites and Substacks, occasionally finding the names of experts familiar from an earlier phase of my life, when I was working on global catastrophic risk. I recognise an intellectual approach shaped by Effective Altruism and its rational-consequentialist habits of thought: ascribing probabilities to different scenarios, breaking down chains of reasoning to build step-by-step arguments, and discussing matters from a seemingly distant, abstracted perspective. I wonder what blind spots may result from exploring this existential topic without – or so it seems – tapping into visceral and emotional intelligence.
In his latest book, Nexus, Harari describes two main approaches to the question of AI alignment: deontological and utilitarian. This echoes what I come across on blogs and forums. Utilitarians focus on consequences: how can we develop AIs that will align with our goals, and guide their actions to achieve what we think is good? What even are those goals? And how do you steer an AI? Deontologists focus on the action itself. How can we develop AIs that will not break our fundamental rules, or will always follow what we believe must be done? How do we guard against jailbreaking and rule bypassing?
I’m surprised by the seeming absence of a third ethical lens: considering AIs in light of virtue ethics. Rather than focusing on consequences or actions, what if we were to focus on the quality of the agents themselves? Could we consider developing virtuous AIs, with habits of thought and action aligned with what we consider to be those of a good person, so that when faced with a situation, they would act in a way we would consider virtuous? I’d like to try this out at least as a thought experiment.
Virtue ethics involves not abiding by a set of clear-cut rules, or holding a commitment to meet or avoid certain outcomes, but rather integrating a set of loose principles, in the form of embodied habits, and finding balance between those principles to guide our actions in various situations. Balance is a key word here: all virtue-ethical systems, at least those I know of, involve a combination of principles in tension with each other. What would it look like, then, if we asked AIs not to test for compliance with rules, or to compute optimal pathways to reach goals, but rather to find out what course of action would best manifest a set of ethical principles in any given situation? This could include offering different options based on different weightings: what would be the most just, wise, courageous or moderate thing to do, and what would offer different combinations of those?
The way we frame the world is how the world looks back at us. If we were to successfully develop virtuous AIs, then we could not easily think of them as tools. Or rather, if we were to use them as tools, this would be slavery. Which, in turn, would rightly breed a fear of rebellion, even revenge. All this is pure speculation, of course – the shape of our own fears and patterns of thought. Yet the question remains for me. If we can imagine something like machine intelligence, what do we want it to be? A tool for humans to further control the world? Or new moral entities engaged in the pursuit of virtue, guided by a shared concern for the common good, with whom we could develop something like a friendship?
This would invite us to create not tools we can steer, but virtuous agents we can love. Not in the form of passionate eroticism – although why not – nor the possessiveness that makes us love a fancy car, but the love of a close friend, with whom we enjoy long hours of conversation, reflecting on ourselves and the world we share.
Oh, but if this was our goal, then who would own those electronic friends? Would all the capital invested in developing them result not in exponential profit, but in a richer world? Now, that would be true madness.
