I recently read The Mountain in the Sea, by Ray Nayler, and it has left me thinking in the way only a good book can. This is a short post to discuss and process my thoughts.
Plot Synopsis (avoiding major spoilers)
Dr. Ha Nguyen is an expert on octopuses and their potential to develop a society. She is hired to work in secret on an island where this may have happened: there are signs of an intelligent, conscious species and society. She works with Evrim, the first (and potentially last) true android. Altantsetseg rounds out the team as the security officer, but she is much more than just hired muscle.
Over the course of 450 pages, we watch the team discover more about the octopuses' society, grapple with the alienness of their intelligence, and, in the process, come to terms with the unknowability of another mind: human or not. There's corporate espionage, a fascinating story of a hacker trying to map artificial minds, the enigmatic leader of the company who seems almost mythical at first but is ultimately just another human, an automated fishing vessel that uses human captives to pillage the barren seas, and a purposeful blurring of the lines between human, octopus, and artificial intelligence. At the core is a message about how we, as humans and as a society, do harm through indifference.
Writing
One aspect I love about this book is that it treats you as intelligent. There is a lot of depth, and despite the length of the book, there are no wasted words. There are several significant events that happen in the span of a sentence or two, so if you’re just skimming, you can easily miss important details. In fact, far from skimming, I found myself frequently pausing to consider ideas that were brought up.
Consciousness: the line between machine and being
In some ways, the novel is a treatise on consciousness in its various forms. What is consciousness? I find that a very human question to ask, and I don't really trust anyone who claims to have the answer. Nayler poses this question to readers in many forms throughout the book. We are presented with robotic minds that span from rudimentary software to beings that seemingly transcend humanity, with many varieties in between. Are these truly conscious, thinking machines? If not, what makes them different from us? If so, where do we draw the line between machine and conscious being?
On the flip side, we are confronted by the vast scale of human un-thinking, a term I just coined for the personal and societal indifference toward, or willful ignoring of, events, conditions, and ideas. The world of The Mountain in the Sea is bleak and unforgiving. The seas have been over-fished by automated vessels, and no one seems to care. Well, they care in a hypothetical sense, but no one cares enough to act. These very vessels, controlled by artificial minds that optimize for efficiency, take human laborers as captives, as slaves, because people don't break as easily as robots when exposed to saltwater. Again, a story that most have heard but that no one acts to prevent.
While taken to an extreme, the parallels to our own society are unmistakable. Do we not actively destroy our environment through consumption? Do we not demand efficiency at the cost of human suffering, content as long as it remains out of sight? We know this is unsustainable, but we do not act: arguably because it is beyond our programming. A common critique of machine learning models is that they cannot generalize or think creatively. An ML model is trained on certain data, finds patterns within that data, and makes “predictions” that are ultimately just extrapolations of the data it has seen. But are humans different? We are trained to operate in a society. We are able to see the harm we cause, and to predict the problems that will arise, but the solutions are outside our training data.
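To make that analogy a bit more concrete, here is a tiny, purely illustrative sketch in Python (the function, ranges, and numbers are mine, not anything from the book or a real system): a simple model fit on a narrow slice of data happily extends the pattern it learned into territory where that pattern no longer holds.

```python
# Illustrative only: a model's "predictions" are extrapolations of its training data.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": noisy samples of a curved function, but only from x in [0, 1],
# where the curve happens to look almost like a straight line.
x_train = np.linspace(0, 1, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.02, size=x_train.shape)

# Fit a simple linear model to that narrow slice of experience.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Inside the training range, the model looks perfectly competent...
print("x=0.5  predicted:", slope * 0.5 + intercept, " actual:", np.sin(0.5))

# ...but outside it, the model just keeps extending the same straight line,
# while the real function bends away entirely.
print("x=4.0  predicted:", slope * 4.0 + intercept, " actual:", np.sin(4.0))
```

The point isn't that linear regression is a stand-in for human cognition; it's only that prediction, in this sense, means extending a pattern learned from limited experience.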
Having painted a rather harsh picture of humanity, I’ll walk back from that a bit. I do believe that we are capable of creative thought and invention, of going beyond our “programming.” However, there is a tendency, especially as a group, to embrace un-thinking. It’s simply easier to avoid the hard questions and follow conventional wisdom, but we must fight against this tendency in order to progress.
The connection between environment and thought
So far I haven't even really touched on the octopus! What really struck me about the idea of a conscious octopus is how different their environment is from ours. We live on land, with two limbs for movement and two for manipulating objects and tools. We have a centralized nervous system, with a brain that reads sensory inputs from the limbs and issues commands. We communicate via sound, a spoken language, which we use extensively due to our highly social existence. We interact with other humans constantly (unless we choose not to).

None of this is true for the octopus. It lives almost entirely underwater, with a few exceptions where an octopus ventures onto land to reach a tide pool. It navigates this underwater world with eight limbs, each of which can be used interchangeably for movement or manipulation. Its nervous system is far more decentralized, with more neurons in its arms than in its head. Sound is likely not a primary sense for them, and there isn't evidence of communication via sound. They are highly solitary creatures with short lifespans of a few years, and it is entirely possible for an octopus to live most of its life without encountering another octopus. Even when mating, there is no long-term association; in fact, the female might eat the male afterward!
Ok, so an octopus and a human are very different. Why am I so fixated on this? One idea the book explores is the connection between how one experiences and interacts with the world and how one thinks about it. It reminds me of Ludwig Wittgenstein's famous remark in 'Philosophical Investigations' (1953): 'If a lion could talk, we could not understand him.' The central idea is this: we have no shared context with which to make sense of what the other is saying. Without some basic ideas and concepts that we both understand, where would communication even begin?
It is very common in sci-fi to have some kind of translator that allows for easy communication between alien species. This is convenient from a story perspective, but anyone who has ever tried translating knows it's never that straightforward. Even going from one human language to another, we frequently encounter words that do not exist in both. These words are often tied to culturally rooted concepts, to experiences that the two native speakers do not share. Now imagine that, but with different species. When we have such different lives and experiences, how could we even begin to communicate with the lion? The octopus? Going further, a truly extraterrestrial species may have even less in common with us. This is a concept I think is handled really well by Arrival, the movie based on Ted Chiang's short story 'Story of Your Life'. Without writing another essay, I'll just say that it presents an alien species that doesn't think, or communicate, linearly, posing a seemingly insurmountable barrier to communication. It also touches on the idea that language itself may shape the way we think.
Concluding thoughts
This has been a rather meandering post. I don’t have a particular concluding message, but it felt odd to stop in the middle of a train of thought. At the very least, writing this has certainly helped me process my thoughts on the novel. What I’ve said here really only scratches the surface, and I do love when a book gets me thinking about so many different things.