> I like to think (it has to be!)
> of a cybernetic ecology
> where we are free of our labors
> and joined back to nature,
> returned to our mammal brothers and sisters,
> and all watched over by machines of loving grace.
Brautigan’s poem inspires and terrifies me at the same time. It’s a reminder of how creepy a world full of devices that blur the line between life and objects could be, but there’s also something appealing about connecting more closely to the things we build. Far more insightful people than me have explored these issues, from Mary Shelley to Philip K. Dick, but the aspect that has fascinated me most is how computers understand us.
We live in a world where our machines are wonderful at showing us near-photorealistic scenes in real time, and can even talk to us in convincing voices. Until recently, though, they’ve not been able to make sense of images or audio given to them as inputs. We’ve been able to synthesize voices for decades, but speech recognition has only really started working well in the last few years. Computers have been like Crocodile Sales Reps, with enormous mouths and tiny ears, great at talking but terrible at listening. That makes them immensely frustrating to deal with, since they seem to have no ability to do what we mean. Instead we have to spend a lot of time painstakingly communicating our needs in a form that makes sense to them, even if it is unnatural for us.
This process started with toggling switches on a control panel, then moved through punch cards, teletypes, CRT terminals, mouse-driven GUIs, swiping on a touch screen, and most recently basic voice interfaces. Each of these steps was a big advance, but compared to how we communicate with other people, or even our pets, they still feel clumsy.