You have until March 4th 2013 to watch this excellent BBC4 documentary called “Google and the World Brain”.
If you live on Earth, I recommend that you watch it. (Although you probably need to be in the UK to watch it on iPlayer.)
It starts with the problems of copyright in the digital age but goes much further, prompting you to think about everything from corporate responsibility to AI and the direction of human civilisation.
There’s too much to cover, but here’s a quick write-up with links to some of my favourite moments.
On the surface the documentary is about Google Books. More specifically it’s about their ambition to digitize every book ever written. The film discusses the project’s legal controversies and interviews numerous disgruntled copyright complainants. But this storyline is really just a good excuse to make a film about Google, its goals and its motives.
The wider context of the film is right there in the title. It opens with H G Wells describing (in 1937) the feasibility of an efficient index to all human knowledge, which he sums up as:
“A complete planetary memory for all mankind” – A World Brain.
76 years later, that sounds a lot like Google.
Cyber-utopian beliefs
The key word in that opening quote is “for” all mankind. Despite the surveillance society that might be required for such an invention, Wells genuinely believed this World Brain would be a force for good and bring about world peace. This is presented as a parallel to Google’s mission to organise the world’s information and make it accessible to everyone, for free.
According to a perfectly on-brand, brightly-coloured-jumper-wearing Google employee at 5:47, that fundamental mission is:
“Empowering everyone in this world with all the information they need.”
Nobody would deny that they’re achieving that, but it’s comforting to know they’re doing it because they love us and want us to be happy.
Some of the more balanced arguments in the documentary come from Evgeny Morozov. At 37:10 he alludes to a potential conflict between philanthropic vision and the reality of delivering it:
“… I do think that they genuinely believe in that mission. But they also happen to believe that nothing will get lost and no-one will get harmed if it’s Google who implement that mission.”
I like this view because it’s realistic without being cynical. Suppose we agree that Google’s motives are genuinely good, and they believe they can make money AND make the world a better place. Does that guarantee that they’re actually capable of delivering the future they’ve prescribed for us without disaster? And can we just trust them to do it without questioning them?
Choice and privacy
The film touches on some of the more contemporary issues around privacy, choice, and the dangers of a monopoly on information. But this isn’t really about copyright battles any more. Book scanning is just Google topping up its data from the pre-data age. Amassing all human knowledge requires more than just the written word, and the breaking down of barriers between us and information works both ways (cue clip of Google Glass).
Author William Gibson points out the seemingly obvious at 35:34: that regardless of how much it benefits us, “Google is not ours”. I suppose we know that; the question is whether it’s a problem or not, and that seems to depend on whether you’re a fearful cynic or not.
Kevin Kelly (founding executive editor of Wired) apparently has no patience for our stone-age fear of the New. He describes copyright as an archaic, industrial-age artifact and seems quick to defend technology’s right to do what it wants in the name of progress. At 53:35 he boldly states:
“If people find that the privacy policies of a particular technology are not to their liking, they should unplug it. They should retreat from the Internet. They should cut off their phone lines and they should go up and hide in a mountain. They have that choice.”
If this is a wry comment about the inevitability of progress then it might be profound. Except I sense he’s serious about the ultimatum. He implies that big tech companies will (and possibly should) push humankind forward however they see fit, and if we don’t like the direction they take then we can always opt out of civilisation.
More input
It’s no secret that Google are interested in AI, machine learning and so on, but if you had any doubt as to where this sits in their overall plan, Kevin Kelly gives us a secondhand quote at 57:30:
“I did talk to Larry Page when Google first started, because I was really perplexed about why would anybody make a new search engine […] And he said – Oh, it’s not to make a search engine, it’s to make an AI”
So there you have it – now it’s a thirdhand quote. Regardless of its accuracy, who better to make this a reality than Ray Kurzweil, who appears in the film (his on-screen caption is possibly out of date, as he’s now Director of Engineering at Google).
At 57:53 Kurzweil points out that books are probably more valuable [than all the other stuff on the Internet] in developing intelligent systems, because we’re more discerning about their content. He adds that parsing natural language is an important device in achieving that. I’m paraphrasing of course, but he leaves very little ambiguity around the goal of Google’s “quest to digitize all knowledge” – that it is indeed to “develop true AI”.
Human-machine civilisation
If you try to ignore the dystopian sci-fi undertones, does knowing the goal change the mission or the motive? I’m not sure there’s a rational reason it should, but the ominous background music in the film would like you to reconsider.
Listening to Kurzweil you’d be forgiven for thinking his ideas are as much ambition as they are prophecy, but – and call me naive – he’s a human being too. I’m sure Ray, Larry and Sergey don’t want to be enslaved as semi-conscious flesh batteries any more than we do. But again the documentary hints at those big questions – Are we safe in Google’s hands? Will AI make the world a better place? And above all – Do we have any say in the critical part we play?
The question on the table at 1:21:16 is whether large Internet companies are making our lives easier, or gaining power over us. Of course they’re doing both, as Evgeny Morozov sensibly points out –
“If they were not making our lives easier then no-one would be using their services. This is the complicated question that we’ll have to face down the road. […] The question is what the trade-offs will be.”
Perhaps there is no trade-off. Perhaps a machine’s need for human data and our need for computing power makes this a perfect symbiosis. Perhaps. I mean, what could go wrong?
Utopia or Dystopia?
Towards the end of the documentary the balanced arguments are left behind in favour of a pessimistic tone and dramatic effect. We fast-forward from H G Wells’ utopian visions to the end of his life, when he has become utterly frustrated with mankind’s failures and writes his final book, ‘Mind at the End of Its Tether’.
Wells believed that we could achieve utopia through technology, but only if scientists worked together in the right way. It’s not hard to imagine why, in 1945, one might be having doubts about that ideology.
Just in case you felt the film was sitting on the fence at this point, it ends with an ominous quote from Wells:
“There is no way out. Or round. Or through.”
Update: It seems the film is independent and has a life beyond the BBC. http://www.worldbrainthefilm.com/