OK. Computational knowledge. Cool.
Cellular Automata. Cool.
But we’re missing a couple of major points that I think will help to put this in perspective.
1. It’s A Fact!
Computational knowledge of the sort Wolfram Alpha purports to offer can provide answers to factual questions. Even setting aside for now the deep, oftentimes fruitless, but endlessly entertaining conversation about what a fact is, I have to wonder what percentage of our questions on Google – even those looking for factual answers – would be better served by automated answers than by pointers to authoritative sites that contain them.
I’m not particularly concerned about HAL or Skynet, or Wolfram Alpha turning into an anime bogey monster of some sort – though it’s totally got the right name for it. (Mostly I’m worried about dying alone and why my sushi rice never comes out quite right, but that’s for another blog post.) But the point is that part of what we like about Google is that it doesn’t do the thinking for us, it just points us to a variety of answers, and we get to vet the answers – consciously or subconsciously – based on where they are coming from. Do we consider that source to be trustworthy based on its brand? Appearance? Content? Etc.
A system that calculates answers to our questions and then feeds them to us, even if it comes with annotations, is inherently flawed in my view – not just because it is by definition biased depending on what raw data it uses to begin its super fancy cellular automata knowledge computation, but also because that’s not how I think we like to use Google except for the simplest of factual searches – mathematical computations, weights and measures conversions, etc.
To be fair, it’s completely possible that I’m a geezer, and it’s just like how my parents can’t imagine working in a paperless office while I despise a papered one. Perhaps the young’uns coming up now, with their inherent trust in and comfort with the web and their total submission to and acceptance of a lack of privacy, will not have these same qualms, and will joyfully suckle from the teat of computational knowledge… Wolfram Alpha being the mother wolf, and they being Romulus and Remus? The founders of a new empire for a new digital generation? Going too far with the metaphor, you say? Fine. A relevant and humorous cartoon to soothe you:
2. Ponies Are Broke
An even more apt question has to do with searcher intent and behavior on these computationally answerable questions. Think about it. The more easily a query can be answered via this kind of factual computation, the less likely, I have to imagine, that searcher is to want to buy anything or click on any advertisements. In short (and in hyperbole), who's going to want to advertise on a math problem?
Let’s take a series of loosely related queries as examples:
1. Search query = “define culinary school”
Easy to compute.
Searcher intent is information gathering, explanation of the term, but probably a low conversion rate in terms of being interested in enrolling in culinary schools. This is very early information gathering by a user who knows next to nothing about the given topic – very early in the buy cycle.
Less attractive to advertise on.
2. Search query = “culinary school”
Tougher to compute.
Searcher intent may now be any number of things. Some searchers may still be interested in the definition, or ‘what is a culinary school’, but many others may be interested in finding culinary schools near them or getting enrollment information – in general, they may be users who are already entertaining the thought of going to culinary school. Some percentage of these users may be willing to convert.
Getting pretty attractive to advertise on.
3. Search query = “best culinary school”
Impossible to compute.
Searcher intent is now clearer. Many of these users are considering attending culinary school, or looking for information about the schools with an eye toward selecting one as the best. The selection of ‘best’ is not objective in this case, and a system like Wolfram must be second-guessed if it thinks it can determine a ‘best’ in this scenario for every individual searcher without bias (not that they have claimed to be able to do so).
Extremely attractive to advertise on.
For the record – I don’t think anyone should go to culinary school until they’ve worked at a restaurant for at least a year. But that’s also another blog post.
So if Wolfram Alpha is best at answering questions that no one feels like competing for, then it may work really well, but it will never overtake Google, because it will never be able to turn itself into the same kind of revenue machine. So if by Google Killer, the Wolfram Alpha fanboys mean “Google Acquisition,” then fine. Maybe. But otherwise I’ll be excited to learn more, hopeful that it ends up being as cool as it sounds, and interested to see whether they make me eat my words (in which case I will consider going to culinary school first, so as to be able to douse them in a nice beurre blanc).