I’ve been thinking a lot about sentiment analysis recently, for a number of reasons:
Datasift (a new product by Tweetmeme, currently in rather exclusive alpha) offers sentiment analysis as part of their streaming filters for Twitter.
Valley-based Fflick are developing their own sentiment engine via machine learning algorithms. The current manifestation of this is a movie review site, but they will be pursuing other verticals – no doubt once the tech has improved and they’ve got some $$s.
Qwiki, which I wrote about yesterday, appears to be on the artificial intelligence trail too. The task of establishing whether content is relevant/important/canonical is incredibly daunting to automate.
Finally [prompting this post] this morning I see a product launched by Lewis PR: Chatterscope monitors brand mentions and performs sentiment analysis – a free alternative to Radian6 and Alterian, perhaps? Monitoring and alert functionality is obviously useful, but sentiment analysis – that’s the marketing holy grail, and I’ve always been skeptical of it.
Chatterscope’s sentiment engine appears to be little more than keyword proximity matched against a list. I’m not going to berate them, because if a PR agency came to me with this brief and a few thousand pounds, they’d end up with something very similar. It’s a very hard problem to solve – personally I’m not sure I’d attempt to solve it.
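For illustration, a keyword-proximity check of the kind I suspect is at work might look something like this. To be clear, the word lists, window size, and scoring below are all my own invention, purely to show how crude the approach is:

```python
# Toy keyword-proximity sentiment check. Everything here (word lists,
# window size, scoring) is invented for illustration -- this is not
# Chatterscope's actual implementation.
POSITIVE = {"great", "love", "awesome", "brilliant"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def keyword_sentiment(text, brand):
    """Sum +1/-1 for positive/negative words within five words of the
    brand mention; 0 if the brand isn't mentioned at all."""
    words = text.lower().split()
    if brand.lower() not in words:
        return 0
    idx = words.index(brand.lower())
    window = words[max(0, idx - 5): idx + 6]  # five words either side
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in window)

print(keyword_sentiment("I love my new iPhone", "iphone"))            # 1
print(keyword_sentiment("Great big scratch on my iPhone", "iphone"))  # 1 -- wrongly positive
```

The second call shows the obvious failure: “great” appearing near the brand does not mean the sentiment is positive.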
Last Christmas I wrote a simple app (called the XmasFactor) that identified who [on Twitter] was in a Christmassy mood. It was very obvious, very quickly, that it was going to be hugely flawed. Just mentioning “jingle bells” does not imply sentiment – “Jesus, I wish they would stop singing jingle bells”. It was only a bit of fun, but that won’t stop people calling you out when something doesn’t work.
The complexity of language is such that words alone do not always indicate sentiment accurately. Before you can score anything, you need to parse a sentence to infer meaning from the words, and the margin of error is considerable. Consider –
- Grammar: “Far from the best phone on the market”
- Sarcasm: “No reception in here. Great, thanks Orange”
- Context: “Great big scratch on my iPhone”
- Slang: “The new iPhone is bad ass”
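To make the failure concrete, here is a bag-of-words polarity lookup (with an invented mini-lexicon) run over those four examples; it gets every one of them wrong:

```python
# Invented mini-lexicon: a bag-of-words polarity lookup applied to the
# four examples above. It scores every one of them wrong.
LEXICON = {"best": 1, "great": 1, "thanks": 1, "love": 1,
           "bad": -1, "awful": -1, "hate": -1}

def bag_of_words_score(text):
    """Sum the polarity of every known word, ignoring word order entirely."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())

print(bag_of_words_score("Far from the best phone on the market"))       # 1, but the tweet is negative
print(bag_of_words_score("No reception in here. Great, thanks Orange"))  # 2, but it's sarcasm
print(bag_of_words_score("Great big scratch on my iPhone"))              # 1, but it's a complaint
print(bag_of_words_score("The new iPhone is bad ass"))                   # -1, but it's praise
```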
It’s clear that these problems require human-like thinking – the kind of thinking that comes from learning. Continual learning, because human culture itself is a moving target. Fflick have people with PhDs in artificial intelligence developing learning machines. I consider myself a good developer, but this kind of stuff is so far out of my league, it makes my eyes water.
There are numerous products and software libraries that provide the natural language processing required for this kind of analysis, so it’s not entirely beyond the reach of the rest of us. However, the fact that a company like Fflick is developing its own technology suggests these tools have a long way to go, and that there’s a lot up for grabs.
Companies tend to be shady about what third party software they use, because [I’d imagine] this knowledge may expose weaknesses in their products, as well as making life easier for their competitors. It appears common for firms to say “we’ve developed our own algorithm” – the allure of the ‘trade secret’.
Nick Halstead of Tweetmeme [when asked at DevNest] wouldn’t say which sentiment engine is employed in Datasift, but alluded to some third party software being used (anecdotal, sorry, but I was there).
I am told that Alterian developed their own engine, and employ some human-power, i.e. some manual analysis. Radian6’s algorithm was apparently built in-house, whatever that means. And their dashboard seems to provide some sentiment tuning using a human-powered thumbs up/down approach. (I’ve not used either product).
Fflick also use some human-power via Amazon’s Mechanical Turk, although this is only to teach the machines, not to process the live data. See this video for some great insights into Fflick’s technology.
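I’m not privy to Fflick’s actual pipeline, but the principle of teaching a machine from human-labelled examples can be sketched with a toy naive Bayes classifier. The training data and labels below are invented; a real system would use thousands of Turk-labelled examples and far better features:

```python
import math
from collections import Counter, defaultdict

# Toy naive Bayes sentiment classifier trained on human-labelled examples
# -- the same principle as labelling via Mechanical Turk, but on a tiny
# invented dataset. Not Fflick's actual technology.
labelled = [
    ("loved every minute of this film", "pos"),
    ("brilliant acting and a great script", "pos"),
    ("what a fantastic movie", "pos"),
    ("total waste of two hours", "neg"),
    ("the plot was dull and the acting wooden", "neg"),
    ("awful film, do not bother", "neg"),
]

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

word_counts = defaultdict(Counter)  # per-label word frequencies
label_counts = Counter()            # how many examples per label
for text, label in labelled:
    label_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the label maximising log prior + log likelihood,
    with add-one smoothing for unseen words."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("a brilliant and fantastic film"))  # pos
print(classify("dull plot, waste of time"))        # neg
```

The point of the sketch is the division of labour: humans supply judgements, the machine generalises from them – which is exactly why the Turk work happens at training time, not on the live data.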
Good luck to anyone working in this field. Personally, I’m more interested in harnessing people power. Social networks give us this in abundance, and until artificial intelligence is truly accessible to me as a developer, I’m steering clear of it. Moreover, I’m not writing any cheques that my code can’t cash.