Chris Messina’s post on the debate over machine-readable tweets is a densely layered delight and truly required reading for anyone curious about the future of Twitter and social search. In a nutshell, the guy who invented Twitter hashtags is arguing that an attempt to come up with a standardized, machine-readable lexicon for Twitter isn’t such a hot idea. “My greatest concern is that there won’t be enough people who can ‘speak’ the ‘tweaked’ syntax, leading to a lot of effort spent building parsers that will be data-starved,” he writes.
I think he’s right and wrong at the same time.
Turning Twitter into an algebra problem would be a huge mistake. As someone who uses Twitter to communicate with friends rather than share info (I do enough of that at work), I can’t see myself bothering to learn a machine-friendly lexicon. It would be madness stuffed with discomfort.
But it could be a fantastic basis for some other, new sort of social network. Imagine a sort of amalgamation of Digg, Twitter and Facebook that had tools in place to help ordinary users translate whatever was on their minds into machine speak. Now imagine all of those standardized messages correlating with one another in a data-cloud. It could provide a unique window into our collective unconscious. We’re talking noosphere. Or it could just be another really cool way to do market research. Either way, I’m perilously close to having a nerd black-out from the excitement. Imagine the possibilities. Dream with me.
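To make the idea slightly more concrete: here’s a minimal sketch of what that behind-the-scenes translation might look like, leaning only on the microsyntax people already use on Twitter (hashtags and @mentions). Everything here is hypothetical illustration, not any real service’s API:

```python
import json
import re

def tweet_to_machine_readable(tweet: str) -> dict:
    """Hypothetical sketch: turn a free-text update into a structured,
    machine-readable message by pulling out the hashtags and @mentions
    the user already typed, so they never have to learn a new lexicon."""
    hashtags = re.findall(r"#(\w+)", tweet)
    mentions = re.findall(r"@(\w+)", tweet)
    # Strip the extracted tokens, leaving the plain-language remainder.
    text = re.sub(r"[#@]\w+", "", tweet)
    text = re.sub(r"\s+", " ", text).strip()
    return {"text": text, "hashtags": hashtags, "mentions": mentions}

message = tweet_to_machine_readable(
    "Reading @chrismessina on #microsyntax over coffee"
)
print(json.dumps(message))
```

The point of the sketch is that the user types a normal sentence and the network, not the human, emits the standardized message that the data-cloud could then correlate.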
Does using machine-babble on Twitter have any hidden upsides I’m not considering? Would you use a network that translated your updates into machine-readable messages — provided you never had to learn the language? Am I off my rocker on this one?
Image credit, Elenathewise, via iStock