Friday 19 March 2010

On the dangers of Facebook

A correspondent writes to ask if linguistics has anything to offer in relation to the recent Facebook paedophile scandal and all the current discussion about panic buttons.

Of course it does. Indeed, the point has already come up on this blog, when I was talking about internet applications a few years ago (March 2007). In 2003 I developed an application called Chatsafe, using a technology I call a sense engine, which carried out a linguistic analysis of a conversation in order to identify dangerous or sensitive content. It worked fine. It processed a conversation in real time, and as dangerous content built up it would warn the user (or the user's parents) that there was a potential problem.

The system needed a lot of testing, using real paedophile conversations, and as it's virtually impossible to get this kind of research done safely without clearance, I approached the Home Office. They said they'd get back to me, but didn't. I approached a UK university department that specializes in such things and had a meeting with one of the researchers. No subsequent interest. I sent the idea to a mobile phone company after a scandal there. No response. A couple of years ago I sent it to a US child protection conference. Never heard anything further. I had hoped that someone somewhere would be following up the leads, but the Facebook disaster suggests not. I'd send it to Facebook now if I could work out how, but they hide their senior management contact procedure very well.
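To give a flavour of the general idea - this is not Chatsafe's actual code, and the phrases, weights, and threshold below are invented purely for illustration - here is a minimal sketch of how a lexical system of this kind might accumulate a risk score as a conversation unfolds:

```python
# A minimal sketch of a lexical "sense engine" of the general kind described
# above. The weighted lexicon and threshold are invented for illustration;
# a real system would derive them from analysis of genuine conversations,
# and would tokenise properly rather than matching substrings.

RISK_LEXICON = {
    "secret": 1.0,
    "alone": 1.5,
    "webcam": 2.0,
    "meet": 2.5,
    "don't tell": 3.0,
}

ALERT_THRESHOLD = 8.0


class ChatMonitor:
    """Accumulates a risk score as a conversation unfolds in real time."""

    def __init__(self, threshold=ALERT_THRESHOLD):
        self.threshold = threshold
        self.score = 0.0

    def process_message(self, text):
        """Score one incoming message; return True if a warning is due."""
        lowered = text.lower()
        for phrase, weight in RISK_LEXICON.items():
            if phrase in lowered:
                self.score += weight
        return self.score >= self.threshold


monitor = ChatMonitor()
for message in ["hi there", "are you alone?", "keep it a secret, ok?",
                "shall we meet? don't tell your mum"]:
    if monitor.process_message(message):
        print("Warning: this conversation shows a risky cumulative pattern")
```

The essential point is that the warning comes from the cumulative pattern of the conversation, not from any single message.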

It's all very well offering a panic button, but how do you activate it? It's not enough to leave it up to the individual recipient, who may not be aware of a problem until it's too late. One needs an independent method. And as it's impossible for all conversations to be checked manually, it has to be done automatically.

Maybe that lass would be alive now if a system like Chatsafe had been used. That's why I'm writing this post. Maybe someone out there knows how to alert the social networking agencies to the relevance of a linguistic approach. It hasn't been for want of trying on my part; why the organizations most closely involved in this awful subject are ignoring the potential that linguistics has to offer is quite beyond me.

16 comments:

Annie said...

Couldn't any of these people be useful to contact? - http://www.facebook.com/press/info.php?execbios
It is such a shame that such an accurate and useful application should get no response. You must have worked so hard to develop it! It is really a great shame that people out there remain indifferent.

Mohammed UK said...

My first comment, and sadly a negative one... I guess there is the question of whether there's "gold in them there hills".

Nobody wants such crimes, but the businesses you list are likely to be more worried about the bottom line. Unfortunate.

As for government, however - they should be following up. Wouldn't there be a use for such tech in counter-terrorism operations and other crime-related traffic?

Not just Facebook: all text-based networking could be included.

DC said...

Yes, the same kind of linguistic filtering technology could be used to screen any kind of sensitive domain - as has already happened in the advertising world, where Sitescreen now offers brand protection to advertisers wanting to block their ads from appearing on sites with objectionable content. The basic research has to be done first, of course, which means analysing the content of real conversations to determine their discourse characteristics. That is where the research problem arises: obtaining genuine examples of such material requires cooperation from the relevant agencies.
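To indicate, purely schematically, what that basic research step looks like (the toy corpora below are invented, and the real analysis involves far more than word counts), one compares the vocabulary of genuine target conversations against ordinary ones to find what is characteristic of the domain:

```python
# A hypothetical illustration of the "basic research" step: comparing word
# frequencies in a small labelled corpus of risky conversations against a
# reference corpus, to find the vocabulary most characteristic of that domain.
# The toy corpora are invented; real data would need the agencies' cooperation.

import math
from collections import Counter


def characteristic_words(target_corpus, reference_corpus, top_n=5):
    """Rank words by smoothed log-odds of appearing in the target corpus."""
    target = Counter(w for text in target_corpus for w in text.lower().split())
    reference = Counter(w for text in reference_corpus for w in text.lower().split())
    t_total = sum(target.values())
    r_total = sum(reference.values())
    scores = {}
    for word in target:
        # Add-one smoothing so words absent from the reference corpus
        # don't produce a division by zero.
        p_t = (target[word] + 1) / (t_total + 1)
        p_r = (reference[word] + 1) / (r_total + 1)
        scores[word] = math.log(p_t / p_r)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


risky = ["keep this our secret", "are you home alone", "meet me after school"]
ordinary = ["did you see the match", "homework is due tomorrow", "see you at school"]
print(characteristic_words(risky, ordinary))
```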

DC said...

The exec page is informative - but there is no 'contact me' on it.

Anonymous said...

There's a TV show called Dateline that specializes in catching pedophiles in chatrooms. You can try contacting them. Otherwise, try AOL or Microsoft. The people who would be interested are sadly the ones who would probably profit financially rather than actually stopping the pedophiles. A software company would probably make you an offer on it rather than a social service agency.

Anonymous said...

Current software is woefully simplistic for auditing web content. As a recent example, during some research at my local library, I was denied access to a webpage about the 'Enola Gay' due to supposedly questionable content.

Dan said...

Well, apparently Facebook's to blame for some other bad things too: http://www.telegraph.co.uk/technology/facebook/7508945/Facebook-linked-to-rise-in-syphilis.html

憲次 said...

I'm at a secondary school, and a whole lot of my classmates are expressing the same kind of views on the subject. Quite sadly, they think that linguistics is pretty much a dead science that has no value in society.

I sent this blogpost to them to show them one of the useful applications of linguistics to everything that requires language, including computer programmes! I'm so glad I found this blog to prove to them that the subject really is relevant, and just as useful as the study of space dust and animals can be.

Just a question: is it possible for a machine to sense rude or even sarcastic remarks? I thought a computer would need the intelligence to understand the content to do so.

DC said...

Something rude? Possibly, as the vocabulary is usually a clear guide. Sarcasm, irony, and figurative language in general can't be handled automatically in a lexical way (which is what my system is based on); they require a more subtle discourse and contextual (in the nonlinguistic sense) analysis. As most texts aren't entirely sarcastic, ironic, metaphorical, etc., the occasional instance of such language doesn't disturb the overall semantic classification of a web page - or of a conversation, in the case of Chatsafe.
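To illustrate that classification point with a deliberately crude sketch (the word list and cut-off are invented, and my actual system is nothing like this simple), a whole-text judgement based on the proportion of flagged sentences is not thrown off by the odd figurative instance:

```python
# A sketch of the point above: classifying a whole conversation (or page)
# by the overall proportion of lexically flagged sentences, so that the odd
# sarcastic or figurative remark doesn't flip the classification.
# The word list and cut-off are invented for illustration.

RUDE_WORDS = {"idiot", "stupid", "moron"}


def classify(sentences, cutoff=0.3):
    """Label the text 'rude' only if enough sentences contain rude vocabulary."""
    flagged = sum(
        1 for s in sentences
        if RUDE_WORDS & set(s.lower().split())
    )
    return "rude" if flagged / len(sentences) >= cutoff else "acceptable"


# One jab in an otherwise civil exchange leaves the whole text 'acceptable'.
print(classify(["lovely weather today", "oh, very clever, you genius",
                "see you tomorrow", "don't be an idiot about it"]))
```

Note, too, that the sarcastic 'you genius' sails straight past a purely lexical filter - which is exactly the limitation described above.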

Steve UK said...

In taking a break from my study to read this always interesting, informative (sometimes even amusing!) blog, I have to say that this post has quite severely shocked me. That a corporation built upon social principles would hide away its executives and not allow itself to be 'socialised' is an interesting linguistic conundrum in and of itself.

However, perhaps the networking aspect of the site could be used to good effect here? I would be happy to start a 'Facebook group' in support of the testing of this technology, and to maintain it. This is probably the only viable and effective way of getting noticed by the Facebook executives, who I am sure are far from monsters and, like the rest of us, are horrified by recent events.

So, with your backing, do you think a group would help?

DC said...

Well, I'd hope so. Certainly Facebook-inspired viral campaigns have succeeded in the past. So thanks for the suggestion; I'd of course support any campaign that might help to make a difference.

Terry said...

Somebody was having a go at this 18 months ago at Lancaster University: see the Daily Telegraph report here.

DC said...

This was a very different kind of approach, using AI and algorithms. Like all AI research, it offers great promise for the future. But we want an approach which works now. One based on lexical semantics, such as the one I devised, is much less sophisticated than an AI solution will ultimately be, but it works. That's because it's based on hard linguistic graft, not algorithms.

fffree said...

Their HQ is at

Facebook, Inc.
1601 S. California Avenue
Palo Alto, CA 94304
United States of America

Just pop the name of the relevant executive in with it. Hope it helps (:

Carl Morris said...

DC, maybe try sending it to this blog which often covers social network services:
http://www.readwriteweb.com/about/

Anonymous said...

Could you please approach the Australian Government before they try to implement inept, expensive internet filters that you can currently learn to get around at night classes.