Community spam

September 6, 2006

http://theinternetispeople.com/2006/09/06/community-spam/

This delightful job offer on Getafreelancer will pay for 50 hours of your time to crack a particular CAPTCHA implementation. In other words, they want to break any site using that CAPTCHA so a script can automatically sign up and do who-knows-what.

This is an issue for any community site, whether in education (where spammers could potentially reach an entire student body if protection is inadequate) or out on the wider Internet, and it’s a constant battle. Unfortunately, a recent study noted that spammers touting penny stocks have a real effect on the markets, and tend to see a 5% to 6% return on their investment each time. Spamming, at least for now, works.

There are ways to fight it, using a combination of AI and the power of collective knowledge. Akismet is apparently successful against comment spam (which is just as well, as its front page alarmingly claims that 93% of all tracked comments are spam) – but how can networks collaborate to stop users from coming in and clogging message boards, blogs, etc. with spam? Akismet could be harnessed for this too, but it’s a paid service. Could there be some kind of peer-to-peer service we could query to check the validity of a user’s email address (matching either a distinct address or an entire server), or their likely propensity to deliver spam onto a community?

Perhaps, rather than a blacklist, this could be delivered as a warning value with the following rules:

  1. Everyone in participating systems gets a value somewhere between 1 (extremely unlikely to spam) and 100 (definitely a spammer). This could be calculated as an aggregated value from various trusted servers within the network.
  2. Only systems that validate email addresses may participate.
  3. Ranks are only calculated using sites where that particular address has actually registered.
  4. When a system polls the network to determine a user’s status, it gets two results back: the aggregated rank, and the number of sites contributing to that value.

Or similar.
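To make the idea concrete, here's a minimal sketch of how a participating system might aggregate and return those ranks. Everything here is hypothetical – the function names, the simple averaging, and the data layout are illustrative assumptions, not a spec:

```python
# Hypothetical sketch of the polling rules above. A real network would
# also need authentication, privacy safeguards, and abuse controls.

def query_network(email, site_reports):
    """Aggregate spam ranks for one address across participating sites.

    site_reports maps site name -> rank, where 1 means extremely
    unlikely to spam and 100 means definitely a spammer (rule 1).
    Per rule 3, it should contain only sites where this address
    has actually registered.
    """
    ranks = [r for r in site_reports.values() if 1 <= r <= 100]
    if not ranks:
        return None, 0  # no participating site knows this address
    # Simple mean as the aggregation; a trusted network could weight
    # servers differently.
    aggregated = round(sum(ranks) / len(ranks))
    # Rule 4: return both the aggregated rank and how many sites
    # contributed, so the caller can judge how much to trust it.
    return aggregated, len(ranks)

reports = {"forum.example": 90, "blog.example": 70, "wiki.example": 80}
rank, sites = query_network("bot@spam.example", reports)
# rank == 80, sites == 3
```

The second return value matters: a rank of 95 backed by one site is far weaker evidence than the same rank backed by fifty.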

There are definitely open questions – how to prevent abuse, how to maintain privacy, and how to contest unfair designations – but it seems to me we need a solution to this problem, quickly. CAPTCHAs and software tools that simply test for automated scripts aren’t going to hold back the tide forever. Behind every spambot is a person with a set of resources, and if we club together, we can as a community create a resource to fight them off.
