Google Panda gets two more black eyes

Probably no one wants the Panda extinct as much as webmaster and small publisher Patrick Jordan.

Jordan runs a blog called justanotheripadblog.com. While we're about to highlight an individual case, it's worth noting there are plenty more small publishers affected - a simple scan through the support forums will show you as much.

We saw Jordan's sorry tale outlined on SEOBook.com, an SEO blog aghast at Google's odd algorithm rejig.

Like many webmasters, Jordan noticed a severe dip in organic traffic. A Google search reviewer told him the drop could be explained by a quick, simple test: the reviewer typed a sentence from Jordan's website into Google and found it repeated all over the web.

Which is a strange reason to cite, since that is precisely the problem Jordan was trying to point out in the first place. He ran the same test himself and found the higher-ranking results were all 'scraper' sites.
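For anyone who wants to reproduce the check, here's a minimal Python sketch - our own hypothetical illustration, not anything Google or Jordan published - that builds the same kind of exact-phrase query and opens it in a browser. The sample sentence is a placeholder; swap in a line from your own site.

```python
import urllib.parse
import webbrowser

# A sentence lifted verbatim from the article you suspect has been scraped.
# (Placeholder - substitute a real sentence from your own site.)
sentence = "Paste an exact sentence from your own article here"

# Wrapping the sentence in double quotes asks Google for an exact-phrase
# match, so every result should be a page carrying that sentence verbatim.
query = urllib.parse.quote(f'"{sentence}"')
url = f"https://www.google.com/search?q={query}"

# Open the results page. If scrapers outrank the original, the top hits
# will not be the source site.
webbrowser.open(url)
print(url)
```

The double quotes are what matter: they restrict results to pages carrying the sentence word for word, so anything ranking above the source site is a likely scraper.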

Scraper sites automatically re-post original content without consent. Currently they are ranking higher than established publications in the UK, including Pocket-Lint, Reg Hardware and Computer Weekly. This fellow's blog fell into the same category and was automatically penalised, while the content farms the update was supposed to kill continued to rank higher. It's a quality thing, is it?

A serious problem with Panda is that Google, as a web search outfit and self-confessed organiser of information, cannot write code that reliably tells good "content" from bad. The idea is admirable, but in practice it reeks. Microsoft before it tried to curb what it didn't like, and fortunately those efforts fell flat on their keister. Bing as a possible saviour from Google's arrogant monopoly feels like Dada in the 21st century.

Likewise, by burning its credibility with start-ups, Google risks handing the upper hand to its disdainful rival Rupert Murdoch. Google hasn't figured out publishing, and with engineers at the helm telling people what they should and shouldn't read, it never will.

A comms spokesperson for Google could not answer us when we asked what the heck was going on. He said he was not able to talk about the technical side and repeatedly directed us to the vague Google Panda blog post we reported on earlier. What that post does not answer, and what the spokesperson could not, is how copyright-violating scraper sites rank above their sources.

The UK's Gareth Evans, Global Comms & Public Affairs in Google Search, Commerce & Social, reluctantly replied to TechEye, taking three days to fob us off: "Apologies for the delayed response. As you probably guessed, I'm not going to be able to give you much more info than you've already received.

"But on your question about what constitutes a 'low quality' site, we published a new blogpost on Friday to address that very issue in as much detail as we can.

"I hope this is of some use."

It wasn't.

To add insult to injury, SEO Book reports that Jordan, sick of being ignored, resorted to AdWords to bring some further revenue to his site - which Google denied. Meanwhile, scraper sites that rip off Jordan's content, as they do that of other small publishers, continue to profit from stolen articles full of Google's own adverts.

We have asked Mountain View several more questions and await a response.