The Orphaned Internet Domain Risk

I have clicked on the company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: Using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘gave’ their former domains to spammers? They did not, and they did not need to: Spammers simply watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: ‘This site has been registered by one of our clients.’
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one point a screen capture of the spammy site was taken.

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice. As soon as your domain or name is available, somebody might take it and re-use and exploit your former content, and possibly your former reputation, for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, its owners need to trust the authors of this code. For example, Equifax’s website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets or the like, better check regularly whether the originating domain is still run by the expected owner – monitor your vendors – and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.
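
If you want to automate at least part of that check, a small script can flag third-party hosts that stop resolving or responding. This is just a minimal sketch – the host names are placeholders for whatever widgets you actually embed, and a domain that merely changed hands still resolves, so a manual whois lookup remains necessary:

' Minimal sketch: flag third-party hosts (whose scripts/widgets you embed) that stop resolving or responding.
' Host names are placeholders. A domain that changed hands still resolves - check whois manually, too.
Imports System.Net

Module WidgetHostCheck
    Sub Main()
        Dim thirdPartyHosts() As String = {"platform.twitter.com", "widgets.some-vendor.example"}
        For Each host As String In thirdPartyHosts
            Try
                Dim entry As IPHostEntry = Dns.GetHostEntry(host)
                Dim request As HttpWebRequest = CType(WebRequest.Create("https://" & host & "/"), HttpWebRequest)
                request.Method = "HEAD"
                Using response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
                    Console.WriteLine("{0}: {1} - HTTP {2}", host, entry.AddressList(0), CInt(response.StatusCode))
                End Using
            Catch ex As Exception
                Console.WriteLine("{0}: check manually (whois!) - {1}", host, ex.Message)
            End Try
        Next
    End Sub
End Module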

Do a painfully boring inventory and assessment every now and then – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.

Update 2017-10-25: And as we speak, we learn about another example – the snatching of a domain used for Dell backup software preinstalled on PCs.

Other People Have Lives – I Have Domains

These are just some boring update notifications from the elkemental Webiverse.

The elkement blog has recently celebrated its fifth anniversary, and the punktwissen blog will turn five in December. Time to celebrate this – with new domain names that say exactly what these sites are – ‘elkement.blog’ and ‘punktwissen.blog’.

Actually, I wanted to get rid of the ads on both blogs, and with the upgrade came a free domain. WordPress has a detailed cookie policy – and I am showing it dutifully using the respective widget, but they have to defer to their partners when it comes to third-party cookies. I only want to worry about research cookies set by Twitter and Facebook, not those set by ad providers, and I am also considering removing the social media sharing buttons and the embedded tweets. (Yes, I am thinking about this!)

On the websites under my control I went full dinosaur: The server sends only non-interactive HTML pages to the client, not requiring any client-side activity. I have now got rid of the last half-hearted usage of a session object and the respective cookie, and I have never used any social media buttons or other tracking.

So there are no login data or cookies to protect, and yet I finally migrated all sites to HTTPS.

It is a matter of principle: I of all website owners should use HTTPS. For 15 years I have been planning and building Public Key Infrastructures and troubleshooting X.509 certificates.

But of course I fear Google’s verdict: They announced long ago that HTTPS is considered a positive ranking signal by their search engine. Pages not using HTTPS will be tagged as insecure with more and more terrifying icons – e.g. HTTP-only pages with login buttons already display a struck-through padlock in Firefox. In the past years I migrated a lot of PKIs from SHA1 to SHA256 to fight the first wave of Insecure icons.

Finally Let’s Encrypt has started a revolution: Free SSL certificates, based on domain validation only. My hosting provider uses a solution based on Let’s Encrypt – a reverse proxy that does the actual HTTPS. I only had to re-target all my DNS records to the reverse proxy – it would have been very easy had it not been for all my existing URL rewriting and tweaking and redirecting. I also wanted to keep the option of still using HTTP in the future for tests and special scenarios (like hosting a revocation list), so I decided to do the redirecting myself in the application(s) instead of using the offered automated redirect. But a code review and clean-up now and then can never hurt 🙂 For large, complex sites the migration to HTTPS is anything but easy.
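
For what it’s worth, an application-level redirect in ASP.NET can be as simple as the following sketch (e.g. in Global.asax) – with the caveat that behind a TLS-terminating reverse proxy the original scheme usually arrives in a header. The X-Forwarded-Proto name and the /pki/ exception are assumptions for illustration, not necessarily what my provider or my code actually use:

' Sketch of a do-it-yourself HTTP-to-HTTPS redirect in ASP.NET (e.g. in Global.asax).
Sub Application_BeginRequest(sender As Object, e As EventArgs)
    Dim request As System.Web.HttpRequest = System.Web.HttpContext.Current.Request
    ' Keep the option of plain HTTP for special paths, e.g. a certificate revocation list:
    If request.Url.AbsolutePath.StartsWith("/pki/", StringComparison.OrdinalIgnoreCase) Then Exit Sub
    ' Behind a TLS-terminating reverse proxy the original scheme is typically passed in a header:
    Dim forwardedProto As String = request.Headers("X-Forwarded-Proto")
    Dim isHttps As Boolean = request.IsSecureConnection OrElse String.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase)
    If Not isHttps Then
        System.Web.HttpContext.Current.Response.RedirectPermanent("https://" & request.Url.Host & request.Url.PathAndQuery)
    End If
End Sub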

In case I ever forget which domains and host names I use, I just need to check out this list of Subject Alternative Names again:

(And I have another certificate for the ‘test’ host names that I need for testing the sites themselves and also for testing various redirects ;-))

WordPress.com also uses Let’s Encrypt (Automattic is a sponsor), and the SAN elkement.blog is lumped together with several other blog names, presumably the ones that needed new certificates at about the same time.

It will be interesting to see what the consequences for phishing websites will be. Malicious websites will look trustworthy since they are issued certificates automatically, but revoking a certificate might provide another method for invalidating a malicious website.

Anyway, special thanks to the WordPress.com Happiness Engineers and support staff at my hosting provider Puaschitz IT. Despite all the nerdiness displayed on this blog I prefer hosted / ‘shared’ solutions when it comes to my own websites because I totally like it when somebody else has to patch the server and deal with attacks. I am an annoying client – with all kinds of special needs and questions – thanks for the great support! 🙂

Ice Storage Hierarchy of Needs

Data Kraken – the tentacled tangled pieces of software for data analysis – has a secret theoretical sibling, an older one: Before we built our heat source from a cellar, I developed numerical simulations of the future heat pump system. Today this simulation tool comprises e.g. a model of our control system, real-life weather data, energy balances of all storage tanks, and a solution to the heat equation for the ground surrounding the water/ice tank.

I can model the change of the tank temperature and ‘peak ice’ in a heating season. But the point of these simulations is rather to find out which parameters the system’s performance is particularly sensitive to: In a worst-case scenario, will the storage tank be large enough?

A seemingly fascinating aspect was how peak ice ‘reacts’ to input parameters: It is quite sensitive to the properties of the ground and the solar/air collector. If you make either the ground or the collector just ‘a bit worse’, ice seems to grow out of proportion. Taking a step back, I realized that I could have come to that conclusion using simple energy accounting instead of differential equations – once I had long-term data for the average energy harvesting power of collector and ground. Caveat: The simple calculation only works if these estimates are reliable for the chosen system – and this depends e.g. on hydraulic design, control logic, the shape of the tank, and the heat transfer properties of ground and collector.

For the operation of the combined tank + collector source, the critical months are the ice months Dec/Jan/Feb, when the air temperature does not allow harvesting all energy from air. Before and after that period, the solar/air collector is nearly the only source anyway. As I have emphasized on this blog again and again, even during the ice months the collector is still the main source and delivers most of the ambient energy the heat pump needs (if properly sized) in a typical winter. The rest has to come from energy stored in the ground surrounding the tank or from freezing water.

I am finally succumbing to trends of edutainment and storytelling in science communications – here is an infographic:

Ambient energy needed in Dec/Jan/Feb - approximate contributions of collector, ground, ice

(Add analogies to psychology here.)

Using some typical numbers, I am illustrating 4 scenarios in the figure below, for a system with these parameters:

  • A cuboid tank of about 23 m3
  • Required ambient energy for the three ice months is ~7000 kWh
    (about 9330 kWh of heating energy at a performance factor of 4)
  • ‘Standard’ scenario: The collector delivers 75% of the ambient energy, ground delivers about 18%.
  • ‘Worse’ scenarios: Either collector and/or ground energy is reduced by 25% compared to the standard.

Contributions of the three sources add up to the total ambient energy needed – this is yet another way of combining different energies in one balance.

Contributions to ambient energy in ice months - scenarios.

Ambient energy needed by the heat pump in Dec+Jan+Feb, as delivered by the three different sources. Latent ‘ice’ energy is also translated to the percentage of water in the tank that would be frozen.

Neither collector nor ground energy changes much in relation to the baseline. But latent energy has to fill the gap: As the total collector energy is much higher than the total latent energy content of the tank, an increase in the gap is large in relation to the baseline ice energy.

If collector and ground both ‘underdelivered’ by 25%, the tank in this scenario would be frozen completely instead of only 23%.
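
For the sake of transparency, here is the simple accounting behind those percentages as a few lines of code – the only number not given above is the latent heat of freezing water, roughly 93 kWh per m3:

' Back-of-the-envelope check of the scenarios above. Assumption: ~93 kWh of latent heat
' per m3 of water (334 kJ/kg); all other numbers are taken from the list of parameters.
Module IceEnergyBalance
    Sub Main()
        Dim totalAmbient As Double = 7000        ' kWh needed by the heat pump in Dec+Jan+Feb
        Dim latentCapacity As Double = 23 * 93   ' 23 m3 tank, ~93 kWh per m3 -> ~2100 kWh if fully frozen

        ' Standard scenario: collector delivers 75%, ground 18%, ice has to fill the gap.
        Dim iceStandard As Double = totalAmbient * (1 - 0.75 - 0.18)             ' ~490 kWh
        ' Worst case: collector and ground each deliver 25% less.
        Dim iceWorstCase As Double = totalAmbient * (1 - 0.75 * (0.75 + 0.18))   ' ~2100 kWh

        Console.WriteLine("Standard:   {0:F0} kWh of ice = {1:P0} of the tank frozen", iceStandard, iceStandard / latentCapacity)
        Console.WriteLine("Worst case: {0:F0} kWh of ice = {1:P0} of the tank frozen", iceWorstCase, iceWorstCase / latentCapacity)
    End Sub
End Module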

The ice energy is just the peak of the total ambient energy iceberg.

You could call this system an air-geothermal-ice heat pump then!

____________________________

Continued: Here are some details on simulations.

My Data Kraken – a Shapeshifter

I wonder if Data Kraken is only used by German speakers who translate our hackneyed Datenkrake – is it a word like eigenvector?

Anyway, I need this animal metaphor, even though this post is not about Facebook or Google. It’s about my personal Data Kraken – which is a true shapeshifter, like all octopuses are:

(… because they are spineless, but I don’t want to over-interpret the metaphor…)

Data Kraken’s shapeability is a blessing, given ongoing challenges:

When the Chief Engineer is fighting with other intimidating life-forms in our habitat, he focuses on survival first and foremost … and sometimes he forgets to inform the Chief Science Officer about fundamental changes to our landscape of sensors. Then Data Kraken has to be trained again to learn how to detect if the heat pump is on or off in a specific timeslot. Use the signal sent from control to the heat pump? Or to the brine pump? Or better use brine flow and temperature difference?

It might seem like a dull and tedious exercise to calculate ‘averages’ and other performance indicators that require only very simple arithmetic. But with the exception of room or ambient temperature, most of the ‘averages’ only make sense if some condition is met, like: The heating water inlet temperature should only be averaged when the heating circuit pump is on. But the temperature of the cold water, when the same floor loops are used for cooling in summer, should not be included in this average of ‘heating water temperature’. Above all, false sensor readings – like 0, NULL, or any value (like 999) a vendor chooses to indicate an error – have to be excluded. And sometimes I rediscover eternal truths, like the ratio of averages not being equal to the average of ratios.
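
A toy example of such a conditional average – the readings are made up, and 999 stands in for whatever error value a vendor picks:

' Toy example: average the heating water inlet temperature only while the circuit pump is on,
' and skip implausible values (sensor errors). Readings are made up; 999 is an assumed error code.
Module ConditionalAverage
    Sub Main()
        Dim temps() As Double = {32.1, 999.0, 21.5, 33.4}       ' inlet temperature in °C
        Dim pumpOn() As Boolean = {True, True, False, True}     ' heating circuit pump state

        Dim sum As Double = 0, count As Integer = 0
        For i As Integer = 0 To temps.Length - 1
            If pumpOn(i) AndAlso temps(i) > 0 AndAlso temps(i) < 100 Then
                sum += temps(i)
                count += 1
            End If
        Next
        ' Note: a seasonal performance factor is a ratio of sums (total heat / total electrical energy),
        ' not the average of daily ratios - the 'eternal truth' mentioned above.
        Console.WriteLine("Heating water inlet average: {0:F1} °C over {1} valid readings", sum / count, count)
    End Sub
End Module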

The Chief Engineer is tinkering with new sensors all the time: In parallel to using the old & robust analog sensor for measuring the water level in the tank…

Level sensor: The old way

… a multitude of level sensors was evaluated …

Level sensors: The precursors

… until finally Mr. Bubble won the casting …

Level sensor: Mr. Bubble’s measuring tube

… and the surface level is now measured via the pressure, which increases linearly with depth. For the Big Data Department this means adding some new fields to the Kraken database, calculating new averages … and smoothly transitioning from the volume of ice calculated from ruler readings to the new values.
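
The conversion itself is simple – hydrostatic pressure grows linearly with depth, so the level follows from the gauge pressure (a sketch, not the actual Kraken code):

' Sketch of the new level calculation: the gauge pressure at the submerged tube is rho * g * h.
Module TankLevel
    Function LevelFromPressure(gaugePressurePa As Double) As Double
        Const rho As Double = 1000.0   ' density of water in kg/m3
        Const g As Double = 9.81       ' gravitational acceleration in m/s2
        Return gaugePressurePa / (rho * g)   ' water level above the sensor, in metres
    End Function
End Module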

Change is the only constant in the universe, paraphrasing Heraclitus [*]. Sensors morph in purpose: The heating circuit formerly known (to the control unit) as the radiator circuit became a new wall heating circuit, and the radiator circuit was virtually reborn as a new circuit.

I am guilty of adding new tentacles all the time, too: herding a zoo of meters added in 2015, each of them adding a new log file containing data taken at different points in time, in different intervals. This year I let the Kraken put tentacles into the heat pump:

Data Kraken: Tentacles in the heat pump!

But the most challenging data source to integrate is the most unassuming source of logging data: The small list of data that the Chief Engineer had recorded manually until recently (until the advent of Miss Pi CAN Sniffer and Mr Bubble). Reason: He had refused to take data at exactly 00:00:00 every single day, so I learned things I never wanted to know about SQL to deal with the odd time intervals.

To be fair, the Chief Engineer has been dedicated to data recording! He never shunned true challenges, like a legendary white-out in our garden, at a time when measuring ground temperatures was not yet automated:

The challenge

White Out

Long-term readers of this blog know that ‘elkement’ stands for a combination of nerd and luddite, so I try to merge a dinosaur scripting approach with real-world global AI Data Krakens’ wildest dream: I wrote scripts that create scripts that create scripts [[[…]]] that were based on a small proto-Kraken – a nice-to-use documentation database containing the history of sensors and calculations.

The mutated Kraken is able to eat all kinds of log files, including clients’ ones, and above all, it can be cloned easily.

I’ve added all the images and anecdotes to justify why an unpretentious user interface like the following is my true Christmas present to myself – ‘easily clickable’ calculated performance data for days, months, years, and heating seasons.

Data Kraken: UI

… and diagrams that can be changed automatically, by selecting interesting parameters and time frames:

Excel for visualization of measurement data

The major overhaul of Data Kraken turned out to be prescient, as a seemingly innocuous firmware upgrade changed not only log file naming conventions and the publication schedule but also shuffled all the fields in the log files. My Data Kraken has to be capable of rebuilding the SQL database from scratch, based on a documentation of those ever-changing fields and the raw log files.

_________________________________

[*] It was hard to find the true original quote for that, as the internet is cluttered with change management coaches using that quote, and Heraclitus speaks to us only through secondary sources. But anyway, what this philosophy website says about Heraclitus applies very well to my Data Kraken:

The exact interpretation of these doctrines is controversial, as is the inference often drawn from this theory that in the world as Heraclitus conceives it contradictory propositions must be true.

In my world, I also need to deal with intriguing ambiguity!

My Flat-File Database

A brief update on my web programming project.

I have always preferred to create online text by editing simple text files, so I only need a text editor and an FTP client as management tools. My ‘old’ personal and business web pages are currently created dynamically in the following way:
[Code for including a script (including other scripts)]
[Content of the article in plain HTML = inner HTML of content div]
[Code for writing footer]

The main script(s) create layout containers, meta tags, navigation menus etc.

Meta information about pages or about the whole site is kept in CSV text files. There are e.g. files with tables…

  • … listing all pages of each site and their attributes – like title, keywords, hover texts for navigation links – or
  • … tabulating all main properties of all web sites – such as ‘tag lines’ or the name of the CSS file.

A bunch of CSV files / tables can be accessed like a database by defining the columns in a schema.ini file, and using a text driver (on my Windows web server). I am running SQL queries against these text files, and it would be simple to migrate my CSV files to a grown-up database. But I tacked on RSS feeds later; these XML files are hand-crafted and basically a parallel ‘database’.
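
For illustration, this is roughly how such a query looks in .NET via OleDb and the text driver – folder, file and column names are examples, and the real column definitions live in schema.ini next to the CSV files:

' Sketch: querying a CSV file like a database table via the Jet text driver (Windows).
' Folder, file and column names are examples only.
Imports System.Data.OleDb

Module CsvQuery
    Sub Main()
        ' schema.ini in the same folder declares the columns, e.g.:
        '   [pages.csv]
        '   Format=CSVDelimited
        '   ColNameHeader=True
        '   Col1=title Text
        '   Col2=url Text
        '   Col3=isMenu Bit
        Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\sites\meta\;Extended Properties=""text;HDR=Yes;FMT=Delimited"""
        Using connection As New OleDbConnection(connectionString)
            connection.Open()
            Dim command As New OleDbCommand("SELECT title, url FROM pages.csv WHERE isMenu = True", connection)
            Using reader As OleDbDataReader = command.ExecuteReader()
                While reader.Read()
                    Console.WriteLine("{0} -> {1}", reader("title"), reader("url"))
                End While
            End Using
        End Using
    End Sub
End Module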

This CSV file database is not yet what I mean by a flat-file database: In my new site the content of a typical ‘article file’ should be plain text, free from code. All meta information will be included in each file, instead of putting it into separate CSV files. A typical file would look like this:

title: Some really catchy title
headline: Some equally catchy, but a bit longer headline
date_created: 2015-09-15 11:42
date_changed: 2015-09-15 11:45
author: elkement
[more properties and meta tags]
content:
Text in plain HTML.

The logic for creating formatted pages with header, footer, menus etc. has to be contained in code separate from these files, and the text files need to be parsed for meta data and content. The set of files has effectively become ‘the database’, the plain text content being just one of many attributes of a page. Folder structure and file naming conventions are part of the ‘database logic’.
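
Parsing such a file is straightforward – a minimal sketch, with attribute names as in the example above and everything below the content: line treated as the body:

' Minimal sketch of parsing one article file: header lines of the form "name: value",
' everything after the "content:" line is the (HTML) body of the page.
Imports System.Collections.Generic
Imports System.IO

Module ArticleParser
    Function ParseArticle(path As String) As Dictionary(Of String, String)
        Dim attributes As New Dictionary(Of String, String)(StringComparer.OrdinalIgnoreCase)
        Dim lines() As String = File.ReadAllLines(path)
        For i As Integer = 0 To lines.Length - 1
            If lines(i).Trim().Equals("content:", StringComparison.OrdinalIgnoreCase) Then
                attributes("content") = String.Join(Environment.NewLine, lines, i + 1, lines.Length - i - 1)
                Exit For
            End If
            Dim separator As Integer = lines(i).IndexOf(":"c)
            If separator > 0 Then
                attributes(lines(i).Substring(0, separator).Trim()) = lines(i).Substring(separator + 1).Trim()
            End If
        Next
        Return attributes
    End Function
End Module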

I figured this was all an unprofessional hack until I found many so-called flat-file / database-less content management systems on the internet, intended to be used with smaller sites. They comprise some folders with text files, to be named according to a pre-defined schema, plus parsing code that extracts meta data from the files’ contents.

Motivated by that find, I created the following structure in VB.NET from scratch:

  • Retrieving a set of text files from the file system based on search criteria – e.g. for creating the menu from all pages, or for searching for the one specific file that should represent the current page – current as per the URL the user entered.
  • Code for parsing a text file for lines having a [name]: [value] structure.
  • Processing the nice URLs entered by the user to make the web server pick the correct text file.

Speaking of URLs, so-called ASP.NET routing came in handy: Before, I had used a few folders whose default page redirects to an existing page (such as /heatpump/ redirecting to /somefolder/heatpump.asp). Otherwise my URLs all corresponded to existing single files.

I use a typical blogging platform’s schema with the new site: If a user enters

/en/2015/09/15/some-cool-article/

the server accesses a text file whose name contains language and date, such as:

2015-09-15_en_some-cool-article.txt

… and displays the content at the nice URL.
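
As a sketch, the route registration and the mapping from route values to a file name look roughly like this – page and folder names are examples, not necessarily the ones I actually use:

' Sketch of the URL-to-file mapping with ASP.NET routing (Web Forms flavour).
Imports System.Web.Routing

Public Module RouteConfig
    Sub RegisterRoutes(routes As RouteCollection)
        ' /en/2015/09/15/some-cool-article/ is handled by one page:
        routes.MapPageRoute("Post", "{lang}/{year}/{month}/{day}/{slug}", "~/Post.aspx")
    End Sub

    ' Inside that page the route values are turned into the file name,
    ' e.g. 2015-09-15_en_some-cool-article.txt:
    Function PostFileName(lang As String, year As String, month As String, day As String, slug As String) As String
        Return String.Format("{0}-{1}-{2}_{3}_{4}.txt", year, month, day, lang, slug)
    End Function
End Module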

‘Language’ is part of the URL: If a user with a German browser explicitly accesses a URL starting with /en/, the language is effectively set to English. However, if the main page is hit, I detect the language from the header sent by the client.
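
The fallback is just a few lines based on the Accept-Language header – a sketch, with ‘de’ and ‘en’ being the only two languages here:

' Sketch: pick 'de' if the browser's Accept-Language header starts with German, 'en' otherwise.
Module LanguageDetection
    Function DetectLanguage(request As System.Web.HttpRequest) As String
        If request.UserLanguages IsNot Nothing AndAlso request.UserLanguages.Length > 0 Then
            If request.UserLanguages(0).StartsWith("de", StringComparison.OrdinalIgnoreCase) Then Return "de"
        End If
        Return "en"
    End Function
End Module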

I am not overly original: I use two categories of content – posts and pages – corresponding to text files organized in two different folders in the file system, and following different conventions for file names. Learning from my experience with hand-crafted menu pages in this blog here, I added:

  • A summary text included in the file, to be displayed in a list of posts per category.
  • A list of posts in a single category, displayed on the category / menu page.

The category is assigned to the post simply as part of the file name; moving a post to another category is done by renaming it.

Since I found that having to add my Google+ posts to just a single Collection was a nice exercise, I deliberately limit myself to one category per post.

Having built all the required search patterns and functions for creating lists of posts or menus or recent posts, or for extracting information from specific pages such as the current page or the corresponding page in the other language … I realized that I needed a clear-cut separation between the high-level query – for a bunch of attributes of any set of files meeting some criteria – and the lower level doing the search, file retrieval, and parsing.

So why not use genuine SQL commands at the top level – to be translated into file searches and file content parsing on the lower level?

I envisaged building the menu of all pages e.g. by executing something like

SELECT title, url, headline from pages WHERE isMenu=TRUE

and creating the list of recent posts on the home page by running

SELECT * FROM posts WHERE date_created > [some date]

This would also allow for a smooth migration to an actual relational database system, should the performance of the file-based database turn out not to be that great after all.

I underestimated the effort of ‘building your own database engine’, but finally the main logic is done. My file system recordset class has this functionality (and I think I finally got the hang of classes and objects):

  • Parse a SQL string to check if it is well-formed.
  • Split it into pieces and translate pieces to names of tables (from FROM) and list of fields (from SELECT and WHERE).
  • For each field, check (against my schema) if the field should be encoded in the file’s name or if it is part of the name / value attributes in the file contents.
  • Build a file search pattern string with * at the right places from the file name attributes.
  • Get the list of files meeting this part of the WHERE criteria.
  • Parse the contents of each file and exclude those not meeting the ‘content fields’ criteria specified in the WHERE clause.
  • Stuff all attributes specified in the SELECT statement into a table-like structure (a DataTable in .NET) and return a recordset object that can be queried and handled like recordsets returned by standard database queries – that is: check for End Of File, MoveNext, return the value of a specific cell in a column with a specific name.
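
To give an idea of the steps above – this is a much-condensed sketch, not the actual class: a regular expression splits the query, fields encoded in the file name become a search pattern, and the remaining WHERE fields would be checked against the parsed file contents.

' Condensed sketch of the idea behind the file system recordset - not the real implementation.
Imports System.Collections.Generic
Imports System.IO
Imports System.Text.RegularExpressions

Module FileRecordsetSketch
    Sub Main()
        Dim sql As String = "SELECT title, url FROM posts WHERE lang = 'en'"
        Dim m As Match = Regex.Match(sql, "^SELECT\s+(?<fields>.+?)\s+FROM\s+(?<table>\w+)(\s+WHERE\s+(?<where>.+))?$", RegexOptions.IgnoreCase)
        If Not m.Success Then Throw New ArgumentException("Not a well-formed query: " & sql)

        Dim table As String = m.Groups("table").Value                 ' 'posts' -> folder of post files
        Dim fields() As String = m.Groups("fields").Value.Split(","c)
        Console.WriteLine("Fields requested: " & String.Join(", ", fields))

        ' In this sketch 'lang' is encoded in the file name (yyyy-MM-dd_lang_slug.txt),
        ' so that part of the WHERE clause translates into a file search pattern:
        Dim pattern As String = "*_en_*.txt"
        For Each fileName As String In Directory.GetFiles("C:\sites\content\" & table, pattern)
            ' Parse each file (see the article parser sketch above), check the remaining WHERE
            ' criteria against its attributes, and stuff the SELECTed fields into a DataTable ...
            Console.WriteLine("Would read " & Path.GetFileName(fileName))
        Next
    End Sub
End Module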

Now I am (re-)creating all collections of pages and posts using my personal SQL engine. In parallel I am manually sifting through old content and turning my web pages into articles. To do: The tag cloud and handling of tags in general, and the generation of the RSS XML file from the database.

The new site is not publicly available yet. At the time of writing of this post, all my sites still use the old schema.

Disclaimers:

  • I don’t claim this is the best way to build a web site / blog. It’s also a fun project for the sake of having fun developing it, exploring the limits of flat-file databases, and forcing myself to deal with potential performance issues.
  • It is a deliberate choice: My hosting space allows for picking from different well-known relational databases, and I have done a lot of SQL Server programming in other projects in the past months.
  • I have a licence for Visual Studio. Using only a text editor instead is a deliberate choice, too.

Interrupting Regularly Scheduled Programming …

(… for programming.)

Playing with websites has been a hobby of mine for nearly two decades. What has intrigued me is the combination of different tasks, appealing to different moods – or modes:

  • Designing the user interface and organizing content.
  • Writing the actual content, and toggling between creative and research mode.
  • Developing the backend: database and application logic.

I have distributed different classes of content between my three personal sites, noticed how they drifted apart or became similar again, and migrated my content over and over when re-doing the underlying software.

[Screenshots: e-stangl, subversiv, radices.net]

Currently the sites run on outdated ASP scripts accessing CSV files as database tables via SQL. This was not a corporate software project – or perhaps it was too similar to one: I kept tacking on new features as I went, indulging in organically grown code. I hand-craft my XML feeds!

It is time to consolidate all this. I feel entitled, motivated, or perhaps even forced to migrate to a new ‘platform’, finally based on true object-oriented programming. Our other three sites run on the same legacy code, which I don’t want to support forever – I will migrate those sites as well in the long run.

So: I am developing a new .NET site from scratch, and I am going to merge my three personal sites into one.

However, I cannot bring myself to re-doing only the code and migrating the content unchanged and as automatically as possible. Every old article brings up memories and challenges me to comment on it and to reply to my former self. I have to deal with all three aspects listed above!

As for the layout, the challenge is to preserve the spirit and colors of all three sites – perhaps using something as silly as three different layouts that visitors (especially: myself) can pick from, changing the layout based on category, or based on something random.


This is just a first draft – building on the ‘subversive’ layout.

I will dedicate most of my ‘online time’ to this project, so I am taking a break from my usual blogging here and there – except for progress reports on this web migration project – and I will not be very active on social media.

Waging a Battle against Sinister Algorithms

I have felt a disturbance of the force.

As you might expect from a blog about anything, this one has a weird collection of unrelated top pages and posts. My WordPress Blog Stats tell me I am obviously an internet authority on how rodents get into kitchen appliances, on the physics of a spinning toy, on the history of the first heat pump, and most recently on how to sniff router traffic. But all those posts and topics are eclipsed by the meteoric rise of the single most popular article ever, which was a review of a book on a subfield of theoretical physics. I am not linking this post or quoting its title, for reasons you might understand in a minute.

Checking out Google Webmaster Tools, the effect is even more pronounced. Some months ago this textbook review attracted by far the most Google search impressions and clicks. Looking at the data from the perspective of a bot, it might appear as if my blog had been created just to promote that book. Which is what I believe might actually have happened.

Judging from historical versions of the book author’s website (on archive.org), the page impressions of my review started to surge when he put a backlink to my post on his page, sometime in spring this year.

But then in autumn this happened.

Page impressions for this blog on Google Webmaster Tools, Sept to Dec.

These are the impressions for searches from desktop computers (‘Web’), without image or mobile search. A page impression means that the link has been displayed to some user on a Google search results page. The curve does not change much if I remove the filter for Web.

For this period of three months, that article I Shall Not Quote is the top page in terms of impressions, right after the blog’s default page. I wondered about the reason for this steep decline as I usually don’t see any trend within three months on any of my sites.

If I decrease the time slot to the past month, that infamous post suddenly vanishes from the top posts:

Page impressions and top pages in the last month

It was eradicated quickly – which can only be recognized by decreasing the time slot step by step. Within a few days at the end of October / beginning of November the entry seems to have been erased from the list of impressions.

I sorted the list of results shown above by the name of the page, not by impressions. Since WordPress posts’ names are prefixed with dates, you would expect to see any of your posts in that list somewhere, some of them of course with very low scores. Actually, that list even includes obscure early posts from 2012 that nobody ever clicks on.

The former top post, however, did not get a single impression anymore in the past month. I have highlighted the posts before and after in the list, and I have removed all filters for this one, so image and mobile search are also taken into account. The post’s name starts with /2013/12/22/:

Last month, top pages, recent top post missing

Checking the status of indexed pages in total confirms that links have been recently removed:

Index status of this blog

For my other sites and blogs this number is basically constant – as long as a website does not get hacked. As our business site actually was, a month ago. Yes, I only mention this in passing, as I am less worried about that hack than about the mysterious penalizing of this blog.

I learned that your typical hack of a website is less spectacular than what hacker movies make you believe: If you are not a high-profile target, hacker-spammers leave your site intact but place additional spammy pages with cross-links on your site to promote their links. You recognize this immediately by a surge in the number of URLs and indexing activities, and – in case your hoster is as vigilant as mine – a peak in 404 Not Found errors after those spammy pages have been removed. This is the intermittent spike in spammy pages on our business site as crawled by Google:

Crawl stats after hack

I used all tools at my disposal to clean up the mess the hackers caused – those pages had actually been indexed already. It will take a while until things like ‘fake Gucci belts’ are removed from our top content keywords, after I removed the links from the index by editing robots.txt and using the Google URL removal tool and the URL parameters tool (the latter comes in handy as the spammy pages had been indexed with various query strings, that is: parameters).

I expected the worst, but Google has not penalized me for that intermittent link spam attack (yet?). Numbers are now back to normal after a peak in queries for that fake brand stuff:

Queries back to normal after clean-up.

It was an awful lot of work to clean up those URLs popping up again and again every day. I am willing to fight the sinister forces without too much whining. But Google’s harsh treatment of the post on this blog freaks me out. It is not only the blog post that was affected, but also the pages for the tags, categories and archive entries. Nearly all of these pages – thus all the pages linking to the post – did not get a single impression anymore.

Google Webmaster Tools also tells me that the number of so-called Structured Data for this blog had been reduced to nearly zero:

Structured data on this blog

Structured Data are useful for pages that show e.g. product reviews or recipes – anything that has a pre-defined structure and might be presented according to that structure in Google search results, via nicely formatted snippets. My home-grown websites do not use those, but the spammer-hackers had used such data in their link spam pages – so on our business site we saw a peak in structured data at the time of the hack.

Obviously WP blogs use those by design. Our German blog is based on the same WP theme – but the number of structured data items there has been constant. So if anybody out there is using the theme Twenty Eleven, I would be happy to learn about your encounters with structured data.

I have read a lot – what I never wanted to know about search engine optimization. This also included hackers’ Black SEO. I recommend the book Spam Nation by renowned investigative reporter and IT security insider Brian Krebs, published recently – whose page and book I will, again, not link.

What has happened? I can only speculate.

Spammers build networks of shady backlinks to promote their stuff. So it is of course common knowledge that you should not buy links or create such network scams. Ironically, I have cross-linked all my own sites like hell for many years – not for SEO purposes, but in my eternal quest for organizing my stuff: keeping things separate, but adding the right pointers, raking the virtual Zen garden, etc. Never ever did this backfire. I was always concerned about the effect of my links and resources pages (links to other pages, mainly tech and science). Today my site radices.net, once an early German predecessor of this blog, is my big link dump – but still these massive link collections are not voted down by Google.

Maybe Google considers my post and the physics book author’s website part of such a link scam. I have linked to the author’s page several times – to sample chapters, generously made available for download as PDFs – and the author linked back to me. I have refused to tie my blog to my Google+ account and claim ‘Google authorship’ so far, as I did not want to trade elkement for my real name on G+. Via Webmaster Tools Google knows about all my domains, but they might suspect that I – a pseudo-anonymous elkement, using an @subversiv.at address on G+ – also own the book author’s domain, which I – diabolically smart – did not declare in Webmaster Tools.

As I said before, from a most objective perspective Google’s rationale might not be that unreasonable. I don’t write book reviews that often; my most recent ones were about The Year Without Pants and The Glass Cage. I rather write posts triggered by one idea in a book, maybe not even the main one. When I write about books I don’t use Amazon affiliate marketing – as professional reviewers such as Brain Pickings or Farnam Street do. I write about unrelated topics. I might not match the expected pattern. This is amusing as long as only a blog is concerned, but in principle it is similar to being interviewed by the FBI at an airport because your travel pattern just can’t be normal (as detailed in the book Bursts, on modelling human behaviour – a book I also sort of reviewed last year).

In short, I sometimes review and ‘promote’ books without any return on that. I simply don’t review books I don’t like, as I think blogging should be fun. Maybe in an age of gamified reviews and fake forum posts with spammy signatures Google simply doesn’t buy into that. I sympathize. I learned that forum websites should add a nofollow tag to any hyperlinks users post so that Google will not downvote the link targets. So links in discussion groups are considered spammy per se, and you need to do something about it so that they don’t hurt what you – as a forum user – are probably trying to discuss or recommend in good faith. I already live in fear that those links some tinkerers set in DIYers’ forums (linking to our business site or my posts on our heating system) will be considered paid link spam.

However, I cannot explain why I can still find my book review post on Google (thus generating an impression) when searching for site:[URL of the post]. Perhaps consolidation takes time. Perhaps there is hope. I even see the post when I use Tor Browser and a foreign IP address, so this is not related to my preferences as a logged-on Google user. But unless there is a glitch in Webmaster Tools, no other typical searcher encounters this impression. I am aware of the tool for disavowing URLs, but I don’t want to report a perfectly valid backlink. In addition, that backlink from the author’s site does not even show up in the list of external backlinks, which is another enigma.

I know this seems to be an obsession with a first world problem: This was a post on a topic in which I don’t claim expertise and which I don’t consider strategically important. But whatever happens to this blog could happen to other sites I am more concerned about, business-wise. So I hope it is just a bug and/or Google’s bots will read this post and release my link. Just in case I mentioned your book or blog here, even if indirectly: please don’t backlink.

Perhaps Google did not like my ranting about encrypted search terms no longer available to the search term poet. I dared to display the Bing logo back then – which I will do again now, as:

  • Bing tells me that the infamous post generates impressions and clicks
  • Bing recognizes the backlink
  • The number of indexed pages is increasing gradually with time.
  • And Bing did not index the spammy pages in the brief period they were on our hacked website.

Bing logo (2013)

Update 2014-12-23 – it actually happened twice:

Analyzing the impressions from the last day, I realize that Google has also treated my physics resources page Physics Books on the Bedside Table the same way. Page impressions dropped, and now that page – which was the top one (after the review had plummeted) – is gone, too. I had already considered moving this page to my site that hosts all those lists of links (without issues, so far): radices.net, and I will complete this migration in a minute. Now of course Google might think I, the link spammer, am frantically moving on to another site.

Update 2014-12-24 – now at least results are consistent:

I cannot see my own review post anymore when I search for the title of the book. So finally the results from Webmaster Tools are in line with my tests.

Update 2015-01-23 – totally embarrassing final statement on this:

WordPress has migrated their hosted blogs to HTTPS only. All my traffic was hiding in the statistics for the HTTPS version, which has to be added in Google Webmaster Tools as a separate website.