
The stupidity of hackers

As you, dear reader, undoubtedly know, stupidity is everywhere in our world, and we all learn to live with that sad fact.  However, sometimes it pops up in places that are so unexpected that they somehow demand comment.  That's what provoked this essay.

When all goes well with a Website, the visiting public sees its intended public face, and does not think further about anything except whether its content meets the visitor's needs, and perhaps whether it looks good and is easy to use.  But a Webmaster must deal with a lot of different tools in order to present that intended public appearance, and other tools are available to study how effective that presentation is.

The combination of hardware and software that makes a Webmaster's work available as pages that are displayable on computers (and other smart devices) on the World Wide Web is called a Webserver, and one of the things that a Webserver does in the background is to maintain logs of all the pages (and other files) that are requested of it.  Those requests can come either from human visitors (who typically are using Web browsers such as Safari or Firefox) or from Web crawlers.  Some of those crawlers are legitimate, like the robots which collect the information that supplies search engines such as Google or DuckDuckGo.  But some of them probably belong to hackers who are looking for Websites that they can break into.  Their purpose might be to steal information, to cause damage, to alter the public appearance of the Website for their own amusement or benefit, or even to take it hostage in order to extract a ransom from its owners.

An access log records all requests received by the Webserver, and indicates the disposition of each request.  So a Webmaster can analyze the daily access logs to find which pages are visited most often, etc.  Sometimes it is possible to learn other things, too, but I digress.
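To make that concrete, here is a minimal sketch of such an analysis, assuming the Common Log Format that Webservers such as Apache and nginx write by default.  The sample log lines, addresses, and paths below are invented for illustration, not taken from this Website's actual logs:

```python
from collections import Counter
import re

# Matches the start of a Common Log Format line:
#   host ident user [timestamp] "METHOD /path HTTP/x" status size
LINE_RE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+) [^"]*" (\d{3})')

def tally(lines):
    """Count requests per path, and separately count the failures (status 4xx/5xx)."""
    hits, failures = Counter(), Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip malformed lines
        path, status = m.group(1), int(m.group(2))
        hits[path] += 1
        if status >= 400:
            failures[path] += 1
    return hits, failures

# Invented sample entries standing in for one day's access log:
sample = [
    '203.0.113.5 - - [23/Jan/2025:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 5120',
    '203.0.113.5 - - [23/Jan/2025:10:00:01 +0000] "GET /wp-login.php HTTP/1.1" 404 210',
    '198.51.100.7 - - [23/Jan/2025:10:00:02 +0000] "GET /index.html HTTP/1.1" 200 5120',
]
hits, failures = tally(sample)
print(hits.most_common(1))   # the most-requested page
print(failures)              # the requests that failed
```

A real analysis would read the day's log file rather than a built-in sample, but the tallying logic is the same.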

Included in the access logs is information about which requests could not be satisfied, and why.  But digging that information out of voluminous access logs is tedious, and Webservers typically ease that burden by creating separate error logs.  Here is where the stupidity referenced in the title of this essay becomes quite obvious.

While a few errors are "normal" (e.g., a visitor mis-typed a URL, or followed a broken link), the great majority of them are quite evidently from Web crawlers that are looking for something that never existed on this Website.  That's not bad in itself, but what's really stupid is that after failing to find one such thing, the same robot keeps asking for the same file, over and over again.  Or it asks for a whole series of things in spite of being told that the first item in the series does not exist.  If that robot is making the same attempts on many other Websites, it is wasting a whole lot of its owner's time and energy, which just demonstrates how stupid that owner is.  (It's also wasting time and energy for both my Webserver and me, which is just plain unkind.)

Another form of stupidity shows up in requests for pages that do exist but in a different directory from that requested.  Mangling a URL that way makes no sense!

Yet another form of stupidity consists of converting everything in a URL to lower case.  While that may work on some Webservers (I've heard that Windows machines are not case sensitive), it certainly does not work on mine, nor on any other Linux machine (and there are many of them).  I use mixed-case filenames for readability in maintenance.  So what's the point of changing what worked to something that won't?
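The effect of that lowercasing is easy to demonstrate.  The sketch below stands in for a case-sensitive filesystem with an ordinary Python set; the filenames are hypothetical, not actual pages on this Website:

```python
# On a case-sensitive filesystem (as on a Linux Webserver),
# a file exists only under its exact mixed-case name.
files_on_server = {"Essays/Stupidity.html", "Data/GreatBells.html"}

requested = "essays/stupidity.html"       # a crawler's lowercased request
found_lowercased = requested in files_on_server
found_original = "Essays/Stupidity.html" in files_on_server

print(found_lowercased)  # False -> the Webserver answers with Error 404
print(found_original)    # True  -> the page was there all along
```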

What's most insulting is when a request claims to result from following a link from somewhere on my own Website, when either that somewhere does not exist or it's a valid page that does not contain a link to the requested page.  That is just incomprehensibly stupid, and maybe malicious to boot!

The saddest form of stupidity that I see is when a request is due to someone following a link from another Website, where the person who created that link garbled it somehow, and obviously never checked to verify that their link was correct.  It's sad because it disappoints the visitor who was looking for something interesting, but failed to find it because of someone else's stupidity (or laziness) in publicizing a bad link.  I try hard not to do that in my own work!  And occasionally I am able to contact the owner of such a Website to get that link corrected.

Unfortunately, I have to wade through all of that stupidity in order to find the rare cases where an entry in an error log does reflect some error on my part.  I fix those promptly.

P.S.  If you have done enough Websurfing to encounter the dreaded "Error 404 - page not found" then you have participated in one of the events that can be recorded in an error log.  On many Websites, that happens because a page has been renamed or moved or simply deleted.  I regard that as sloppiness on the part of the Webmaster, and I try very hard not to let that happen on this Website.  When I must rename or delete a page (and that does happen), I add a note about that to a page that is specifically designed to handle Error 404 events.  Thus a visitor who is looking for the former page can learn what happened, and take appropriate action.
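For Webmasters who want to do the same, pointing 404 events at such a page is a one-line setting on an Apache Webserver (an assumption; other server software has equivalent mechanisms, and the filename Error404.html here is hypothetical):

```apache
# In the server configuration or an .htaccess file (Apache):
# serve this page whenever a requested file is not found
ErrorDocument 404 /Error404.html
```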

P.P.S.  While this essay is unlikely to reduce the stupidity of any hacker, writing it has been a bit of a catharsis.  And since you have read this far, I hope that you have found it enlightening.


This page was created on 2025/01/23, and last revised on 2025/04/19.

Please send comments or questions about this page to the Webmaster/owner.
Use similar links on other pages to send specific comments or questions about those pages.