The appeal of using plain HTML pages

April 24, 2019

Once upon a time our local support site was a wiki, for all of the reasons that people make support sites and other things into wikis. Then using a wiki blew up in our faces. You might reasonably expect that we replaced it with a more modern CMS, or perhaps a static site generator of some sort (using either HTML or Markdown for content and some suitable theme for uniform styling). After all, it's a number of interlinked pages that need a consistent style and consistent navigation, which is theoretically a natural fit for any of those.

In practice, we did none of those; instead, our current support site is that most basic thing, a bunch of static .html files sitting in a filesystem (and a static file of CSS, and some Javascript to drop in a header on all pages). When we need to, we edit the files with vi, and there's no deployment or rebuild process.

(If we don't want to edit the live version, we make a copy of the .html file to a scratch name and edit the copy, then move it back into place when done.)
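
As a concrete sketch of that scratch-copy workflow (the file name is made up), it's all ordinary shell commands:

    cp printing.html printing-draft.html    # work on a copy, not the live page
    vi printing-draft.html
    # check the draft over in a browser, then put it live:
    mv printing-draft.html printing.html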

This isn't a solution that works for everyone. But for us at our modest scale, it's been really very simple to work with. We all already know how to edit files and how to write basic HTML, so there's been nothing to learn or to remember about managing or updating the support site (well, you have to remember where its files are, but that's pretty straightforward). Static HTML files require none of the maintenance it takes to keep a wiki, a CMS, or a generator program going; they just sit there until you change them again. And everything can handle them.

I'm normally someone who's attracted to ideas like writing in a markup language instead of raw HTML and having some kind of templated, dynamic system (whether it's a wiki, a CMS, or a themed static site generator), as you can tell from Wandering Thoughts and DWiki itself. I still think that they make sense at large scale. But at small scale, if I were doing a handful of HTML pages today, it would be quite tempting to skip all of the complexity and just write plain .html files.

(I'd use a standard HTML layout and structure for all the .html files, with CSS to match.)

(This thought is sort of sparked by a question by Pete Zaitcev over on the Fediverse, and then reflecting on our experiences maintaining our support site since we converted it to HTML. In practice I'm probably more likely to update the site now than I was when it was a wiki.)


Comments on this page:

By David Magda at 2019-04-24 06:52:54:

At this point in time, if one is going to use a non-commercial wiki software package, why not just go with MediaWiki? It's not too complex, and there's a low chance of abandonment given that it runs Wikipedia.

As for static ("baked") sites, would one use plain HTML/text, or perhaps use a static generator to do a lot of the templating/formatting work in a consistent way?

We used to have a MediaWiki; then all the tables broke in an upgrade and it just… decayed from there. Nobody wanted to edit a wiki with broken tables, so as our environment changed the wiki became wrong, and so nobody relied on it. Eventually we ripped it out, and unfortunately all that knowledge is inside our heads these days.

Prior to this, I was at a shop with a MediaWiki that had been installed before the software was UTF-8 aware, on a database with a default charset of utf8. Since the software never asked for UTF-8, the columns were assumed to already be latin1 (which in MySQL is actually Windows-1252). It kept breaking index length limits and needing manual fudging, and the attempt to upgrade to a UTF-8 aware version didn't go very well either.

Now, maybe things have improved in the last 10-15 years compared to when I had those experiences, but I'm pretty sure that if there's no staff with a time-slice dedicated to wiki or CMS maintenance (hello yes, we also have an extant Drupal 7 system), it's just going to rot.

From 216.154.28.151 at 2019-04-24 16:37:24:

Now, maybe things have improved in the last 10-15 years [...]

Yes, they have.

Another nice thing about static sites is that they don't give you a blank page if you don't return cookies, as that Fediverse site to which you linked does. </snark>

One thing you're missing in your process is disaster recovery: what do you do when the server's disk dies, or someone just accidentally runs rm on the wrong file?

I've found the easiest solution to this to be Git: everything I do that involves more than a few minutes of effort goes into a repo (git init && git add . && git commit -m "blah blah blah"). If it's just a local experiment, that repo may never even have a second copy and be used only for local undo. If it's something I want to protect against the loss of a host, I just add a remote and push to it regularly. (It doesn't have to be a special server such as Gitolite or GitHub; you can just make a directory in a Dropbox folder, git init --bare it, and push to that.) If you've got a shared central repo and several different users editing from their own computers, your DR problem is pretty much completely solved, since no matter what happens, someone is likely to still have a clone of the repo somewhere.
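
As a minimal sketch of that setup, including the Dropbox variant (the ~/Dropbox/support-site.git path is made up):

    # in the directory holding the site's .html files:
    git init
    git add .
    git commit -m "blah blah blah"

    # one-time setup of an off-host copy: a bare repo in a Dropbox folder
    git init --bare ~/Dropbox/support-site.git
    git remote add backup ~/Dropbox/support-site.git
    git push -u backup master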

This now gives you preview and workflow, too, if you want it, by using Git in the usual way. I'm constantly being told that this can't possibly work for "normal users", but I've successfully taught, without much effort, people who can barely use Excel or Word to pull, change things, commit, and push with SVN and Git.
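
The everyday cycle I teach is correspondingly small (the file name and commit message here are invented for illustration):

    git pull                      # pick up everyone else's changes first
    vi printing.html              # edit as usual
    git add printing.html
    git commit -m "update printer instructions"
    git push                      # publish back to the shared repo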

Personally, these days I would almost invariably use a site builder of some sort, though if I were worried about maintenance I would take to heart your comments on aged software. The last thing I want when having to rebuild the site on a new server in ten years is to be faced with NPM telling me that dependencies don't exist, this old code doesn't work on this new version of Node, or whatever. What would be really cool is to have a fairly self-contained program in a common language such as Python that maintains good backward-compatibility, and then just drop that into the repo. As well as rendering some sort of markup into HTML, it might even be able to automatically generate links, lists of pages, and so on. Surely someone must have written something like this already.

(The build procedure for this can be pretty simple: run the build script, which is also in the repo, and it puts its output in a docroot directory, also in the repo, and you just commit the new source and the built version at the same time. The deploy procedure is then just git pull on the server.)
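
As a sketch of that procedure (build.sh, the docroot directory, and the server path are all hypothetical names):

    # on your own machine:
    ./build.sh                    # regenerates docroot/ from the page sources
    git add -A                    # commit the new source and the built version together
    git commit -m "rebuild site"
    git push

    # on the server, deployment is just:
    cd /var/www/support && git pull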

Right now I myself am dealing with the move of a site off a very old MediaWiki server that is going away, and anybody who tells you this is simple is spouting nonsense. Sure, there might be a few situations where it goes easily, but there are just too many problems that can crop up. "Since the guy with access to the DB isn't available, just use the API to grab the page source," I'm told. Except that the API was disabled eight years ago due to security holes in it. Etc. etc.

Yes, they have.

All snark and no value. How has the upgrade process gone for you, for how many years?

I've had similar problems using wiki engines. Many of them require lots of setup and maintenance, making them wholly unsuited to projects that don't have multiple spare people on standby to fix things. MediaWiki especially.

I too set up static HTML pages and sites for people. Often I explain this as a quick and reliable stop-gap measure, but because nothing breaks, people often stay happy and use them for years. I had one person learn to edit his HTML page, and that kept him very happy.

It's sad that there's a culture of "everything has to be a webapp" these days. At my uni I spend a lot of time trying to help and teach students who are trying to write websites with various frameworks but don't understand HTML or CSS. It's always interesting learning their view of how things work, such as templates being things that are sent to the user's browser, and static files being things that are harder to serve than dynamic pages. Looking at some of the popular frameworks (such as Python's Flask), I can actually understand how they come to these conclusions.

Shameless self-post: yesterday I released Minisleep, a wiki engine designed around avoiding a lot of the problems I've seen in other wikis. Pages are kept as static HTML and there's even an in-built HTML WYSIWYG editor if you want to use it. If the engine breaks then all of the HTML pages stay online. One CGI script handles editing the pages, no dependencies other than BASH and a CGI-supporting webserver.

http://halestrom.net/darksleep/software/minisleep/

Should you use it instead of directly writing .html pages? Hell no. Hand-writing static HTML beats the socks off Minisleep's attempts at being portable and sane. Only use it if you want a wiki and are looking for some good alternatives to the big players.
