hugo

if you want to get into blogging, want to retain control of your content, and don’t want to write something from scratch, a static site generator is a pretty good way to get this done.

there have been many static site generators in the history of the web, one of the earliest being macromedia's dreamweaver, but the current best in class is jekyll. the model established by jekyll is a collection of markdown files, a series of templates, and a bit of code to plumb it all together. jekyll is implemented in ruby, and as such requires ruby and its package management tooling to be installed for it to work. as you may have intuited from the fact that this post is not titled “jekyll”, I do not use it.

hugo

I use hugo to generate this site. the basic idea is the same as jekyll. I write a post like this as a markdown file using visual studio code, but any text editor will do, with or without explicit support for markdown. then I run the hugo binary to generate the site. the binary is run from the root of a folder that contains all the configuration and resources for the site. once the build completes, there’s a folder called “public” that contains the generated html, css, images, and other files that make up the site. then I push this folder up to an S3 bucket, from which the site is served.
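
the day-to-day loop looks roughly like this. the post name is just an example, and this assumes a standard hugo site layout:

```sh
# from the root of the site folder (contains config.toml, content/, themes/, ...)
hugo new posts/hugo.md    # scaffold a new markdown post
# ...edit content/posts/hugo.md in your editor of choice...
hugo                      # build the whole site into ./public
```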

don’t be alarmed by the S3/AWS step. it’s very cheap to host these sorts of lightweight files in S3, and you don’t need an EC2 instance: AWS (and GCP as well) offers static file hosting as a service, so with some configuration of S3 you get seriously low-cost file hosting. there’s also a CDN called cloudfront, with a generous free tier, that will greatly improve the performance of your site.
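
a minimal sketch of that configuration, with a hypothetical bucket name (in practice you’ll also need to sort out public-read permissions, and optionally put cloudfront in front of the bucket):

```sh
# one-time: enable static website hosting on the bucket
aws s3 website s3://example-bucket --index-document index.html
# each deploy: push the generated files, removing anything stale
aws s3 sync public/ s3://example-bucket --delete
```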

AWS is not a value-neutral infrastructure provider, and we should not trust it with something as valuable as our access to speech, political or otherwise. so while I am happy to take advantage of its cheap hosting to get my site out there and accessible to many, I have a plan B.

IPFS and web3

I have previously described a way to publish a single-page site generated from a markdown file. indeed, samizdat is a very narrowly scoped, mostly do-it-yourself static site generator, meant to be hosted from IPFS.

the key difference in the generated html is relative vs absolute paths. for example, if I wanted to link to my 100 books page from this post, I could produce a link like:

https://plantimals.org/100

but such a link presupposes http and domain names. the same link could also be structured:

/100

a nice feature of absolute links is that users can copy and paste them anywhere and they will work without further interpretation. relative links, by contrast, are at best useless outside the context of the page, and at worst are converted into absolute links pinned to the current version of the site, with its specific IPFS hash:

ipfs://bafybeidakw63l7rfao3lud54r5swl55upmq7dc4b6nat2biyvm6p7vhtfy/100/

the above link points to a specific unixfs IPLD object in IPFS, with the path “/100” appended to the end. initially this works well. the problem is that if you ever make changes, people who linked to this exact hash will never see them.
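
that hash comes from adding the built site to IPFS. a minimal sketch, assuming a local ipfs node is running:

```sh
# add the generated "public" folder recursively; every file and
# directory gets a content-derived hash
ipfs add -r public/
# the final line of output, "added <root-cid> public", is the root
# hash that ends up baked into links like the one above
```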

this is the blessing and curse of IPFS. you always get exactly what you ask for. those familiar with the twilight zone will recognize the potential for great hubris. fortunately, there is another option.

ipns://plantimals.org/100

this does the trick. IPNS lets the publisher share the unchanging address of a pointer, and update what that pointer points at as the site changes. if a consumer wants to know exactly which hash they are consuming, it is easy to obtain, but they are not required to care. there are still some rough edges here, but it largely solves the problem: we get immutability of the underlying data, along with intelligible updates.
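
updating the pointer is one command. a sketch with placeholder values (mapping a domain like plantimals.org onto IPNS is handled separately, with a DNSLink TXT record on the domain):

```sh
# repoint this node's IPNS name at the newly added root CID
ipfs name publish /ipfs/<root-cid>
# consumers (and you) can resolve the stable name to the current CID
ipfs name resolve /ipns/<peer-id>
```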

in order to make this work for plantimals.org, I maintain two separate config.toml files: one with https://plantimals.org as the base url, and one set to use relative paths. the absolute-path build of the site gets pushed to S3, and the relative-path build gets added to IPFS, with the IPNS pointer updated automatically.
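
a sketch of that pipeline, with hypothetical config, folder, and bucket names (the actual automation here may differ):

```sh
# build twice: config.toml sets baseURL = "https://plantimals.org/",
# config-ipfs.toml sets relativeURLs = true
hugo --config config.toml --destination public
hugo --config config-ipfs.toml --destination public-ipfs

# absolute-link build goes to S3
aws s3 sync public/ s3://example-bucket --delete

# relative-link build goes to IPFS; -Q prints only the root CID
CID=$(ipfs add -rQ public-ipfs)
ipfs name publish "/ipfs/$CID"
```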

if you are curious about this process or unclear about what I’m doing, please contact me on twitter and ask.

*the ipfs:// and ipns:// protocols will only work if you have IPFS enabled in your browser, either via a plugin or native support (in Brave, for example).

