When class-based React beats Hooks

As much as I love exploring and using weird tech for personal projects, I’m actually very conservative when it comes to using new tech in production. Yet I was an immediate, strong proponent of React Hooks the second they came out. Before Hooks, React really had two fundamentally different ways to write components: class-based, with arbitrary amounts of state; or pure components, done as simple functions, with zero state. That could be fine, but the absolutely rigid split between the two was a problem: even an almost entirely pure component that had merely one little tiny bit of persistent state—you know, rare stuff like a checkbox—meant you had to use the heavyweight class-based component paradigm. So in most projects, after a while, pretty much everyone just defaulted to class-based components. Why go the lightweight route if you know you’ll have to rewrite it in the end, anyway?

Hooks promised a way out that was deeply enticing: functional components could now be the default, and state could be cleanly added to them as needed, without rewriting them in a class-based style. From a purist perspective, this was awesome, because JavaScript profoundly does not really want to have classes; and from a maintenance perspective, this meant we could shift functional components—which are much easier to test and debug than components with complex state, and honestly quite common—back to the forefront, without having the threat of a full rewrite dangling over our heads.

I was able to convince my coworkers at Bakpax to adopt Hooks very quickly, and we used them successfully in the new, much richer content model that we launched a month ago. But from the get-go, one hook made me nervous: useReducer. It somehow felt incredibly heavyweight, like Redux was trying to creep into the app. It seemed to me like a tacit admission that Hooks couldn’t handle everything.

The thing is, useReducer is actually awesome: the reducer can easily be stored outside the component and even dependency-injected, giving you a great way to centralize all state transforms in a testable way, while the component itself stays pure. Complex state for complex components became simple, and actually fit into Hooks just fine. After some experimentation, small state in display components could be a useState or two, while complex state in state-only components could be useReducer, and everyone went home happy. I’d been entirely wrong to be afraid of it.
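That testability is the whole draw: a reducer is just a plain function that lives outside any component, so it can be unit-tested (or dependency-injected) without rendering anything. A minimal sketch of the idea; the state shape and action names here are invented for illustration, not from any real Bakpax code:

```javascript
// A reducer is a pure function: (state, action) -> new state.
// Because it lives outside the component, you can test every state
// transform in isolation, with no React involved at all.
function cartReducer(state, action) {
  switch (action.type) {
    case 'add_item':
      return { ...state, items: [...state.items, action.item] };
    case 'clear':
      return { ...state, items: [] };
    default:
      return state;
  }
}

// Inside a component it would be wired up roughly as:
//   const [state, dispatch] = React.useReducer(cartReducer, { items: [] });
//   dispatch({ type: 'add_item', item: 'pencil' });
```

The component itself stays pure: it just renders `state` and calls `dispatch`, while every interesting transform lives in one grep-able, testable place.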

No, it was useEffect that should’ve frightened me.

A goto for React

If you walk into React Hooks with the expectation that Hooks must fully replace all use cases of class-based components, then you hit a problem. React’s class-based components can respond to life-cycle events—such as being mounted, being unmounted, and getting new props—that are necessary to implement certain behaviors, such as altering global values (e.g., history.pushState, or window.scrollTo), in a reasonable way. React Hooks, out-of-the-box, would seem to forbid that, specifically because they try to get very close to making state-based components look like pure components, where any effects would be entirely local.

For that reason, Hooks also provides an odd-one-out hook, called useEffect. useEffect gets around Hooks’ limitations by basically giving you a way to execute arbitrary code in your functional component whenever you want: every render, every so many milliseconds, on mounts, on prop updates, whatever. Congratulations: you’re back to full class-based power.

The problem is that just seeing a useEffect1 in a component gives you no idea what it’s trying to do. Is the effect going to be local, or global? Is it responding to a life-cycle event, such as a component mount or unmount, or is it “merely” escaping Hooks for a brief second to run a network request or the like? This information was a lot easier to quickly reason about in class-based components, even if only by inference: seeing componentWillReceiveProps and componentWillMount get overrides, but componentWillUnmount left alone, gives me a really good idea that the component is just memoizing something, rather than mutating global state.

That’s a lot trickier to quickly infer with useEffect: you really need to check everything listed in its dependency list, see what those values are doing, and track it up recursively, to come up with your own answer of what life-cycle events useEffect is actually handling. And this can be error-prone not only on the read, but also on the write: since you, not React, supply the dependency list, it’s extremely easy to omit a variable that you actually want to depend on, or to list one you don’t care about. As a result, you get an effect that either doesn’t fire when it should, or fires way too often. And figuring out why can sometimes be an exercise in frustration: sure, you can put in a breakpoint, but even then, just trying to grok which dependency has actually changed from React’s perspective can be enormously error-prone in a language where both value identity and pointer identity apply in different contexts.
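Concretely, React compares each entry of the dependency array to its previous value with Object.is: value identity for primitives, pointer identity for objects. A rough sketch of that comparison (the helper name here is mine, not React’s internal API) shows why an inline object literal “changes” on every single render:

```javascript
// Roughly the check React performs to decide whether an effect's
// dependencies changed since the last render: each entry is compared
// to its previous value with Object.is.
function depsChanged(prevDeps, nextDeps) {
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

// Primitives compare by value, so this effect would not re-fire:
depsChanged([42, 'a'], [42, 'a']);        // false

// But objects compare by identity, so something like
// useEffect(fn, [{ id: 42 }]) sees a "new" dependency every render,
// and the effect fires every single time:
depsChanged([{ id: 42 }], [{ id: 42 }]);  // true
```

Which of those two cases you’re in depends entirely on where each dependency’s value came from, and that’s exactly the recursive trace you end up doing in the debugger.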

I suspect that the React team intended useEffect to only serve as the foundation for higher-level Hooks, with things like useMemo or useCallback serving as examples. And those higher-level Hooks will, I think, be fine, once there’s a standard collection of them, because I’ll know that I can just grep for, I dunno, useHistory to figure out why the pushState has gone wonky. But as things stand today, the anemic collection of useEffect-based hooks in React proper means that reaching for useEffect directly is all too common in real-world React projects I’ve seen—and when useEffect is used in the raw, in a component, in place of explicit life-cycle events? At the end of the day, it just doesn’t feel worth it.

The compromise (for now)

What we’ve ended up doing at Bakpax is pretty straightforward: Hooks are great. Use them when it makes sense. Even complex state can stay in Hooks via useReducer. But the second we genuinely need to start dealing with life-cycle events, we go back to a class-based component. That means, in general, anything that talks to the network, has timers, plays with React Portals, or alters global variables ends up being class-based, and in certain places it has even brought animation effects and the like back to the class-based model. We do still have plenty of hooks in new code, but this compromise has resulted in quite a few components either staying class-based, or even migrating to a class-based design, and I feel as if it’s improved readability.

I’m a bit torn on what I really want to see going forward. In theory, simply shipping a lot more example hooks based on useEffect, whether as an official third-party library list or as an official package from the React team, would probably allow us to avoid more of our class-based components. But I also wonder if the problem is really that Hooks simply should not be the only abstraction in React for state. It’s entirely possible that class-based components, with their explicit life-cycle, simply work better than useEffect for certain classes of problems, and that Hooks trying to cover both cases is a misstep.

At any rate, for the moment, class-based components are going to continue to have a place when I write React, and Bakpax allowing both to live side-by-side in our codebase seems like the best path forward for now.

  1. And its sibling, useLayoutEffect. ↩︎

Falsehoods Programmers Believe About Cats

Inspired by Falsehoods Programmers Believe About Dogs, I thought it would be great to offer you falsehoods programmers believe about mankind’s other best friend. But since I don’t know what that is, here’s instead a version about cats.

  1. Cats would never eat your face.
  2. Cats would never eat your face while you were alive.1
  3. Okay, cats would sometimes eat your face while you’re alive, but my cat absolutely would not.
  4. Okay, fine. At least I will never run out of cat food.
  5. You’re kidding me.
  6. There will be a time when your cat knows enough not to vomit on your computer.
  7. There will be a time when your cat cares enough not to vomit on your computer.
  8. At the very least, if your cat begins to vomit on your computer and you try to move it to another location, your cat will allow you to do so.
  9. When your cat refuses to move, it will at least not manage to claw your arm surprisingly severely while actively vomiting.
  10. Okay, but at least they won’t attempt to chew the power cord while vomiting and clawing your hand, resulting in both of you getting an electric shock.
  11. …how the hell are you even alive?2
  12. Cats enjoy belly rubs.
  13. Some cats enjoy belly rubs.
  14. Cats reliably enjoy being petted.
  15. Cats will reliably tell you when they no longer enjoy being petted.
  16. Cats who trust their owners will leave suddenly when they’re done being petted, but at least never cause you massive blood loss.
  17. Given all of the above, you should never adopt cats.
  18. You are insane.

Happy ten years in your forever home, my two scruffy kitties. Here’s to ten more.

  1. Here, ask Dewey, he knows more about it than I do. ↩︎

  2. Because, while my cat has absolutely eaten through a power cord, this is an exaggeration. The getting scratched while trying to get my cat not to puke on a computer I was actively using happened at a different time from the power cord incident. Although this doesn’t answer the question how she is alive. ↩︎

The Death of Edge

Edge is dead. Yes, its shell will continue, but its rendering engine is dead, which throws Edge into the also-ran pile of WebKit/Blink wrappers. And no, I’m not thrilled. Ignoring anything else, I think EdgeHTML was a solid rendering engine, and I wish it had survived because I do believe diversity is good for the web. But I’m not nearly as upset as lots of other pundits I’m seeing, and I was trying to figure out why.

I think it’s because the other pundits are lamenting the death of some sort of utopia that never existed, whereas I’m looking at the diversity that actually exists in practice.

The people upset about Edge’s death, in general, are upset because they have this idea that the web is (at least in theory) a utopia, where anyone could write a web browser that conformed to the specs and (again, theoretically) dethrone the dominant engine. They know this hasn’t existed de facto for at least some time–the specs that now exist for the web are so complicated that only Mozilla, with literally hundreds of millions of dollars of donations, can meaningfully compete with Google–but it’s at least theoretically possible. The death of Edge means one less browser engine to push back against Chrome, and one more nail in the coffin of that not-ever-quite-here utopia.

Thing is, that’s the wrong dynamic.

The dynamic isn’t Gecko v. EdgeHTML v. Blink v. WebKit. It’s any engine v. native. That’s it. The rendering engine wars are largely over: while I hope that Gecko survives, and I do use Firefox as my daily driver, that’s largely irrelevant; Gecko has lost by at least as much as Mac OS Classic ever lost. What does matter is that most people access the web via mobile apps now. It’s not about whether you like that, or whether I like that, or whether it’s the ideal situation; that’s irrelevant. The simple fact is, most people use the web through apps, period. In that world, Gecko v. Blink v. WebKit is an implementation detail; what matters is the quality of mobile app you ship.

And in that world, the battle’s not over. Google agrees. You know how I know? Because they’re throwing a tremendous amount of effort at Flutter, which is basically a proprietary version of Electron that doesn’t even do desktop apps.1 That only makes sense if you’re looking past the rendering engine wars–and if you already control effectively all rendering engines, then that fight only matters if you think the rendering engine wars are already passé.

So EdgeHTML’s death is sad, but the counterbalance isn’t Gecko; it’s Cocoa Touch. And on that front, there’s still plenty of diversity. Here’s to the fight.

  1. Yeah, I know there’s an effort to make Flutter work on desktops. I also know that effort isn’t driven by Google, though. ↩︎

Messages, Google Chat, and Signal

Google is about to try, yet again, to compete with iMessages, this time by supporting RCS (the successor to SMS/MMS) in their native texting app. As in their previous attempts, their solution isn’t end-to-end encrypted—because honestly, with their business model, how could it be? And as with Google’s previous attempts to unseat a proprietary Apple technology, I’m sure they’ll tout openness: they’ll say that this is a carrier standard while iMessages isn’t, and attempt to use that to put pressure on Apple to support it—never mind the inferior security and privacy that make the open standard a woefully…erm, substandard choice.

So here’s my suggestion to Apple: you’ve got a good story going on right now that you have the more secure, more privacy-conscious platform. If you want to shut down Google’s iMessages competitors once and for all, while simultaneously advancing your privacy story for your own customers, why not have iMessages use Signal when the recipient doesn’t have an iOS device? Existing Apple users would be unaffected, and could still leverage the full suite of iMessages features they’re used to. Meanwhile, Android customers on WhatsApp or Signal would suddenly have secure communication with their iOS brethren, not only helping protect Android users, but also helping protect your own iOS users. And you’d be doing all of this while simultaneously robbing Google of the kind of deep data harvesting that they find so valuable.

I doubt Apple will actually do this in iOS 12, but it’d be amazingly wonderful to see: a simultaneous business win for them, and a privacy win for both iOS and Android users. I’ll keep my fingers crossed.

Moving and backing up Google Moving Images

For reasons that I’ll save for another blog post, I decided recently to ditch pretty much the entire Apple ecosystem I’d been using for the last decade. That’s meant gradually transitioning from macOS to Ubuntu, and from iOS to Android. Of course, to ditch iOS for Android required a new phone; after some research, I opted for a Google Pixel 2.

The Pixel 2’s been a great phone and has lots of interesting features, but one of the more esoteric features is called Moving Images. These are Google’s take on Apple’s Live Photos: when you take a photo, a very small amount of video is also recorded, yielding a kind of Harry Potter-like effect. In general, I don’t honestly care all that much about the video bits of these, but every once in a while, you capture a really unique moment by happenstance where a Live Photo or Moving Image is really special, and on those occasions, I’m incredibly thankful someone at Apple came up with this idea.

In general, I use Google Photos to manage my photo collection, in part because it hits a sweet spot on my convenience/safety metric: the web application and mobile clients are incredibly easy-to-use for day-to-day work, and keeping a local copy of all your photos is as trivial as clicking a checkbox in Google Drive and then downloading them with the Google Backup & Sync tool (or InSync or rclone on Linux). The ease of getting a local mirror of my Google Photos data is great not just for offline access, but also for both offsite backup (in case I ever lose access to my Google account) and trivial rich editing with The GIMP, Lightroom, darktable, Acorn, or any of the other heavier-duty photo editors when I want to. It’s genuinely been one of the better cloud/local hybrids I’ve used.

I was very happy with this setup until just a few days ago, when I made an annoying discovery: Moving Images are very difficult to back up. In fact, the only way I ultimately managed to get everything automatically backed up was to use a tool not from Google, but from Microsoft.

The lost 110 photographs

I wouldn’t honestly have even noticed there was a problem in the first place except that I realized that Backup & Sync failed for exactly 110 files—on all of my machines. macOS, Windows, whatever, didn’t matter, those 110 files wouldn’t download. I could click “Retry All,” I could reinstall Backup & Sync, I could even utterly remove all the downloaded data and retry from absolute scratch, but those 110 files refused to budge. Google is Google, so there was no way for me to really reach out and get genuine tech support,1 but I did poke through their forums. And promptly felt my heart drop as I found three things very quickly:

  1. I was hardly the only one with this issue.
  2. The Google Drive team would move posts on this topic to the Google Photos forums, and the Google Photos team would move them to the Google Drive forums, because each team generally said it was the other’s problem. As far as I could tell, no matter which forum ultimately ended up being the thread’s home, nothing was resolved (see e.g. this thread, which ended up in the Drive forum).
  3. Many of the affected users mentioned Pixel phones.

This caused me to look at whether there was a pattern to what wasn’t getting downloaded, and I spotted the issue instantly: all 110 files started with MVIMG, the prefix for Moving Images. At that point, I found that there had been topics going back months about Moving Images not syncing properly (e.g. this post from early January). But the good news was that multiple people were saying that newer Moving Images were backing up properly, and it was trivial for me to verify that, indeed, more recent Moving Images I’d taken had downloaded, and some spot-checks showed happy little JPEGs all right where I wanted them to be on my local disk.

Okay, I thought to myself. That stinks, but it’s just those 110 photos; new ones are downloading just fine. So, worst-case, you download 110 photos by hand. Not the end of the world.

I went to sleep and didn’t think more about it.

The “moving” part of Moving Images is optional

It wasn’t until the next morning that I realized something was wrong. When I’d spot-checked more recent Moving Images to verify they had backed up, I of course didn’t actually check the “Moving” part of the Moving Image; while Moving Images are technically JPEGs, the video is stored in such a way that nothing I’ve got can (currently) see it. That didn’t faze me too much, mind—chances were overwhelmingly high that someone else would reverse-engineer the format, and failing that, the chance the thing was just an MPEG concatenated to, or stored inside, a JPEG was extremely high. That’s well inside the realm of things I’ve reverse-engineered in the past. But it did mean that I hadn’t explicitly verified whether a video stream was present.

Over breakfast, a little detail I’d missed finally registered: the files were just too damn small. The Pixel 2 has a 12 megapixel camera. Photos it takes, even with really good compression, really ought to be at least a couple megabytes by themselves; throw in video, and they should be at least 6-10 MB. Yet every file I was looking at was, tops, in the 4 to 5 MB range. That was simply insufficient to store both a high-resolution photo, and a video stream. Something was up.

I picked one of the Moving Images at random. On my Pixel 2, and on the Google Photos website, it showed up as 6.4 MB; my local copy was only 3.4 MB. Another Moving Image showed the same pattern: 7.2 MB on Photos and on my phone, but only 3.7 MB locally. Indeed, a quick sanity check seemed to reveal that all the Moving Images had suffered the same fate. And it wasn’t local to just the official Backup & Sync tool, either: InSync and rclone both showed the exact same behavior, too. Yet downloading the pictures manually from the Google Photos website gave the original, larger image. The only conclusion I could reach: the Google Drive service itself was stripping out the Moving part of the Moving Image.2

API? What API?

My first thought was I’d just write my own backup client. After all, while the Drive integration was nice, all I really wanted was automatic offsite backup. While writing something myself wasn’t quite my first pick, I didn’t anticipate it’d be that hard, and since I could download the full, untrimmed files from the Photos website, I knew the raw files existed; it was just a matter of using the proper Google Photos API.

Except…well, there is no Photos API, as far as I can tell. The Picasa Web Albums API has been deprecated since Picasa’s sunset in 2016, and Google doesn’t list a Photos API anywhere on its developer portal. In other words, the Drive API seemed to be the only official way to go. But I knew from InSync and rclone that the Drive API was exactly where the problem lay in the first place.

Okay, back to the drawing board.

Backup backup options

The second idea I had was to try another photo synchronization service. The raw data was obviously on the phone; I just needed something that could get it off. My first stop was Dropbox: I’d used it for years previously, I knew they had a nice Linux client, and I still used it actively.

Dropbox completely failed here, on two levels: first, it suffered the same trimming issue Google Photos did, so in a narrow sense, it obviously didn’t solve my problem. No biggie.

But Dropbox also failed because it has become downright slimy when it comes to letting you downgrade your account. When I was in Dropbox, I realized I’d fallen below the storage threshold for a free account, so I decided to cancel my paid membership. Dropbox made this incredibly difficult: first, when you click on “Change Plan,” your only option is to upgrade; there is no way to downgrade. You instead have to scroll to the very bottom of the window and click a tiny “Cancel” link. After that, you then have to choose to cancel three or four more times, being interrupted to be told why leaving’s a bad idea on screens where the default button keeps alternating between the “continuing closing my account” option and the “haha no actually I totally want to keep my account, thank you for asking” choice. It took me a couple of tries before I finally extricated myself. Never again, Dropbox. If you have to play that dirty to keep customers, then I’m definitely not sending any business your way.

My next thought was to see if someone had written a photo uploader for Upspin, but they haven’t, and that’s considerably more time than I’ve got right now, so that was it for that idea. I also thought about using Perkeep, since that does have an Android photo uploader, but my Perkeep installation is behind my firewall, and AT&T’s modem prevents my old OpenIKED-based VPN setup from working, so that route was also out.

The final tool I reached for before giving up was Microsoft OneDrive, and I was pleasantly surprised to find that OneDrive just worked. As far as I can tell, OneDrive uploads the unaltered original files, verbatim; if I copy the raw file off my phone via USB, the hashes match.

That said, while I have had very good experiences with OneDrive in the past, simply moving to OneDrive isn’t really an option for me right now: my family all heavily use Google Photos, and we make extensive use of shared albums. Getting everyone moved onto a new service just isn’t feasible, so I was going to have to find a way to make both OneDrive and Google Photos play together somehow.

Time for a short shell script.

The “solution”

I ended up putting together a process that is very gross, but does work: first, I have both Google Drive (via rclone) and OneDrive (via the excellent open-source onedrive client) syncing locally. I create a copy of the Google Photos folder structure in a different location, and then hardlink all of the photos from the InSync folder to the copy. Next, I look for any photos in the copy whose names start with MVIMG_. For each photo I find, I look for a corresponding, larger file in the Microsoft OneDrive camera roll, and, if I find one, move that image over to the new folder structure in place of the Google Drive one.

It’s not ideal, and the resulting Ruby script is not exactly the best code I’ve ever written, but it does work.
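The selection step at the heart of that script can be sketched as a pure function over file listings; this is in JavaScript rather than Ruby, and the names and sizes are invented for illustration:

```javascript
// Given the Google Drive mirror and the OneDrive camera roll (as
// filename -> size-in-bytes maps), pick the files to take from
// OneDrive: any MVIMG_* file whose OneDrive copy is strictly larger
// than the (video-stripped) Drive copy.
function pickReplacements(driveFiles, onedriveFiles) {
  const replacements = [];
  for (const [name, driveSize] of Object.entries(driveFiles)) {
    if (!name.startsWith('MVIMG_')) continue;
    const onedriveSize = onedriveFiles[name];
    if (onedriveSize !== undefined && onedriveSize > driveSize) {
      replacements.push(name);
    }
  }
  return replacements;
}
```

Everything the function selects gets moved into the merged folder structure in place of its hardlinked Drive counterpart; everything else keeps the Drive copy.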

Moving forward

Currently, I’m in an unhappy place: I’m generally still using Google Photos, but I’ve also got camera shots going to OneDrive, and I have a gross Ruby script that tries to sanitize this mess. Further, I’m not actually fully confident that these larger files do in fact have the video information I need; I’ll need to learn more about the JPEG file format to figure out if my hunch is correct—and if so, to figure out how to extract the data.

Meanwhile, I’m going to hope that Google either just makes an API for doing this, or otherwise, fixes the Drive API to allow fetching the original files. But at least I don’t have to worry about losing any raw data in the meantime.

  1. This is, strictly speaking, in my particular case, a lie; I know enough people at Google that I can usually just play a game of telephone until I find someone who both works on a relevant team and cares enough to help resolve my problem. But a) normal people cannot do this, and b) this actually was not helpful this time around. ↩︎

  2. To be clear here, it’s possible that’s not quite what’s happening; it’s tricky for me to tell, since I haven’t yet reverse-engineered the file, and Google hasn’t (as far as I can tell) documented what they’re doing. But Photos/Drive editing the file between my phone and my machine means regardless that it’s not trustworthy as a backup option. ↩︎

Commit SHAs as dates

I’ve been going through a pile of old bitquabit posts. While many of them hold up over time, the more technical ones frequently don’t: even when I was lucky and happened to get every technical detail right, and every technical recommendation I threw out held up over time (hint: this basically never happens), they were written for a time that, usually, has passed. Best practices for Mercurial in 2008 are very much not best practices now. But it’s a bit tricky: whether something I wrote is genuinely out-of-date has less to do with how much raw time has passed than with how much churn in the project has happened.

To that end, I was happy to see that some of the blogs I follow have started using Git commit SHAs to date their posts, alongside the calendrical date—serving as a kind of vector clock for the passionate. If you’re writing technical posts for an open-source project, this seems ideal to me: for casual observers, they can go with the calendrical date, and for people deeply involved in that arena or project, they can instead key off what has happened since the commit in question.

I’m not going to retrofit all my old posts, but it’s something I’ll keep in mind going forward.

Automating Hugo Deployments with Bitbucket Pipelines

As I mentioned in a recent post, I manage my blog using a static site generator. While this is great to a point—static site generators can handle effectively infinite traffic, they’re stupidly cheap to run, and I can use whatever editor I feel like—the downside is that I lose tons of features I used to have with dynamic blog engines. For example, while it’s almost true that I can use any editor I want, I don’t have a web-hosted editor like I would in WordPress or MovableType, and I likewise can’t trivially add any sort of dynamic content. Most of what I lose I can live without, but one that is genuinely annoying, and which has even bitten me in the past, is that I can’t publish without being on a computer that has both my SSH keys, and the publishing toolchain installed. Not only is that inconvenient; it means that publishing output can vary depending on which machine I use for a given publishing run.1

There’s a pretty easy fix for that: add continuous deployment. If it’s good enough for real software, it’s good enough for a personal blog. I can set up a single, consistent deployment environment on some server, drive all the deploys through that, and call it a day. The problems here being that a) setting up a continuous integration server is annoying, and b) I am lazy. There are cloud-hosted CI servers, but most of them either are overly complex, or are too expensive for me to justify using for my personal blog.

Enter Bitbucket. I’m already using them, since they’re by far and away the best Mercurial hosting game in town these days, and they recently2 added a new feature called Bitbucket Pipelines that fits all my requirements: cloud-hosted, free, easy-to-use, cheap, and it didn’t cost anything.3

And I’m glad I looked, because getting everything running turned out to be stupidly easy.

Step one: write the Dockerfile

Bitbucket Pipelines wants to base your deployment on a Docker image, so I had to write one. Thankfully, it’s so easy to make Docker images these days that pretty much everyone is making them—even when there is no conceivable reason why they should. So let’s set one up.

To deploy my blog, I need at least four things: Hugo, Pygments, rsync, and SSH. It took me a couple tries to get the Dockerfile just right (mostly because I straight-up forgot rsync and SSH on the first go), but the result is literally five lines, total:

FROM alpine:3.6

RUN apk add --no-cache bash git go libc-dev python py2-pip rsync openssh-client
RUN pip install pygments
RUN go get -u github.com/gohugoio/hugo

About the only thing remotely interesting here is that I’m using Alpine Linux, which I selected based on it seemed to be what the cool kids were using these days and it was one of the smallest base Docker images I could find. I’m not honestly sure if bash is needed (I suspect /bin/sh would’ve been just fine), but I originally wrote my deployment script for bash, and I’m too lazy to figure out if I used any bashisms, so let’s just toss that in there anyway. What’s a paltry 34 MB between friends?

Tons of places host Docker images for free these days, and Bitbucket can use any of them; I kept it simple and pushed it to my Docker Hub account.4

Step two: write the build script

I actually already had a build script,5 so all I really had to do was tweak it slightly to be run on something other than my personal machine. The result’s genuinely not interesting, but for completeness, the functional part of it looks like this:


# Normal boilerplate (see e.g. https://sipb.mit.edu/doc/safe-shell/)
set -euo pipefail

# Add $GOPATH to the path so Hugo will be present
export PATH=$(go env GOPATH)/bin:$PATH
hugo --cleanDestinationDir
rsync -av --delete public/ publisher@bitquabit.com:/var/www/blag/

Again, nothing interesting here. We’re at exactly ten lines, and even that only because I added some comments and some blank lines for readability. I called this file build and stored it unceremoniously in the root of my blog repository.

Step three: test it…if you feel like it

Since we’re going to deploy files to a real server in an automated fashion, the next step is to test everything.

Or not. It’s your server; I’m not gonna tell you what to do.

Myself, I decided to half-ass it a bit. Pipelines just launches your Docker image, copies your project into the container, sets your project to be the current directory, and begins running your script. I can do that:

$ docker run -it --volume=C:/Users/b/src/blag:/blag --entrypoint=/bin/bash bpollack/blag-builder:latest
$ cd /blag
$ ./build

The first line says to run a Docker container we built interactively (-i) on my terminal (-t), mount the Windows directory C:\Users\b\src\blag at /blag in the container, and then launch bash once the container is ready. In the next two lines, I demonstrate my amazing CS skills to change to the appropriate directory and run the script, proving that, even in this advanced day and age, I can still play the part of a computer.

This of course failed at the push step due to SSH keys not being set up (more on that in a second!), but otherwise seemed to work fine, so it’s good enough for me. Onwards!

Step four: create the pipeline

The pipeline spec is really simple: you give it a Docker image (which we just made), a condition of when to run (I’ll just have it run whenever there’s a new changeset, which is the default), and what steps to run when the condition is met (in our case, we need to run one single step, which is the build script we just wrote). So that file, in its entirety, is:

image: bpollack/blag-builder:latest

pipelines:
  default:
    - step:
        script:
          - ./build

Granted: being YAML, this looks like the result of an editor with broken indentation rules. But it’s at least pretty self-explanatory: we give it a Docker image (it defaults to using Docker Hub, which is great, because so did we), we give it one pipeline, called default, and give it the sole job of running a one-line script that calls our real build script, which we wrote together in the previous heading after much struggle. Commit this as a file called bitbucket-pipelines.yml in the root of your repository and push.

Step five: add relevant SSH keys

Congratulations! If you did everything perfectly at this point, Bitbucket will create your pipeline, run the build, and it will fail!…because you don’t allow random people to push stuff to your server over SSH.6 Fair enough. For reasons I’m not honestly entirely clear about, Bitbucket won’t let you specify SSH keys to use for Pipelines until at least one pipeline exists. But now that we’ve got a pipeline—it’s the one that just failed—you’re good.

In your repository, click on the Settings tab, and then, under the Pipelines heading, there’s an entry called SSH Keys. Still with me? Good. These are SSH keys that will be loaded into your Docker container right before your script runs, and which will be used to push code to your server. I recommend following their advice, generating a key with them, and then adding that key to the ~/.ssh/authorized_keys file in the appropriate user account. You’ll also need to tell it what servers you’ll be using these keys with so that Bitbucket will detect if your server gets swapped out and can avoid deploying your precious secrets to some nefarious machine.
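If you’d rather generate the pair yourself instead of letting Bitbucket do it, the mechanics look roughly like this. This is a sketch only: the key filename and comment are made up, and the second half runs on the server as whatever deploy user you’ve set up.

```shell
# Sketch only: generate a dedicated deploy key (Bitbucket's UI can also
# generate the pair for you); the filename and comment are illustrative.
ssh-keygen -t ed25519 -N '' -f deploy_key -C 'bitbucket-pipelines-deploy'

# Then, on the server, as the deploy user: authorize the public half.
umask 077
mkdir -p ~/.ssh
cat deploy_key.pub >> ~/.ssh/authorized_keys
```

Either way, the private half lives only in Bitbucket’s Pipelines settings, never in your repository.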

(Incidentally, I recommend using those Bitbucket keys only with a heavily locked-down account that’s dedicated purely to handling the deploy, but how to do that is a bit outside the scope of this particular post.)

Step six: you were actually done at step five

That’s it; we’re done. You do need to either re-run the pipeline manually at this point or push a dummy changeset to make sure, but everything should honestly Just Work™.

That’s really it; a hair over twenty lines of code gets you free continuous delivery. You can get fancier at this point if you’d like (I’m probably going to make sure the pipeline runs only when certain bookmarks are moved, rather than on every push, for example), but those are the fundamentals. Three short files, each ten lines or less.

  1. I briefly had what I guess could qualify as an outage when I accidentally ran a deploy on a machine that didn’t have Pygments installed—which promptly deleted every single code snippet on the site. Oops. ↩︎

  2. Relatively speaking; the feature went into beta in March 2016. ↩︎

  3. It’s not free-free, but you get 50 minutes of build time with the free account, and building my blog with Pipelines takes about 16 to 25 seconds, so I figure I’ll be fine for a while. ↩︎

  4. I won’t stop you from using this image, but I really discourage you from doing so; I make zero guarantees I won’t do horrible things to it in the future. ↩︎

  5. Two, actually—one for Windows and one for Unix—but since the Windows Subsystem for Linux has stabilized, all the Windows one does is call the Unix one. ↩︎

  6. I sincerely hope. ↩︎

The Paradox of Apple Watch

When the Apple Watch first came out, my initial reaction was basically disgust. Everywhere I looked, I saw people already Krazy Glued to their phones, missing the world around them to live instead in the small mini-Matrix in their pocket. Now, Apple was proposing to add additional distractions right on our wrist, making it even easier to ignore real life and stay focused on a screen instead. Not only was the Apple Watch not for me; it was a sad commentary on how tech was ruining our lives.

Yet I kept seeing more and more friends of mine falling victim to the Apple Watch. They insisted it was actually great, that I was the crazy one, that it was the next revolution in tech, that they loved how it kept them in touch with everyone even more easily, etc., etc., etc. I’ve heard this song before, and while I doubted I’d agree, it became equally obvious that the Apple Watch wasn’t going anywhere. In the interest of making sure I could stay not just with it, but also hip, I bought one a few weeks ago. I figured I’d play with it for a couple weeks and return it, getting a nice blog post out of it about how I was right and the Apple Watch made my life worse.

But what I’ve instead found is something else: properly used, at least for me, the Apple Watch isn’t yet another distraction. Instead, it can allow me to stay informed without constantly pulling me out of the moment. It’s actually freed me to leave my desk much more easily, without succumbing to staring at my phone instead. In other words, it’s had the exact opposite effect I anticipated.

The Problem with iPhone Notifications

Here’s my basic problem: I’m a manager. I have twelve direct reports spread across four disparate projects, plus I also provide management support to our Infrastructure project—you know, the one project at Khan Academy where even we have alerting and chatbots and whatnot to let you know when things have exploded. This means I have meetings constantly, and I’m pinged on Slack constantly, and I get an obscene volume of email. And each and every one of these constantly wants your attention, by default sending tons of notifications basically all over the place. Phone, computer, tablet, cyborg sitting next to you muttering about killing all humans, everywhere.

Some of these distractions I can easily disable while still doing my job. For example, since emails rarely require an immediate response, I turned off mail notifications completely, and only bother checking messages every hour or so. That’s socially acceptable, and keeps me available while also letting me get work done. I likewise killed notifications from tools like Trello, OneNote, Asana, and anything else that almost certainly could wait for a regularly scheduled check-in.

But Slack and meetings are trickier: while many Slack notifications can genuinely wait, many can’t, so I do need to actually read the notifications and make a decision on whether to respond. (I actually just ranted about this in detail if you’re bored.) My meetings likewise frequently shift radically during the day, so the fact I had been clear at 11 doesn’t mean I still am, nor does the fact I originally had an interview at 2 mean I still do.

I thus fell into this pattern where I’d get a buzz from Slack, take my phone out, read the alert, realize I had a pile of unread messages in some room or other, read through those, get distracted paging in context for the conversation, remember to recheck my calendar for any meeting changes, put the phone away, forget what I was doing, and then repeat. My spouse grumpily noticed that even on date nights, even when I was trying to stay in the moment and wasn’t honestly thinking about work, even when my phone was in Do Not Disturb mode and couldn’t have buzzed, I’d still sometimes mechanically take my phone out, look at the screen, and put it right back—just because I was so used to doing that motion during the day that it had become a habitual reflex.1

In this environment, adding the Watch seemed like a bad idea. I’d already cut down my notifications as far as I could; putting them on my wrist seemed like it’d make an existing problem even worse.

So I was quite surprised when exactly the opposite happened.

Enter Apple Watch

Here’s the thing: the Watch can’t actually do all that much—at least not in the way a smartphone can. It ultimately really does three things very well, and everything else very poorly:

  1. It’s a great way to track my jogs. That’s not why I bought it, but it turns out it’s great at it, and I use this feature a lot.
  2. It is indeed very good at giving you notifications, usually along with a small handful of possible response actions, if applicable.
  3. It is also quite good at taking certain kinds of very quick voice commands—basically the same subset Siri already handles well on the iPhone.

That’s it. Doing anything other than these is generally somewhere between painful and a genuine farce. Yeah, Todoist and other task lists exist on the Watch, but they fit maybe two to three things on the screen at once; you’d have to be a masochist to enjoy it. There’s a similar story with note-taking apps, like OneNote: yes, the app exists, and it honestly does the best it can with voice entry, but that gets old really quickly. Tools like Maps and Yelp are so limited that I’m forced to wonder why anyone bothered in the first place. And trying to read something long-form like an email on the Watch…I mean, yes, you technically can, but you’d have to be really desperate. Indeed, any use that requires reading or generating a substantial amount of information is either impossible or so difficult that I avoid it at all costs.

And…that weirdly turns out to be perfect. Fine; I can’t avoid real-time Slack and calendar notifications and do my job effectively, so they’re just going to be part of my life for now. But when I get them on the Watch, I glance down, make a snap decision on whether it requires me to do anything, and then either go back to doing what I was doing immediately (the overwhelming majority of the time), or, if the notification does require an immediate response, I walk back to my actual computer to handle it appropriately. In mere days, my habit of pulling my iPhone out of my pocket basically evaporated. Not only that; because I already try very hard to separate my work and personal devices, and because I was now responding to anything long-form on my work PC rather than my phone, I basically obliterated all of my media grazing habits overnight.2

The actual impact has been obvious to me: my work velocity increased, my iPhone battery lasts disturbingly longer, and I find myself much better able to focus whether we’re talking 1:1s with coworkers, or personal time with friends and family. Plus, I can now actually take a nice midday walk without having to stop every two minutes to check my phone. It’s honestly been an incredible win.

Mindful(ish) Notifications

I’ve been making a very deliberate effort for the last six months to pursue what I’ve been calling mindful computing—basically, trying to use technologies and develop habits that discourage distractions and that encourage and reward getting onto a computing device to do some specific action, and then putting the device away when you’re done.

I cannot quite say that the Apple Watch fits cleanly into this rubric. Indeed, as I noted, notifications are both one of the things it does best, and the explicit reason I ended up keeping it—and I don’t know that anyone would argue that seeking out a distraction-making device is a good example of mindful computing as I defined it.

But I do think that, properly used, the Apple Watch can be mindful-ish. If you are in a situation where you genuinely cannot fully avoid having some form of distracting notifications and still be effective, the Watch, specifically due to its incredibly limited abilities, can actually be an amazing compromise.

It’s one of the few recent technology purchases where I can say with a straight face that it meaningfully improved my quality of life. And while it didn’t do so in a fundamental way, and it may not be for everyone, I am surprisingly happy that I ended up ignoring my initial judgment and taking the plunge.

  1. There’s a valid question here of why these are on my phone this way in the first place; after all, if I’m at my PC, I could put the notifications there. And in truth, when I am sitting at my desk, I usually put my phone into Do Not Disturb mode for this exact reason. But one of the nice things about being remote is I can frequently attend meetings while taking a walk, or read through some emails or documents in the nearby park—but if I do that, then I do in fact need all these notifications on my phone in case I need to switch up my plans/head back to the house/get back to my laptop. ↩︎

  2. The unexpectedly positive impact of suddenly not reading reddit, Twitter, and the like anymore is a great topic for another day. ↩︎

Why I Hate Slack and You Should Too

Yeah, that’s right: there’s finally something I feel so negatively about that I’m unsatisfied hating it all by myself; I want you to hate it, too. So let’s talk about why Slack is destroying your life, piece by piece, and why you should get rid of it immediately before its trail of destruction widens any further—in other words, while you still have time to stop the deluge of mindless addiction that it’s already staple-gunned to your life.

1. It encourages use for both time-sensitive and time-insensitive communication

A Long Thyme Agoe, in the Days Before Slack, I had three different ways of being contacted, and they served three very different purposes, with radically different interrupt priorities. I had emails, which could wait; I had phone calls, which couldn’t; and I had the company IRC server, which was usually where I went to waste time by sharing links to things that either made me get very angry or made me laugh hysterically.1 In this system, the important, time-sensitive thing can interrupt me, and everything else can’t. That’s great for productivity and great for my sanity, and the people were happy and things were good.

Slack totally just trashed everything. It’s email and phone calls and cat pictures, all rolled into one. So sometimes Slack notifications are totally not time-sensitive (@here Hey I need coloring books for my niece, any suggestions? also she’s afraid of animals clowns food people and dinosaurs and also allergic to paper kthxbye!), and sometimes they require an immediate action (@here Dr. Poison just showed up and tl;dr maybe run for it idk?)—and until I’ve read the message, I have absolutely no idea whether it deserves my immediate attention. That order’s backwards and it makes me feel bad because it is bad.

This is actually a whole thing in psychology: if you give a mouse food every time they push a lever, they’ll eventually only push it when they’re hungry, but if you only give them food sometimes when they push a lever, then the “reward uncertainty” will actually cause them to push the lever more often.2 And hey! Here we are, all checking Slack 23,598 times a minute for each notification, because who knows, maybe this one matters. It’s all the pain of Vegas with none of the reward and somehow we’re still hooked.

So unlike before, now I get interrupted constantly, and I have to break my flow to figure out whether getting interrupted was worthwhile, and for some reason this is supposed to enhance business productivity.

Right. Sure. You go on being you, Slack.

2. It cannot be sanely ignored

“Okay, pea-brain,” you mutter, “so just turn off Slack notifications when you need to focus for a while, and catch up later.”

I once thought as you did, but part of the reason you end up addicted to Slack is that catching up on what you’ve missed feels very similar to when you were back in college and were a day before the final and suddenly realized that your plan of not highlighting the book or taking notes all semester may’ve been a Bad Idea™. About the only way Slack bothers grouping information is by room3—and as anyone who’s been trapped in a heavily-used Slack system can tell you, the room names and descriptions are at best weak guidelines, so you can’t even necessarily prioritize what to catch up on even at that gross level of granularity.4 Nope: your only option is going to be to read the entire backlog, from start to finish, or else just accept that, at some distant point three months from now, you’re going to look like a complete idiot when you’re the only one who didn’t know that all employee blood was now going to be collected for occult purposes.5

Granted, this isn’t Slack’s fault per se, at least insofar as every chat system has this problem, but Slack’s attempt to become your One True Source of Everything, from scheduling to reminders to SharePoint replacement to company directory, means that a huge amount of information that previously would’ve been in emails ends up in Slack, and only in Slack. And that’s a very deliberate decision by Slack to make themselves utterly indispensable, so I feel very comfy screaming at them until I go hoarse.

3. It cannot be sanely organized

Okay fine, so you read through the whole backlog from your vacation, which took you barely even 70 hours, and have extracted the six actual to-do items from it, one of which involves something about pentagrams and goats that you’ll decipher later. Great. Mazel tov. Phase one complete.

Now what? Slack has no meaningful way to organize those six messages. There aren’t folders. There isn’t a meaningful “do later” pile. (There’s /remind, to be fair, but, as noted previously, that just generates more notifications, which we’re trying to avoid. Theoretically.) So you’re left with…what, exactly? Right-clicking on each individual message at the end of the chain, copying the link, and pasting that into some external to-do app? Which, of course, when you click back on the link, will require you to re-read at least some amount of unstructured backlog, including a bunch of unrelated garbage about reconfiguring CARP on the edge servers and something about epoll and multithreading and a panda birth video that just happens to be there, just to remind yourself what everyone said?

Welcome to hell. Population: all Slack users.

4. It’s proprietary and encourages lock-in

In an ideal world, I could circumvent a lot of these issues in any number of ways. For example, I’m still active in open-source sometimes, and the open-source equivalent of Slack is (usually) still IRC. But IRC, being a well-documented6 older system, has tons of different tools to extract data from it. If I want to be nerdy, I can yank individual messages from ERC straight into org mode, or write custom scripts for WeeChat, or use any of literally dozens of clients written in Ruby and Python and Io and Java and C# and thousands of other programming languages plus also JavaScript and do really bespoke things. And even if I don’t, the plethora of macOS and Windows clients means that an off-the-shelf or trivially customizable AppleScript or WSH solution is never far away.

But Slack is Slack, and Slack is Electron, and Electron is Chrome—Chrome surrounded by an unscriptable posterior that eats up 100 MB of RAM per channel, plus an extra 250 MB for each Giphy.7 And while I can almost script my way out of this hell, I really can’t. Not as a mortal end-user, anyway. To the extent I can do anything, I need to write directly against the Slack API, rather than using something commonplace like XMPP or IRC, so goodbye portability. And even if I’m willing and able to write against the proprietary API, a lot of the more interesting things you can do require being an organization admin, and require being enabled globally for the entire instance. So goodbye, personalized custom integration points, and hello, one-size-fits-zero webhooks. This is my life now.

5. Its version of Markdown is just broken

I’m going to use up an entire heading purely to say that making *foo* be bold and _foo_ be italic is covered in Leviticus 64:128 and explicitly punishable by stoning until death.

6. It encourages use for both business and personal applications

All this would be merely infuriating and drive me into a blind murderous rage if it were just something I dealt with at work, but oh no, now the fun groups I interact with are turning to Slack! That’s right: the same application and environment that makes a full-blown Dementor-style kiss with my attention span for work can now corner me in a back-alley when I just want to shoot the breeze with friends.

I glance at the Slack icon. I have nine unread messages. Neat. Are they from work? I should probably actually go read those and see which ones require I do something. Are they all the ex-employees of that one company I used to work for? It’s probably a bunch of political screaming about stochastically sentient Cheetos that somehow won the presidency, and I’m honestly a bit tired of reading about that at this point.8 But at any rate, I can’t know until I take my phone out and read the notification—and sometimes even then I can’t, since of course some of the people I talk to are on multiple Slack instances and have a habit of saying things like “@bmp did you look at this it’s really concerning?” which requires I actually load up the freaking client and find the instance and the message and finally learn to my utter horror that I shall never be given up, let down, or run around/deserted.

Give up and yield unto Cthulhu Slack, destroyer of focus

Stop using Slack. I hate it; you also should hate it. It’s distracting. It murders productivity. It destroys old tools. It exploits psychological needs in such a way that it kills your soul and hangs it up to dry over a lava pit, where the clothesline catches fire and your soul falls into the fire and somehow you’re not dead, just a zombie, forever, reading zombie notifications on your zombie iPhone and wondering whether “@here brains?” is a lunch invite or an insult until you read the backlog. Friends do not let friends use Slack. I have been utterly convincing and you should listen to me in my capacity as low-grade Internet celebrity and do what I say because mindlessly obeying authority is the right thing to do.

But realistically? We’re all still using Slack, because it’s there, and we have to, and it’s the best option according to our collective judgment, which I do have to point out may empirically be lacking at this point. So if we are stuck in Slack, then maybe, just maybe, we could start trying to restore Slack to a place where it’s genuinely for ephemeral ideas. Where it’s indeed the place for ad hoc conversations, but not a canonical store for their conclusions and action items. Where I don’t have to read the backlog when I come back from vacation, because anything actionable will at worst have been duplicated as an email or a Trello card or what have you. Where I can disable Slack notifications because I can know, with certainty, that any activity can wait until I’m back at my computer and actually want to spend time chatting on Slack.

In the meantime I’ll be right back because either the data center just exploded or someone posted a picture of a goat fainting and The Notification God must be placated.

  1. This function is now provided by reddit. ↩︎

  2. Aziz Ansari, Modern Romance (New York: Penguin Books, 2015), 59. Yeah, I could’ve given you a scientific paper, but this book is way less boring and made me stupidly happy I’m not in the dating pool anymore. ↩︎

  3. Slack honestly is trying to address this with threads, but the problem, which anyone who tried using a system like Wave or Zulip or something similar could tell you, is that the origami crane of organizing information neatly by topic runs basically head-on to the rabid bull of real-time chat and then everything falls apart, so these don’t actually get used effectively in practice. Hell, whether a conversation uses a thread or not in Slack in the first place—and whether a threaded conversation stays that way in Slack (thanks, “Also send to #channel” checkbox! may the fleas of a thousand camels infest your armpits!)—seems sufficiently random that I’d be comfy using it as the main entropy source for a digital slot machine. ↩︎

  4. They’re trying really hard to address this recently with concepts such as “All Unreads” and (very recently) “Important Messages,” but while these certainly make catching up go faster, they don’t actually resolve the issue unless you really trust how Slack’s deciding what’s important. Based on my experience, we’re very much not there yet. ↩︎

  5. Didn’t you see it? It was in #kitten-pics. You were @here-messaged, so that’s on you. Now roll up your sleeve and welcome in the Lord of Darkness, His Holiness Spirit Agnew. ↩︎

  6. I mean…as far as that goes, anyway. ↩︎

  7. I genuinely have no idea if this scales by channels, but since I’m in ten channels and wasting 1.2 GB, I’d honestly prefer to assume it’s by channel, rather than the alternative that Slack needs a gig of RAM just to run. Which it probably does. But let’s assume. ↩︎

  8. Not because they’re wrong, mind. I just can only handle so much ranting about a human/toupée hybrid before I start to zone out. ↩︎

JSON Feed with Hugo

Every couple of years months [checks wristwatch] weeks, we reinvent a file format for no particularly good reason. Don’t get me wrong; we come up with all kinds of reasons to justify what we’re doing—easier to read, better for the environment, It’s Got Electrolytes™—and sometimes, the new format does genuinely represent a meaningful or necessary improvement. But more often than not, we’re just reinventing things out of boredom and a nagging sense, deep down, that if we don’t keep changing everything constantly, normal people may grok that most of the reason programming is complicated and weird is because we put a lot of effort into making it that way.

So I was pretty psyched when JSON Feed came on the scene a couple weeks ago, because it’s pretty much the absolute rawest possible example of a file format that’s unrepentantly change for the sake of change. Literally every language I interact with has perfectly good tools, right in the standard library, for generating and consuming RSS and Atom. Until a few weeks ago, none had any tools for working with JSON Feed whatsoever because it didn’t even exist. But since, and I quote from the JSON Feed manifesto, “developers will often go out of their way to avoid XML,”1 JSON Feed is now a thing, and we’ve already entered the phase where every language I use has a pile of third-party libraries for the format, most of which will be unsupported going forward, and all of which have interesting quirks and bugs that no one fully understands yet. I thus figured it was high time to support JSON Feed on bitquabit.

There was unfortunately a caveat. Some time ago, I moved my blog over to Hugo, a static site generator, so that I wouldn’t have to spend time maintaining my own blog software. In general, that’s been brilliant, but whereas it’d have taken me about five minutes to add JSON Feed to my old blog, I had no idea how to add it to a Hugo site. The highest-ranked link on Google is just vague enough to make me think I should get it but not be able to, and I can say in retrospect that Hugo’s documentation on alternate output formats makes a ton of sense after you already know what’s going on—but not before.

So without further ado, here’s how you add JSON Feed to a Hugo site:

Add some magic to config.toml

We want to tell Hugo that there’s a thing called JSON Feed, which is a JSON file, and we want to assign it a file extension. That’s easy enough. In your config.toml, just slam the following lines at the end:

  [outputFormats.jsonfeed]
  mediaType = "application/json"
  baseName = "feed"
  isPlainText = true

mediaType is the file’s MIME type, baseName is just the name of the file template before the extension2, and isPlainText tells Hugo that it shouldn’t do any HTML-related shenanigans. Whatever you slap after the . in outputFormats at the beginning, combined with the media type, defines the expected file extension, so everything we just wrote applies to files that end with .jsonfeed.json. Putting everything together, we’ve now told Hugo that feed.jsonfeed.json files are JSON Feed templates. So far, so good.

Next up, we tell it that we would like it to generate a JSON Feed if one exists. If you already have a section in your config.toml labeled [outputs] (you don’t by default), you’ll need to alter it, but otherwise you can just add this at the end:

  [outputs]
  home = ["html", "jsonfeed", "rss"]

All that says is, “hey, when you’re generating my home page, in addition to HTML and RSS (which are defaults), also generate this "jsonfeed" thing,” which (conveniently) we just defined.

Add a template for the JSON Feed

We told Hugo that our JSON Feed templates would end in jsonfeed.json and that the base name would be feed, so go create a file called feed.jsonfeed.json in the root of your content/ directory and put this in it:

  {
    "version": "https://jsonfeed.org/version/1",
    "title": "{{ .Site.Title }}",
    "home_page_url": {{ .Permalink | jsonify }},
    "feed_url": {{ with .OutputFormats.Get "jsonfeed" -}}
      {{- .Permalink | jsonify -}}
    {{- end }},
    "items": [
      {{ range $index, $entry := first 15 .Data.Pages }}
      {{- if $index }}, {{ end }}
      {
        "id": {{ .Permalink | jsonify }},
        "url": {{ .Permalink | jsonify }},
        "title": {{ .Title | jsonify }},
        "date_published": {{ .Date.Format "2006-01-02T15:04:05Z07:00" | jsonify }},
        "content_html": {{ .Content | jsonify }}
      }
      {{- end }}
    ]
  }

Most of that’s boring if you’ve seen the JSON Feed format description, but a couple of things to point out:

  1. We’re programmatically grabbing the JSON Feed permalink, rather than hard-coding it. If you have multiple feeds on your site (e.g., one per category), that’ll help things work out.
  2. The {{ range $index, $entry := ... }} silliness is the only way in Go templates to handle fence posts. In this case, because JSON does not allow trailing commas, we need to prevent having an extra comma at the end, and the easiest way to do that is to inject a comma before every entry except the first. Caching the $index lets us easily do that (and taking advantage of 0 being falsy in Go templates makes the conditional short, too).
  3. Finally, the hyphens on some of the {{ ... }} injections delete preceding (if directly after the opening brace) and trailing (if directly before the closing brace) whitespace, which mostly isn’t programmatically necessary here, but keeps the JSON looking clean.
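The fence-post trick in point 2 isn’t specific to Go templates; reduced to plain shell (with invented post titles standing in for .Data.Pages), the same comma-before-every-entry-except-the-first pattern looks like this:

```shell
# JSON forbids a trailing comma, so instead of appending a comma after
# every entry, prepend one before every entry except the first.
items=""
index=0
for title in "first post" "second post" "third post"; do
  if [ "$index" -ne 0 ]; then
    items="$items, "   # comma only when this isn't the first entry
  fi
  items="$items\"$title\""
  index=$((index + 1))
done
printf '[%s]\n' "$items"
```

This prints ["first post", "second post", "third post"], with no stray comma before the closing bracket—exactly what the {{ if $index }} conditional buys us in the template.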

Add the <link> to your index page

The last step is to tell the world about your new feed. On your main index page, just add

  <link href="{{ with .OutputFormats.Get "jsonfeed" }}{{ .Permalink }}{{ end }}"
    rel="alternate" type="application/json" title="{{ .Site.Title }}" />

There shouldn’t be anything surprising there. We’re reusing the {{ with .OutputFormats.Get ... }} trick from earlier to avoid hard-coding the feed URL, and the rest is straightforward templating.

So there you have it: that’s all it takes to add JSON Feed to your Hugo blog. I look forward to the next entry, in which we can explore how to add YAML Feed, EDN Feed, and maybe some custom Microsoft-specific extensions to both of those as well.

  1. No one tell them what HTML is. I really do not want to see JHTML. At least, not more so than I already have it with React. ↩︎

  2. "index" would’ve been another fine choice, and in line with other Hugo templates; I just found "feed" clearer. ↩︎