Why I dread going to the Apple store#

My buddy, Scott McNulty, goes deep on something I’ve long been repelled by, namely the experience of trying to buy something in a crowded Apple store. Indeed, a couple of years ago, I said this on Twitter:

I hate buying stuff in Apple stores. “Hi. Can you take my money? No? Hi, can you take my money? In a minute? OK. Hi, can you take my money?”

Using Keyboard Maestro to create new Jekyll posts

January 19, 2013

As I mentioned recently in Up and running with Jekyll, I hadn’t given any thought whatever to how I was actually going to go about automating the creation of new posts. Tonight I decided I would tackle that and ended up going with a pure Keyboard Maestro solution. (I may eventually end up just copying the AppleScript solution I came up with in Blogging with TextMate and Chromium/Chrome—which would be pretty easy to modify to handle the YAML front matter stuff I now need because of Jekyll—but I’m going to stick with the KM approach for now and see how it goes.)

The image below shows the entire KM macro I built for creating new linked-list posts. (The macro for regular posts is similar, but much simpler, and easily derivable from the following discussion.) While most of this macro probably is pretty self-explanatory, certain parts of it definitely aren’t, and so beneath the image I explain my thinking around those elements.

Keyboard Maestro Jekyll new post

The first thing the macro does is invoke the If Then Else action to determine whether I modified my hot key trigger for the macro with “Q”. I chose “Q” simply because this key press is meant to indicate whether I want to quote something in the body of the linked-list post. If “Q” was pressed along with the hot key trigger then we ensure that the variable doQuote isn’t empty; if “Q” was not pressed, then I’m not quoting anything from the article to which I’m linking and doQuote should remain empty. We’ll test for this later.

Next we save to the variable Quote what’s currently on the clipboard, which will correspond to what we want to quote, assuming we want to quote something; if we don’t want to quote something, then we’ll never use this variable.
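In shell terms, the logic of these two steps is roughly the following (variable names come from the macro; the values are illustrative):

```shell
#!/bin/sh
# Sketch of the macro's quote logic: doQuote is non-empty only when the
# hot key trigger was modified with "Q"; Quote holds the clipboard text.
doQuote="yes"             # empty string when "Q" was not pressed
Quote="Some copied text"  # whatever was on the clipboard

if [ -n "$doQuote" ]; then
  echo "blockquote: $Quote"
else
  echo "no quote"
fi
```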

Because this macro is meant to get invoked from your browser while you’re on the page you want to link to, the next few steps simply copy the address of the current page and save it to the Link variable.

Once this is done we display a dialog that asks for both the slug and the title of the post. The Slug variable is put ahead of the Title variable so that the keyboard focus will be on the input field for the slug, as that will always need to be entered, whereas the title may be fine as it is. (You of course can change this around however you like, and can even automate entirely the creation of both; that said, I prefer to define both explicitly.)

The next couple of steps create the file for the new post and open it in the text editor you specify. You’ll notice that both of these steps require the full path to where you keep your blog source files; the file name itself is built up from ICU date/time tokens and the Slug variable we defined earlier.

The Open a File, Folder or Application action doesn’t actually work unless the file exists, and it seems KM doesn’t offer the ability to actually create a file (outside of simulating keystrokes, etc.). This limitation caused me to give the Write to a File action a go, and it turns out that this will create the specified file if it doesn’t exist, and the newly-created file doesn’t need to contain anything (kind of like the Unix touch command).
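What the Write to a File action accomplishes here is, in shell terms, roughly this (the blog path and slug below are placeholders, not the ones from my setup):

```shell
#!/bin/sh
# Sketch of the macro's file-creation step. BLOG_DIR and SLUG are
# placeholders; the file name follows Jekyll's YYYY-MM-DD-slug convention.
BLOG_DIR="$HOME/blog"
SLUG="my-new-post"
FILE="$BLOG_DIR/_posts/$(date +%Y-%m-%d)-$SLUG.md"

mkdir -p "$BLOG_DIR/_posts"
touch "$FILE"   # creates an empty file if it doesn't exist, like Write to a File
echo "$FILE"
```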

Finally, the macro uses the Insert Text by Pasting action to fill out the YAML front matter for the new post. We test whether the doQuote variable is empty; if it’s not empty then we include a blockquote for the text we copied before invoking the macro, and if it is empty then we don’t include that text.

You’ll notice that the external-url element gets the URL of the page to which we’re linking; this is the element I test against in all of my templates to determine whether I’m dealing with a linked-list post (though, yes, I could do it based off of category as well).

While you don’t have to have the date element in the YAML header (as the year/month/day is pulled from the file name itself), it’s necessary for keeping post order correct when multiple posts are published on the same day. Similarly, the slug element isn’t needed here either (because it too is defined by the file name), but I like to have it in here for all of my posts because it doesn’t hurt anything and could come in handy with future migrations.
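Put together, the front matter the macro pastes in looks something like this (a sketch: the title, link, and quote are placeholders; external-url, date, and slug are the elements discussed above, and your other field names may differ):

```shell
#!/bin/sh
# Sketch of the YAML front matter pasted into a new linked-list post.
# Everything below the field names is a placeholder value.
cat > "${FILE:-post.md}" <<EOF
---
title: "Some post title"
date: $(date "+%Y-%m-%d %H:%M")
slug: my-new-post
external-url: http://example.com/some-article
---

> Quoted text from the linked page (included only when doQuote is non-empty).
EOF
```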

The very last Set Variable action simply “resets” the doQuote variable; it seems its value persists between invocations of the macro and so sometimes this breaks the logic that’s based on whether it’s empty.


Obviously, instead of adding the quote/no-quote logic to the KM macro, you could just create two separate macros and call them with and without the “Q” key (or whatever it is you decide on), but I just preferred to have all of this in one macro.

Before the quote/no-quote stuff, I had it set up so that the ‘potential quote’ (i.e., whatever was on the clipboard before the macro got started) would be shown to you in the slug/title pop-up dialog, and if you didn’t want anything pasted in you could just delete whatever text was in this field. This, clearly, was not ideal, and I gave it no more thought once I was up and running with the If Then Else actions.

Finally, before settling on the flow shown in the image, I had an alternative version that worked by creating a new file from within your text editor of choice (using simulated keystrokes), enacting the “Save As…” dialog (using simulated keystrokes), getting to the proper directory via the Insert Text by Typing action and having KM type the full path (minus the file name), and then using the Insert Text by Pasting action to generate the proper file name using the variables previously discussed. This worked just fine, but it wasn’t very elegant, and required Pause actions to be inserted in a couple of places because the GUI couldn’t keep up with the macro.

Up and running with Jekyll

January 13, 2013

As most of you know I’ve been thinking for a very long time about getting this site off of WordPress and the photoblog off of Pixelpost, and getting both away from a VPS I manage. Once the decision to jump was made, the problem became one of migration, and it wasn’t easy: this site is over 10 years old and consists of ~3,000 posts, and the photoblog is a different animal entirely.

Add to that some anal-retentive obsessiveness and you get many marathon hacking binges during which you lose sight of all that’s good in the world and sometimes forget that you’re human and probably should get up and pee.

This write-up is meant to be kind of an overview of certain things I had to deal with during the migration, and how I came to decide on certain aspects of the system, etc. As with most of the more technical posts on this site, it’s documentation and reference for me as much as it is for anyone who might stumble across it.

Why Jekyll and not Octopress?

Octopress is a framework that sits on top of Jekyll, and so if the goal was for me to reduce complications and dependencies to a bare minimum, then it was a no-brainer that I would just have to bite the bullet and go with Jekyll–one less thing to worry about in the long run.

I knew it’d be a bit more work up front, but I figured it was probably worth it in the end, and after slogging fully through the migration I’m pretty sure I made the right decision.

(Given that Octopress is kind of just a wrapper around Jekyll, and that its underlying blog source files must conform to formats Jekyll requires, most of the earlier work I did in the WordPress→Octopress migration was easily ported over to the Jekyll installation.)

Why Jekyll and not Squarespace?

As I wrote and talked about quite a bit, I was pretty set on moving to Squarespace at one point, especially after being given early access to their Template Developer Kit, and the CEO’s ear.

Believe it or not, I got damn near everything up and running on Squarespace, designed the blog and photoblog using the TDK, and actually, for the past few months, was publishing to Squarespace (privately) everything I was publishing to WordPress and Pixelpost. I threw a lot of time at it.

All was well…until a couple of weeks ago when I logged in to my account and noticed that the permalinks for all of my posts were incorrect; the /year/month/ structure had been stripped, and all that was left was /slug. I played around with various settings, but could never get it back to how it had been since I got my initial import working.

The /year/month/title directive used to, correctly, substitute the slug for the title if a slug was present (my import file contained slugs for every post), but now it seemed that if a slug was present, the /year/month/title structure was being disregarded and only the slug was used in its place.

Anyway, I’m really not sure what went wrong, and didn’t spend too much time trying to figure it out–I just decided that if I was ever going to be truly happy with my setup, I needed as much control as possible, and at this point in time that meant Jekyll.

Why Amazon S3?

Why not? It is Amazon after all. Though before this I had never used any of their AWS products, I have the utmost confidence in their services, and had been hearing good things about hosting static sites on S3. I feel very comfortable having 10 years of my work sitting on their servers.

(You might remember that I considered pushing to Github Pages at one point, but had to abandon that idea when I realized there was no way to put a repository within a sub-directory of another repository, which I’d need to do if I wanted the photoblog to reside at /photos.)

Route 53

The move to S3 static hosting necessitated moving the DNS for hypertext.net to Amazon’s DNS service, Route 53. This mostly was a pretty simple transition: I signed up for Route 53, created a new zone for hypertext.net, pointed my registrar to the Route 53 nameservers, and deleted from my VPS the DNS stuff for hypertext.net.

The only real hang-up came when mapping the domain to the S3 bucket from which I planned to serve the site. Due to some terrible UI/UX on the Route 53 console, I wasted five hours of my life trying to resolve an issue that didn’t exist. If interested, you can read about the problem in this post I made on the Route 53 forum.

Redirecting justinblanton.com

Until a few days ago I wasn’t sure how I was going to handle the domain-wide redirects of justinblanton.com requests to corresponding hypertext.net requests. As explained in this earlier post, I was kind of resigned to just using the cheapest hosting provider I could find that could handle the kind of redirection I needed, but then I came across this comment from the AWS team on one of their blog posts, which says that the query string is redirected together with the domain. So, it seems I may be able to handle this entirely using S3 + Route 53, which is awesome. I won’t make that transition for a while still, but it’s good to know there are options, and that one of them keeps me entirely within the AWS ecosystem.

Syncing with S3

I’m not entirely sure how this element is going to play out. There are a number of options out there, the laziest and most inefficient of which is to just re-upload all the content each time I make the slightest change to any file.

This, of course, is beyond stupid (especially since most changes will affect only a small percentage of the total files during a rebuild of the blog), and something I’d never do unless there was no other option, but it’s nice to know that, in a pinch, a literal upload-and-overwrite is all that’s needed to update the content.


When doing all the local development I’d sometimes use Jekyll-s3 to push to an accessible S3 “dev” bucket I created. This worked well for a couple of weeks, but in the past day or so (since I deployed the site), it’s stopped working. It just hangs. Does nothing. Breaks completely. Instead of worrying too much about it, I just fell back on s3cmd (discussed below), and it’s proving to be a great option.

Before getting away from Jekyll-s3, I want to talk for a second about using it (when it works ;) to upload multiple blogs to the same S3 bucket. Jekyll-s3 won’t let you specify a subdirectory to which to push the files, and so I had to come up with another solution, which ended up being stupid simple.

In the photoblog’s Jekyll installation, you just have to tell Jekyll to write the files to /sub-dir, so that when these files are pushed to the S3 bucket they get pushed to the correct folder. To achieve this, I added these lines to the _config.yml file for my photoblog’s Jekyll installation:

baseurl: /photos
destination: _site/photos

Not for nothing, but it seems modifying pagination_dir in _config.yml does nothing. I thought about digging into the code, but ultimately decided to just prepend the pagination links (between photos and between pages) with /photos.


As mentioned, I previously got two Octopress installations up and running (blog and photoblog) and worked out a way to push them to the same S3 bucket. This was accomplished using s3cmd.

I ended up doing something very similar for my two Jekyll blogs. I created a Rakefile for both blogs, and so to sync each of them to my hypertext.net bucket, I simply run rake sync from within their respective installations. The Rakefile for the main blog looks like this:

task :default => :sync
desc "Sync with S3"
task :sync do
    sh "s3cmd sync _site/* s3://hypertext.net/"
end

(The photoblog’s Rakefile is the same, except for the addition of /photos/, as appropriate.)

When I have time I’ll probably at least look into other solutions (e.g., S3Sync, rsync + FUSE (to mount S3 locally), etc.), but so far this s3cmd method seems to be working absolutely beautifully.


The design

Clearly, and as usual, I was going for a very minimal look. If I feel I can get away with making it even more minimal, I probably will. There still are a ton of little design things all over the site that I want to refine and tweak, but I think the general aesthetic is mostly set at this point.

If the colors look familiar, it’s because they’re from iA Writer. You might remember a post I wrote on how to make any app look like iA Writer–I pulled the color scheme from that post.

For what it’s worth I did all development locally and against the Chrome dev channel builds. After I was satisfied with that I played around in the other major WebKit browser, Safari. I figured this covered the majority of my traffic (including iOS and Android clients). I gave no mind at all to IE and Firefox; at the moment I just don’t care if something breaks in those browsers. I likely will never worry about IE (life’s too short), but might make some Firefox-specific changes at some point.


Feeds

I debated very seriously getting away from FeedBurner, but ultimately decided to stick with it, if for no reason other than that I couldn’t come up with a great reason to leave it. I have quite a few subscribers that use FeedBurner’s email feature, and I didn’t want to disrupt that.

I can’t remember exactly why now (and if I gave it any real thought I’d probably realize there’s no longer a purpose for it), but for the past ~10 years(!) I’ve had my feed at /syndicate, and when I started using FeedBurner however many years ago, I set up a temporary redirect to the FeedBurner feed, like so:

Redirect temp /syndicate http://feeds.feedburner.com/jblanton

Initially, I thought I wouldn’t be able to replicate this with Amazon S3, but after futzing with their new redirect feature I came up with the following rule, which does exactly what my old redirect did:
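Expressed in S3’s RoutingRules XML, a rule of roughly the following shape produces that behavior (this is a sketch reconstructed from the format, not necessarily the exact rule used here; the 302 mirrors the old temporary redirect):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>syndicate</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <HostName>feeds.feedburner.com</HostName>
      <ReplaceKeyPrefixWith>jblanton</ReplaceKeyPrefixWith>
      <HttpRedirectCode>302</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```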


(I do the same thing for the photoblog feed, but with the addition of /photos, as appropriate.)


Migrating the photoblog

Unfortunately, there is just no practical way for me to automate the migration from Pixelpost. Accordingly, I’m having to do it manually, and I’m nowhere near done with it.

I went ahead and migrated the 10 most recent posts, and then created an eleventh post, which just tells the reader that the migration isn’t complete; that will live as the first post (in time) until I’m done with the migration.

I used wget to pull down the entire photoblog and structure, and will use those files, together with accessing the database via Sequel Pro, to build the photoblog in Jekyll as time permits.

What’s left to do?

I’d say the migration is about 90% complete at this point. What remains mostly are things that only I know or care about.

The biggest thing probably is just getting the remainder of the photoblog posts into the new system. That’s going to take some time.

Creating new posts

Honestly, I just haven’t put much time into this just yet, but trust that when I do I’ll come up with something that will automate to the extent possible the creation and publication of new posts, and I’ll of course write it up here.

Search

Currently, search of the site has been outsourced to Google. Before this switch to Jekyll I used WordPress’s native search for the blog, and offered no search for the photoblog. Since farting around with Google site search I’ve realized (kind of accidentally) that you can limit the scope of the search via the value you use for the q variable in the search form. This means that I can offer search functionality on the photoblog and have the results limited to only those pages that exist within /photos. Pretty cool.
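For example, the search form can seed q with a site: operator scoped to /photos (a sketch; the query term is a placeholder):

```shell
#!/bin/sh
# Sketch: scoping a Google search to the photoblog by prefixing the
# q value with a site: operator. QUERY is a placeholder search term.
QUERY="sunset"
URL="https://www.google.com/search?q=site:hypertext.net/photos+${QUERY}"
echo "$URL"
```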

All of that said, I think I’ll probably look into offering some sort of ‘local’ search, and get away from having to rely on Google. How this might work with a completely static site, I’m not entirely sure, and frankly, it might not be possible.

Relatedly, there are still some things I need to work out with sitemap.xml stuff given that I’m essentially running two completely separate blogs on one domain.

UTF encoding issues

You’re still reading?! I could go on and on about the UTF encoding issues I had when converting the XML file of my WordPress export to the individual post files, but seriously, who the hell wants to read about that? (I discussed it briefly here.) Anyway, this is something I’m going to have to revisit when I can catch my breath.

OMG, go to bed!

As ever, please feel free to email me if you’ve questions about anything I touched on in this post. I suspect many of you will be making a similar move soon, and I’m excited to see how far we can push this stuff.

The books I read in 2012

December 30, 2012

Below is a list of the books I managed to get through in 2012 (see also 2010 and 2011). I think we can all agree that this is a pretty crazy list by any standard, and probably even more so given my hectic work schedule; relatedly, most of these were consumed between 1:00 AM and 3:00 AM.

It’s all non-fiction of course, and my usual mix of science, technology, psychology, and evolution. The one outlier here probably is my newfound obsession with Howard Hughes. In the last month alone I’ve torn through four books about the man, and am just starting my fifth. I can’t get enough.





Supr Slim wallet#

I make it a point to carry as few cards as possible, because, well, why the hell not? I carry just my license, a Simple card, and an AMEX credit card. That’s it. I don’t carry cash.

The last thing I want to do is increase the thickness of this stack any more than necessary, so when I came across the Kickstarter project for the Supr Slim wallet, I (along with 6,236 others) backed it immediately.

I received the wallet a few weeks ago, but used it for just a couple of days before reverting to my beloved Güs card case (“splitshot perforation black”, for those wondering).

To be clear, there was nothing wrong with the wallet. It was exactly as promised—a piece of elastic, closed off at the bottom. It’s constructed well (I mean, as far as a piece of elastic can be) and had no problem holding onto my admittedly short stack of plastic; I had no worries that a card would fall out.

The problem for me was the look and feel of it. I just didn’t like it. To me it looked kind of silly in person, and, if I’m being totally honest, I just didn’t like taking it out in public. Moreover, getting a single card out of it isn’t terribly easy, and I was working with just three cards; I imagine that the more cards you shove in there, the harder it is to get at the one you want.

Study finds epigenetics, not genetics, underlies homosexuality#

[S]ex-specific epi-marks, which normally do not pass between generations and are thus “erased,” can lead to homosexuality when they escape erasure and are transmitted from father to daughter or mother to son. [...]

The study solves the evolutionary riddle of homosexuality, finding that “sexually antagonistic” epi-marks, which normally protect parents from natural variation in sex hormone levels during fetal development, sometimes carryover across generations and cause homosexuality in opposite-sex offspring.

301 vs. rel="canonical"

December 09, 2012

Ever since transitioning this site early last year to hypertext.net from justinblanton.com, I’ve used an Apache virtual host configuration to effectively forward every request made to justinblanton.com to its corresponding hypertext.net URI, using the following rule:

RewriteRule ^(.*) http://hypertext.net$1 [R=301]

Easy-peasy. This has served me very well for a year and a half—requests to justinblanton.com are automatically converted into corresponding hypertext.net requests and the URI shown in the address bar is subsequently rewritten into the hypertext.net version.

Now that I’m on the verge of moving this site off a VPS and to another service (likely Squarespace, and not Octopress)–OK, fine, I’ve been on the verge for months, but I’m busy!–I’m having to give some thought as to how to approach the aforementioned domain-wide redirection.

I’ve been told that Squarespace can do the rel="canonical" thing pretty well, and that with multiple domains you can simply specify a default domain to which all requests will be redirected. This may be a perfectly fine solution, though I have some SEO concerns given that, essentially, two mirrored versions of my site will exist on the web.

Another option is to find the absolute cheapest hosting service I can that will let me do the kind of redirection I require, and use it for only that. I’d rather stay away from this method if only to keep the number of accounts/services I rely on to a minimum.

Another potential option (or so I thought) that has recently become available is Amazon’s Web Page Redirect. Just a couple of months ago, when I first began looking at Amazon S3 as an option for hosting Octopress files (see this piece I wrote about getting two Octopress installations to play nice with a single S3 bucket), there was no .htaccess-type support, which I thought was really odd.

Since then, they’ve announced their redirection feature, and it seems like it would give me exactly what I need—almost—and very likely for a price that would be much less than any hosting service, especially given that I wouldn’t actually be doing anything but redirection (i.e., no actual hosting). The issue, though, is that it seems there’s no way to “wildcard” the redirection—i.e., each of my thousands of posts would need an individual redirect rule. This, obviously, is a non-starter.

If you’ve any thoughts, I’d love to hear them.

Human evolution enters an exciting new phase#

Geneticist Joshua Akey:

We’ve gone from several hundred million people to seven billion in a blink of evolutionary time. That’s had a profound effect on structuring the variation present in our species. [...]

We have a repository of all this new variation for humanity to use as a substrate. In a way, we’re more evolvable now than at any time in our history.

David Solomon's quest to find the greatest headphones ever made#

Having been on this here Internet thing for two decades now, and over that time having read damn near everything of even remote interest to me, I think I can say confidently that this article—which weighs in at over 75,000 words—is the most epic and comprehensive of its kind.

Even if you aren’t into headphones, you have to respect the time and energy it took to assimilate this behemoth. Indeed, taking David at his word that he put at least “50 hours of critical listening time” into each headphone, then he spent a minimum of 2800 hours on this project…before he ever typed a single word. The mind boggles.

Relatedly, I was happy to see my current daily drivers—the JH Audio JH16 Pro custom IEMs—on the list, though he has me thinking I might want to “downgrade” to the JH13 Pro.