Short-distance drone delivery service#

It’s an approach that uses “uncontrolled airspace” and incremental purchases of cheap, standards-compliant pads/drones to roll itself out (very similar to the way the Internet was able to piggyback on the old telephone system). […]

Here’s how it would work in practice:

  • My brother left his iPhone at my house. I want to get it to him, but he lives 30 mi away (as the crow flies, 50 by driving).
  • I put it into a delivery container and put it on a small landing pad outside my home.
  • I order a drone on my phone and put the ID of the container into the order (I could just as easily use a drone I buy to do it P2P).
  • A drone arrives 10 minutes later, picks up the container automatically.
  • After a couple of hops, it arrives at my brother’s landing pad, where it drops off the container and alerts him with an e-mail/text.
  • Costs? Probably less than $0.25 per 10 mi. or so. So, about $0.75 in this instance. Time? An hour or so.

See also, Matternet.

(Via Kottke.)

We, the Web Kids#

We grew up with the Internet and on the Internet. This is what makes us different; this is what makes the crucial, although surprising from your point of view, difference: we do not ‘surf’ and the internet to us is not a ‘place’ or ‘virtual space’. The Internet to us is not something external to reality but a part of it: an invisible yet constantly present layer intertwined with the physical environment. We do not use the Internet, we live on the Internet and along it. If we were to tell our bildungsroman to you, the analog, we could say there was a natural Internet aspect to every single experience that has shaped us. We made friends and enemies online, we prepared cribs for tests online, we planned parties and studying sessions online, we fell in love and broke up online. The Web to us is not a technology which we had to learn and which we managed to get a grip of. The Web is a process, happening continuously and continuously transforming before our eyes; with us and through us. Technologies appear and then dissolve in the peripheries, websites are built, they bloom and then pass away, but the Web continues, because we are the Web; we, communicating with one another in a way that comes naturally to us, more intense and more efficient than ever before in the history of mankind. […]

We have learned to accept that instead of one answer we find many different ones, and out of these we can abstract the most likely version, disregarding the ones which do not seem credible. We select, we filter, we remember, and we are ready to swap the learned information for a new, better one, when it comes along.

Quick thoughts on the death of Google Reader

March 13, 2013

The announcement that Google Reader is being shuttered hit me right in the gut (Hitler too). I really don’t even know where to begin with my comments (and frankly I have very little time to write this, so please bear with what likely will be a disorganized, stream-of-consciousness post).

Some of you reading this surely will be asking “WTF is Google Reader?”, but I suspect many of you reacted to the news the same way I did: “Fuck me”.

To be clear, I don’t care about the site per se (though that wasn’t always the case–for example, check out this effusive post from 2007 about switching to Google Reader, for good), I care about the underlying service that’s become the synchronized hub for all of my online news consumption (and that nearly every feed reader on the planet talks to, often exclusively). Most “power” RSS users stopped using the site years ago, after Google gave Reader an API and a slew of OS X and iOS apps came out that offered fantastic experiences that far outpaced what was possible through the web site.

Apart from being a great (and crazy fast) read/unread synchronization tool, the thing that I loved most about Google Reader (the site) was its introduction of mark-as-read-on-scroll. We take it for granted now, but back then it was a revelation, and most of the best-of-breed apps have adopted similar functionality over the years, though many at a snail’s pace (notably, Reeder still doesn’t have this (W-T-F?), but I stopped using Reeder years ago in favor of Newsify and Mr. Reader, so no matter).

I think what I’m most disappointed about is that Google, as far as I know, never even entertained the idea of charging for the service, much less gave it a trial run. I’d pay good money for the service (and so would developers who have been making handsome livings off of apps that depend on Reader’s API), because it’s something I use, without fail, every single day of my life, and for the most part I really have no complaints. There definitely were some rough patches along its path to dominance, but it’s been pretty damn stable for a long time.

No doubt Twitter has eaten into RSS usage, and has had a role in the declining Reader engagement that Google cites…but not among power users, or, I’d bet, even most users. Twitter never could be an RSS replacement for me–an obsessive completionist–as I like to know exactly what remains in my queue, and I want to be able to jump into the aggregated mess on my own time, and without fear that I’m missing something because 24 hours went by and I wasn’t able to look at it. Plus, how quickly we forget the long fight for full-content feeds to become the norm, and now you’re trying to tell me that a 100-character title is enough? No thanks.

For many, this will be just the blow they need to give up on RSS for good, which sucks (especially for bloggers whose subscriber counts inform what they can charge for ad space), but of course this is great news for apps like the excellent Prismatic, and they know it.

Most major RSS apps will update well before the deadline to handle the slurping of feeds from disparate sources, instead of only from Google Reader (many already do), but the syncing problem likely will persist for some time to come (especially for the best apps), though there are some companies chomping at the bit to pick up where Google will leave off.

Keep in mind too that we’re just a few hours into this news–within a month there will be more services looking to fill this hole than any of us will want to deal with, but deal with it we will because we’re utterly addicted to information. Frankly, I don’t care who ends up winning this (potentially very lucrative) race to mass developer adoption and subsequent synchronization bliss, but I do hope the victor will allow us to pay for the service.

You know, actually, this may end up being the best thing to ever happen to RSS. Time will tell.

"Gesture" chair, from Steelcase#

The new, multiple devices we deploy throughout our work day allow us to flow between tasks, fluidly, and frequently. Gesture is the first chair designed to support our interactions with today’s technologies. It was inspired by the movement of the human body and created for the way we work today.

Blah, blah, blah…despite the marketing nonsense, this really is the first chair that has piqued my interest since picking up my beloved Herman Miller “Embody” a few years ago. Can’t wait to try it.

James Duncan Davidson reviews the Sony RX1#

On balance, he loves it, and for good reason. If you’re on the fence at all about the RX1, I encourage you to check out this review, because it touches fairly on all aspects of the camera—good and bad.

As some of you know, I picked up the RX1 at the end of 2012. After playing with one for just a few minutes (someone at work brought one in for me to check out) I immediately knew I wanted it. The urge caught me by surprise—a $2800, fixed-lens camera from Sony? Yep. I ordered one that day and put my Fujifilm X-Pro1 up for sale (though of course I still have all my Canon DSLR gear, and still shoot with it more than anything else).

The fact is, the camera feels impossibly solid in the hand (Sony?!), has wonderful controls, and takes incredible pictures. It’s a very compelling, fun package that demands that you reevaluate how you feel about “small” cameras. For the time being, this tiny wonder puts Sony in a league of its own.

My biggest hangup with the RX1 is psychological. Frankly, I just kind of feel like an idiot using it. It doesn’t have a viewfinder (optical or electronic) and so to frame shots you’re left to hold the camera in front of you and look at the (stunning) LCD display…like an octogenarian tourist trying to get a snap of the Mona Lisa. The process just feels a little silly to me after having spent so many years hiding my face behind large DSLR bodies and lenses. You definitely look like a fresh-out-of-Best-Buy amateur with this thing.

Yes, there is an EVF available, but at $450, it, like the rest of the accessories Sony has released for this camera, is shamefully expensive. That said, and despite the fact that it does the svelte look and feel of the camera no favors, I may end up buying one.

Anyway, enough of my blabbering—go read JDD’s review.

The evolution of emotion#

Stephen Asma:

After you spend time with wild animals in the primal ecosystem where our big brains first grew, you have to chuckle at the reigning view of the mind as a computer. […]

Computer ‘intelligence’ might be impressive, but it is an impersonation of biological intelligence. The ‘wet’ biological mind is embodied in the squishy, organic machinery of our emotional systems — where action-patterns are triggered when chemical cascades cross volumetric tipping points. […]

What is increasingly clear is that we need more scientists who are willing to bridge the chasm between the new brain science of emotions and the natural history of life on the African savanna. Limbic emotions gave our ancestors their world of friends and foes, their grasp of food and its fatal alternatives. These emotions also motivated much of the social bonding that spurred the sapiens’ great leap forward. If we are to understand ourselves, this is the wild territory we need to rediscover.

Down in the Sago mine#

You guys know my penchant for lost-at-sea stories, and this one, about the 12 miners trapped in a West Virginia mine a few years ago, is in that same vein. Only one survived.

It’s riveting:

Some men stick their noses into their lunch pails to take a breath of air from home. […] They sit with their backs to the walls, waiting for the sound of three dynamite blasts on the surface. Your training tells you to wait for that, then to start pounding on the roof bolts. You keep pounding until you hear five shots above, which means they have located you with seismic gear and are on their way. A sticker inside your hard hat tells you all that, and you have read it a thousand times. The crew listens. There is an occasional noise outside the curtain – mostly the falling of concrete blocks and other debris. Otherwise, it is dead quiet.

and devastating:

Tell all I’ll see them on the other side. I love you. It wasn’t bad. I just went to sleep. I love you.

The rise of .io domains for well-crafted web services#

Russell Beattie:

From the very first moment I heard of the .io TLD a few years ago, I thought it was absolutely fantastic. The geek in me just really responded to the idea of a domain name that ended in IO - the input/output connotation seemed like a perfect fit for web services.

I had the exact same reaction as Russell, and a few years ago I bought mine as fast as I could. I wasn’t quite sure what I was going to do with it at the time (I’ve since tied it to my Droplr account, and use it daily), but I just knew I had to have it.

Anyway, in an effort to support his theory that this TLD is for “techies with taste”, Russell’s post goes on to list—with thumbnails—191 .io-based sites/services. He’s right.

How a mechanical watch works#

As some of you know, I’m really into mechanical watches, and have spent an absurd amount of money chasing engineering perfection with these little machines you wear on your wrist. As far as I’m concerned, they’re the ultimate gadget.

Anyway, this video offers a wonderful description—using an oversized model—of how, exactly, mechanical watches store and translate energy into the movement of the hands, while maintaining accuracy. Though the video was created more than 60 years ago, the general principles are still very accurate with respect to the operation of modern mechanical watches.

How to generate a blog-wide word count in Jekyll

February 13, 2013

One of the “minor” tasks left on my to-do list since making the transition to Jekyll was to come up with a quick way to generate a blog-wide word count. This metric is just something I like to have handy (and I may end up putting it on the About page). (Some of you may remember that years ago I wrote a plugin for WordPress to do this very thing.)

Initially, I tried to tackle the problem from just the shell, and it is doable, but inaccurate. All of Jekyll’s blog posts exist in a single directory, and so the following does work:

wc -w * | tail -1 | cut -b -8

Obviously, this just pipes every blog post through the wc command. The problem though is that it doesn’t ignore the YAML front matter present in every post, thus adding to the count words that shouldn’t be included. Clearly, these extra words, especially over a very large site, can really skew your word count.

After that idea crashed and burned, I thought I could just come up with a regex that would grab the YAML headers, use grep or egrep to do the matching, and then pipe the inverse of the result into the wc command. I ran into a snag though after coming up with a regex, namely grep’s inability to recognize inline modifiers. Specifically, I needed to specify “single-line” mode (the (?s) modifier) so that the “.” operator would match any character, including newlines.

After banging my head against the wall with that for a while, I just decided to tackle the problem in Python, and was able to whip up a solution pretty quickly, despite my inexperience with the language. The following is what I came up with:

import os
import re

path = '/path/to/jekyll/posts/'
wordCount = 0

# Regex to match the YAML front matter and everything after it
# (non-greedy, so we stop at the closing "---" fence)
regex = re.compile("(?s)(---.*?---)(.*)")

# Iterate through all posts
for post in os.listdir(path):
    f = open(path + post, "r")
    result = re.match(regex, f.read())
    f.close()
    # Count words in everything after the YAML front matter
    wordCount += len(result.group(2).split())

print "{:,}".format(wordCount) + " words!"

It’s probably pretty self-explanatory, but if you have any questions (or have a way to maybe make it more efficient or elegant), please feel free to email me.

On my home machine—a mid-2010 MacBook Pro (2.66GHz Core i7)—this script takes about 0.15 seconds to jump through ~3000 posts and spit out the result. (For those curious, the result is 352,802 words.)
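For anyone on Python 3, roughly the same approach looks like the sketch below, with the front-matter stripping factored into a function. (The demo directory and file are stand-ins I’ve made up for illustration, not an actual _posts folder.)

```python
import os
import re
import tempfile

# Non-greedy match, so we stop at the *closing* front-matter fence
FRONT_MATTER = re.compile(r"(?s)\A---.*?---")

def count_words(directory):
    """Sum word counts across every post, ignoring YAML front matter."""
    total = 0
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name)) as f:
            body = FRONT_MATTER.sub("", f.read(), count=1)
            total += len(body.split())
    return total

# Tiny demo on a throwaway directory (a stand-in for Jekyll's _posts/)
posts = tempfile.mkdtemp()
with open(os.path.join(posts, "2013-02-13-demo.md"), "w") as f:
    f.write("---\ntitle: Demo\nlayout: post\n---\njust four little words")

print("{:,} words!".format(count_words(posts)))  # prints "4 words!"
```

Anchoring the pattern with \A and substituting only the first match keeps a stray “---” horizontal rule in a post body from eating half the file.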

Spiegelau IPA glass#

With input from two of the leading IPA brewers in the United States—Sam Calagione of Dogfish Head and Ken Grossman of Sierra Nevada—Spiegelau has created the new standard for IPA beer glassware. The Spiegelau IPA glass is designed to showcase varying aromatic profiles for the American ‘hop forward’ IPA beer, preserve a frothy head and volatiles, and maintain a comfortably wide opening for the drinker to “nose” the beer.

I drink almost nothing but IPAs these days (though last week in DC we went a little nuts with bottles of Deus (I loved it!)), and so these glasses are an auto-buy for me. Whether they have any effect whatever on the drinking experience remains to be seen, but I figure I have specific glasses for various other types of beer, so why shouldn’t my favorite kind of beer get the same treatment?

"Silicon Valley", a PBS documentary#

An eye-opening look at the birthplace of the modern technological era told by the people who shaped it, Silicon Valley is a fascinating reminder of how Robert Noyce and his team of trailblazers led the way in transforming California’s Santa Clara Valley into a worldwide hub of industry and innovation, and laid the bedrock for modern technology.

This was a real treat. If you’re into this sort of thing, then you’ll definitely want to check out The Idea Factory: Bell Labs and the Great Age of American Innovation, by Jon Gertner, which is one of just a handful of ★★★★★ books I read last year.

My friends troll me: A translation of select portions of "Using Keyboard Maestro to create new Jekyll posts"

February 10, 2013

One of my best friends recently sent me the following “translation” of an earlier post I wrote, and I thought some of you might enjoy it. I love him.

As I mentioned recently in Up and running with Jekyll, I hadn’t given any thought whatever to how I was actually going to go about automating the creation of new posts.

I spent at least 50 hours thinking about this, but couldn’t admit to that until I had a solution. Also, shameless plug for my previous post. Please read it.

Tonight I decided I would tackle that and ended up going with a pure Keyboard Maestro solution.

By “tonight” I mean 4AM. By “I decided” I mean I’ve been awake for 72 hours and I physically can’t sleep until this is done. Help. Me.

(I may eventually end up just copying the AppleScript solution I came up with in Blogging with TextMate and Chromium/Chrome—which would be pretty easy to modify to handle the YAML front matter stuff I now need because of Jekyll—but I’m going to stick with the KM approach for now and see how it goes.)

I have over 13,000 hours of my life tied up in all of this and you lazy-ass n00bs just have no idea what kind of gold I’m giving away here. You don’t even care. Now read that post I linked to because the least you could do is give me another damn page view. I earned it.

The image below shows the entire KM macro I built for creating new linked-list posts. (The macro for regular posts is similar, but much simpler, and easily derivable from the following discussion.) While most of this macro probably is pretty self-explanatory, certain parts of it definitely aren’t, and so beneath the image I explain my thinking around those elements.

Buckle up, idiots, because by “self-explanatory” I mean that you would never have put this together in a MILLION YEARS on your own. I did it with a wife, an insanely busy job, and two hungry-ass cats to deal with.

The first thing the macro does is invoke the If Then Else action to determine whether I modified my hot key trigger for the macro with “Q”…

I know you’re just going to skip all of this, but whatever, I’m writing it out anyway, like a boss.

This worked just fine, but it wasn’t very elegant, and required Pause actions to be inserted in a couple of places because the GUI couldn’t keep up with the macro.

I’ve spent the last 6 hours convincing myself that the GUI not being able to keep up isn’t my fault. I finally believe it and am going to sleep.

Patterns, a regex tool for the Mac#

Patterns is a simple yet powerful tool for working with regular expressions. Build great patterns quickly and effortlessly with syntax coloring and with matching and replacing occurring in real time.

"Memorex"#
Sourced from over forty hours of 80s commercials pulled from warped VHS tapes, Memorex is a deep exploration of nostalgia and the fading cultural values of an era of excess. It’s a re-contextualization of ads - cultural detritus, the lowest of the low - into something altogether more profound, humorous, and at times, even beautiful.

Digging up long forgotten memories for a generation who spent their formative years glued to the boob tube, Memorex is a veritable nostalgia nuke for children of the 80s. Endless beach parties, Saturday morning cartoons, claymation everything, sleek cars, sexy babes, toys you forgot existed, station idents, primitive computer animation, all your favorite sugary cereal mascots, and so much more.

Memorex, and its predecessor, Skinemax, are two of the best things this child of the ’80s and ’90s has ever watched on the internet. They’re absolutely, positively brilliant.


A nostalgic look back at a half remembered childhood growing up in the 80s and early 90s, Skinemax takes a close look at the culture of that era. The images that motivated, delighted, and terrified us on the silver screen, set to propulsive modern music that pines for a simpler time.

How to merge two sitemap.xml files

January 26, 2013

Since flipping the switch on the Jekyll transition a couple of weeks ago, one of the to-dos that has persisted is what to do about the fact that I now have two sitemap.xml files to contend with, the first from the regular blog, and the second from the photoblog. These obviously needed to be merged, and last night I whipped up the following Ruby code (explained below) to do just that.

desc "Merge two sitemap files"
task :merge do

    header, footer, content1, content2 = [], [], [], []

    # Read the first (more frequently updated) sitemap; keep its header and footer
    File.open( "/path/to/first/sitemap", 'r' ) do |f1|
        content1 = f1.readlines
        header = content1.slice!( 0..11 )
        footer = [ content1.slice!( -1 ) ]
    end
    File.delete( "/path/to/first/sitemap" )

    # Read the second sitemap; discard its header and footer
    File.open( "/path/to/second/sitemap", 'r' ) do |f2|
        content2 = f2.readlines
        content2.slice!( 0..11 )
        content2.slice!( -1 )
    end

    # Write the merged sitemap
    File.open( "/path/to/merged/sitemap", 'w' ) do |f3|
        f3.write( ( header + content1 + content2 + footer ).join )
    end

    puts "Sitemaps have been merged!"
end


Yeah, you could accomplish this in a million different ways using any number of languages, but I decided to go with Ruby (and Rake tasks) to keep with the Jekyll theme, and because I know nearly nothing about the language and thought it’d be fun. That said, if the above can be made any more efficient or elegant, please let me know.

As you can see, the code is rather simple, mainly because the structure of sitemap.xml files is rather simple, and so grabbing what we need from each isn’t too difficult.

This was written with two conditions in mind: 1) We have two sitemap.xml files being generated by two separate systems (be it Jekyll or whatever); and 2) The first sitemap.xml corresponds to a blog that is updated more frequently than the second.

The first thing we do is read in the sitemap.xml file of the blog that is updated most frequently. (You’ll need to change /path/ to correspond to the location of this file. If you’re using Jekyll, this file will be wherever you tell Jekyll to write your generated files, likely .../_site/.) The code stores each line in the content1 array, and then peels from that array information corresponding to the “header” and “footer” of the sitemap.xml file, which are stored in the header and footer arrays, respectively.

The “header” stuff is the XML declaration and the opening urlset tag required for sitemap.xml files. You’ll notice that the “header” in my code actually is 12 lines long; that’s because I’m also including the sitemap’s initial, root-URL entry as part of the “header”.

This root-URL entry is found in both sitemap.xml files, but I only want it to exist once in the merged file, and so I’ve decided to grab it from the first file.

The “footer” contains just the closing urlset tag.

Once the information from the first sitemap.xml file has been retained, we want to delete this file. The reason for this is because we don’t want two sitemap files—namely this first sitemap file and the merged one we create later—to be uploaded to our server when next we do a push/sync of the site. (Granted, web crawlers are going to read only the one you point them to, but why waste the time/bandwidth required to upload the unused file? In my case, it’d be an extra 700KB each time I pushed the site.)

The code next acts similarly on the second sitemap.xml file. (Again, you’ll want to change /path/ to point to this file.) We again remove from the content array (content2 this time) the “header” and “footer” information, though we don’t store these anywhere as we already have them from the first sitemap.xml file. Unlike the first sitemap file, we don’t want to delete this one because it’s likely that we’re going to update our first blog again—before we update the second one—and we want the second sitemap file to be there this next time around, otherwise the merged sitemap won’t contain the second blog at all.

Finally, the code simply concatenates the information we’ve gathered, namely the header, all of the content from the first and second sitemap files, and the footer, and writes this to the new sitemap file that we specify (i.e., the one we’re going to want web crawlers to use). (If using Jekyll, this likely will be .../_site/sitemap.xml).
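Since I asked for more elegant takes, here’s roughly the same slicing logic sketched in Python for comparison. (The file names are hypothetical, and the demo shrinks the 12-line “header” down to 2 lines so it fits on screen.)

```python
import tempfile
from pathlib import Path

def merge_sitemaps(first, second, merged, header_lines=12):
    """Merge two sitemaps, keeping one header (XML declaration, opening
    urlset tag, and root-URL entry) and one closing footer."""
    lines1 = Path(first).read_text().splitlines(keepends=True)
    header = lines1[:header_lines]
    body1 = lines1[header_lines:-1]
    footer = lines1[-1:]
    # Drop the second sitemap's header and footer; keep only its URL entries
    body2 = Path(second).read_text().splitlines(keepends=True)[header_lines:-1]
    Path(merged).write_text("".join(header + body1 + body2 + footer))

# Toy demo: a 2-line "header" stands in for the real 12-line one
d = Path(tempfile.mkdtemp())
(d / "a.xml").write_text("<?xml?>\n<urlset>\n<url>a</url>\n</urlset>\n")
(d / "b.xml").write_text("<?xml?>\n<urlset>\n<url>b</url>\n</urlset>\n")
merge_sitemaps(d / "a.xml", d / "b.xml", d / "merged.xml", header_lines=2)
print((d / "merged.xml").read_text())
```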

When to run?

You’ll want to run this after each build of your more frequently-updated blog. It’ll grab the current first sitemap.xml file (from the first blog), delete it, grab the current second sitemap.xml (from the second blog), and then write the combination to the final sitemap.xml you want to get pushed to the server when next you push/sync your site.

Obviously, if there’s a lot of lag between updating your second blog and then updating your first blog, the sitemap.xml file that exists on the server could be slightly outdated (i.e., it won’t contain the stuff recently added to the second blog), but this really isn’t a big deal, and will resolve itself when next you update the first blog.