Why I Took the Job

Almost four years ago today, I moved across the country and accepted a job at Yahoo!. But one of the main reasons I took the position happened six years before that.

In the fall of 2001, I was a sophomore at MTSU. Each morning I’d roll out of bed and open my Yahoo! home page. It was the first step in my morning routine. I’d check the news, check my email, then get ready for class.

On Tuesday the 11th I woke up at 7:45. The first thing I saw on Yahoo! was a headline that a plane had crashed into one of the towers. I clicked through to the article, but it was such breaking news the whole story was only three sentences long. It had just happened.

I woke up my roommate Chris — a pilot himself — and turned on CNN just in time for both of us to watch the second plane hit live. Neither one of us spoke. We just sat there in silence watching the morning unfold.

I haven’t spoken to Chris in years, but if he’s anything like me, that image turned into one of the defining moments on our way to becoming adults. And looking back, we both would have missed it if not for the news being reported on Yahoo! that morning.

And so, six years later in September 2007, I was sitting in Starbucks with my Yahoo! offer letter in hand, trying to decide. I remember thinking how much Yahoo! had indirectly changed my life that day, and in a thousand other small ways since then. And now I was given the opportunity to work for them and possibly impact millions of other people, too.

That’s why I took the job.

So tech pundits can write gleefully about the fall of Yahoo! — the many missteps they took during their short corporate history. But fuck ’em. I’m proud I got to work there, and with so many incredible people, for three full years. And I’m sad to see Yahoo! put themselves up for sale. There are few companies around with such reach — few that can claim to have changed the lives of so many people with nothing but a few bits over the wire.

Switching From WordPress to Jekyll

Last week I finally took the plunge and completely switched this website from WordPress, which I had been using for over four years, to Jekyll. There are tons of articles online about switching, so I’m not going to attempt to write any sort of exhaustive guide about the process. These are just my own first impressions — one week in — along with a few lessons learned and a couple scripts I wrote to automate the process.

Why switch?

First off, let me be clear that I didn’t switch because of any failing on WordPress’s part. I’ve been a happy WP user for years, and I’d still recommend it to other web writers with no reservations. However, because of its dynamic nature, WordPress is susceptible to two problems that I got tired of dealing with:

  1. WordPress can be slow. Because WordPress renders your site from a database on each page view, it can quickly grind to a halt during a burst of traffic. And before you email me, YES, I’m well aware of all the caching best practices and plugins you can use to speed things up. But short of having WordPress output the entire site as static HTML files after each change, you’re always going to run into some initial PHP overhead. Even with WP-Super-Cache installed and tuned, this site became unresponsive the last two times I landed on Reddit and Hacker News. That’s unacceptable.
  2. Security updates are a bitch. That’s especially true for a self-hosted install of WordPress. Every security point release is an annoying fifteen minutes out of my day where I have to download the latest release, upload it to my server, test for any regression issues, commit the changes into Git, etc. I’ve done this a thousand times before and, frankly, I’ve got better things to do with my time. I don’t blame WordPress for the security fixes. In fact, I applaud them for reacting so quickly. As the most popular blogging platform, they’re a huge target, and they do a great job managing that risk. I just don’t want to deal with it anymore. With static HTML files there is no attack vector to worry about.

  3. Let’s be honest. I’m a geek, and the thought of keeping my site organized as a few folders of text files in a git repo is awesome.

Switching

I had poked around and experimented with Jekyll a few times before finally deciding to switch, so I was already familiar with how the system works. (The docs are available if you want to know more.) As a bonus, I’ve been writing my blog posts in Markdown for years, so there were really only two steps between me and a fully static site:

  1. Pull all my blog posts and pages out of WordPress’s database and save them as Jekyll-formatted text files.
  2. Convert my existing WordPress theme into a Jekyll layout.

For those who are wondering, the whole process took about three hours on a Saturday night. Not too shabby.

Exporting Out of WordPress

The first big step towards migrating to Jekyll is getting all of your content out of WordPress into a format Jekyll can use. Buried deep inside the Jekyll Ruby gem is an importer script for most of the major blogging platforms, including WordPress. Unfortunately for me, I don’t know Ruby, and I’m not familiar with the gem system. I fooled around with their (seemingly out-of-date) instructions, but decided it would be faster and more foolproof to just write my own export script. Many of my pages and blog posts have custom post fields attached to them for setting things like page titles and URL slugs. Writing my own script ensured all those settings would come through during the export.

As for the script itself, there’s not much to it. It pulls all the content from your WordPress database and saves each post and page out as a Jekyll-formatted text file.
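The basic shape of such a script looks something like this. (A bare-bones sketch, not my production code: the credentials are placeholders, and it assumes the default wp_ table prefix and an existing _posts directory.)

<?php
// Bare-bones sketch: export published WordPress posts as Jekyll text files.
$db = new mysqli('localhost', 'user', 'password', 'wordpress');

$posts = $db->query("SELECT ID, post_title, post_name, post_date, post_content
                     FROM wp_posts
                     WHERE post_status = 'publish' AND post_type = 'post'");

while ($post = $posts->fetch_assoc()) {
    // Custom post fields (page titles, URL slugs, etc.) live in wp_postmeta.
    $meta = array();
    $rows = $db->query("SELECT meta_key, meta_value FROM wp_postmeta
                        WHERE post_id = " . (int) $post['ID']);
    while ($row = $rows->fetch_assoc()) {
        $meta[$row['meta_key']] = $row['meta_value'];
    }

    // Jekyll expects _posts/YYYY-MM-DD-slug.markdown with YAML front matter.
    $date     = date('Y-m-d', strtotime($post['post_date']));
    $filename = "_posts/{$date}-{$post['post_name']}.markdown";

    $front  = "---\n";
    $front .= "layout: post\n";
    $front .= 'title: "' . addslashes($post['post_title']) . "\"\n";
    foreach ($meta as $key => $value) {
        // A real export would skip WordPress's internal keys (those starting with "_").
        $front .= $key . ': "' . addslashes($value) . "\"\n";
    }
    $front .= "---\n\n";

    file_put_contents($filename, $front . $post['post_content']);
}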

Building the Layout

Creating the Jekyll layouts was surprisingly simple. I basically just took my existing HTML as rendered by WordPress, saved it onto my desktop, and cut it up into a few template and include files. The layouts and includes are available to look through.

The flow of the templates is fairly simple. Each Jekyll-controlled page inherits from the _layouts/default.html file.

{% include header.html %}
{{ content }}
{% include footer.html %}

The header.html and footer.html includes are just raw HTML that build out the bulk of the site. One thing to note: inside each, I’m using a bunch of Jekyll variables that are echoed out during the Liquid processing. Each page’s title and meta description are pulled from front matter defined in the corresponding Jekyll file. I’m also prefixing all of my static content URLs (images, stylesheets, JavaScript, etc) with a site.cdn variable which is defined globally. Currently, this points to my CDN domain on MaxCDN, but should they ever go down (or should I switch away), I only have to change one line and re-run Jekyll to begin serving content from an alternate domain.
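To make that concrete, the moving pieces look roughly like this (placeholder values throughout). The CDN host is defined once in _config.yml:

cdn: http://cdn.example.com

Each page’s Jekyll file starts with front matter:

---
layout: default
title: Some Page Title
description: A short description for the meta tag.
---

And header.html echoes those variables during the Liquid pass:

<title>{{ page.title }}</title>
<meta name="description" content="{{ page.description }}">
<link rel="stylesheet" href="{{ site.cdn }}/css/screen.css">
<script src="{{ site.cdn }}/js/site.js"></script>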

Any Concessions?

Yep, but only one. While I’m sure with enough hacking around I could have totally replicated my WordPress site’s structure, I didn’t want to spend a lot of time rebuilding a bunch of archive pages that didn’t matter other than (perhaps) for search engine ranking. So my old monthly archive pages, as well as the indexed blog pages, went the way of the dodo. To make up for the blog index, and to ensure my old content stays available in Google, I set up a simple index listing all of my posts, ordered by date.
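The index itself is just a short Liquid loop, something along these lines (site.posts already comes ordered newest-first in Jekyll):

---
layout: default
title: Archive
---
<ul>
{% for post in site.posts %}
  <li><a href="{{ post.url }}">{{ post.title }}</a> ({{ post.date | date: "%B %e, %Y" }})</li>
{% endfor %}
</ul>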

Odds and Ends

Migrating to Jekyll gave me an opportunity to go through my four-year-old Amazon S3 bucket where I store all of my static content. A lot of cruft and abandoned files had built up over the years, so this was a good chance to clean it out. With a few thousand files to go through, I certainly wasn’t going to do it by hand. So here’s a quick script that scans a local copy of the bucket and checks each file to see if it’s referenced anywhere on my site. If not found, it deletes the unused file. It was incredibly easy to do since all of my content is now plain text. (Yay, Jekyll!)
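The gist of it, sketched out in shell (my actual script differs, and both paths here are placeholders):

#!/bin/sh
# For every file in a local mirror of the S3 bucket, grep the site's
# source for a reference to its filename; delete anything unreferenced.
cd ~/s3-bucket-copy || exit 1
find . -type f | while read -r f; do
    name=$(basename "$f")
    if ! grep -qr "$name" ~/Sites/blog; then
        echo "deleting unused file: $f"
        rm "$f"
    fi
done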

For Any Mac Developers Out There

I’m putting this entire site’s content online on GitHub. Not because that’s where it’s hosted or deployed from, but simply so other people can poke around and hopefully find some useful snippet. Along with all the Jekyll stuff, you’ll also find quite a few PHP scripts buried in my Mac app folders. These are all the integration scripts that connect this site to Shine – the indie Mac dashboard I use to run my little software business. These scripts do things like process upgrades, serve downloads, display changelogs, etc. It’s all there. Just go exploring and you’re bound to find them. And if you have any questions about how/why I did something, feel free to ask.

Automatically Reposition the iOS Simulator on Screen

If you work with two monitors of different sizes, Xcode has an annoying bug where it launches the iOS Simulator partially off screen — forcing you to manually drag it into position using the mouse. It’s not that bad the first time, but after a full eight-hour working day with hundreds of launches, it gets very tedious.

Luckily, we can solve this with Xcode 4’s new “Behavior” settings and a little AppleScript.

Open up your favorite text editor and create the following script:

#!/bin/sh
osascript -e 'tell app "System Events" to set position of window 1 of process "iOS Simulator" to {-864, 134}'

Where {-864, 134} are the {X, Y} coordinates you’d like the simulator positioned at.

Save the script somewhere appropriate and select it as a new “Run” command in Xcode’s “Run Starts” behavior.

Xcode Run Behavior

Creating a Universal Binary With Xcode 3.2.6

Last week I released a minor update to VirtualHostX. Shortly thereafter, my inbox was flooded with reports of an “unsupported architecture” error on launch. After a quick lipo test I verified that somehow I had managed to build and ship the app as Intel only — no PowerPC support.

I went through my git revision history and was able to track down the error. From what I can tell, the Xcode 3.2.6 update removes the default universal binary option. That’s fine for new projects, but I was completely taken by surprise to see it modify the build settings of an existing project.

Regardless, now that the (once famous) universal binary checkbox is gone, here’s how to add PowerPC support back.

In your target’s build settings, change “Architectures” to “Other” and specify “ppc i386 x86_64”.

Universal Binary Settings
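If you manage your settings in an xcconfig file instead, the equivalent entry uses the standard ARCHS build setting. A sketch; double-check against your SDK and project:

// Build all three architectures for a universal binary.
ARCHS = ppc i386 x86_64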

Note: It’s entirely possible this whole episode was my fuck-up and not Xcode’s, but there are a bunch of similar reports online. So who knows? It certainly wasn’t mentioned in the release notes.

A Big Mouse Cursor

I spend about ten hours a day staring at two 27-inch Apple cinema displays. It makes coding great. But, with that much screen real estate, I keep losing my mouse cursor. I’ll have to jiggle it around for half a minute trying to find where it’s disappeared to.

No more!

Yesterday I discovered OS X has an option in the Universal Access preference pane that lets you adjust the size of the cursor from normal all the way up to holy-gigantic. I have mine set to a comfortable 33% — which is just big enough to keep from getting lost, but not so large that I can’t tell where I’m clicking.

I haven’t misplaced my cursor since.

Experimenting with Piracy – An Indie Mac Developer’s Perspective

For the last twelve months I’ve been keeping detailed records regarding the number of users pirating my Mac apps and toying with different ways of converting those users into paying customers. I’m not foolish enough to ever think I could actually eliminate the piracy — especially since I believe there are a few legitimate reasons for pirating software — but I was genuinely curious as to what the actual numbers were, the motivations behind doing so, and if there were any way I could make a small dent in those numbers.

A quick summary for those who don’t want to read the full post (tl;dr)

Software developers are foolish if they think they can prevent piracy. The only goals worth pursuing are:

  1. Make it incredibly easy for honest customers to purchase your software.
  2. Find simple, non-intrusive ways of encouraging pirates to become paying customers.
  3. Retire to a sandy location with the tens of hundreds of dollars you’re sure to make.

A Bit of History and Harsh Reality

VirtualHostX 2.0 was released on July 19, 2009. Fake serial numbers first appeared (to my knowledge) on August 3, 2009. That’s fifteen days. Fifteen days for someone to take the time to engineer a fake serial number for a relatively unknown, niche app.

Nottingham was released on November 28, 2009. It took eight days for the first fake serial number to appear.

Admittedly, the serial number scheme I used was incredibly simple. So it was no surprise that it was easy to crack. But seriously? Eight days? I doubt it took whoever did it more than an hour of actual work. I was just flabbergasted they cared enough to even take the time. A little honored, truth be told.

So I did what any software developer would do. With each new software update I released, I made sure to ban the infringing serial numbers. Now, I fully realized the futility of what I was doing, but still — I thought that if I at least made it inconvenient for pirating users to seek out a new serial number each time, maybe I’d win a few of them over.

Nope.

Rather than posting new serials, CORE (that’s one of the “teams” that release pirated software and serial numbers) simply put out “Click On Tyler MultiGen” — an actual app you could download to your Mac and use to create your own working serial numbers for all of my products. Here’s a screenshot:

(It even plays music.)

So, with that out in the open (you can download it here), there was no point in banning serial numbers any longer.

Instead, I turned my attention towards measuring the extent of the piracy. I wanted to establish a baseline of how many users were stealing my app, so I could then tell if any of my attempts to counteract it worked.

I won’t go into the technical details of how I measured the number of pirated apps in use, but after a two-month period I can say with high confidence that 83% of my users were running one of my apps with a fake serial number. Let that sink in.

Eighty-three percent.

Fuck.

Experiment #1 – The Guilt Trip

My first attempt at converting pirates was to appeal to their sense of right and wrong. (I’ll pause while you finish laughing.) I released an update that popped up this error message when it detected you were using a fake serial number:

Two things worth noticing:

  1. I looked up the user’s first name (if available) from Address Book and actually addressed the message to them. (See the sketch after this list.)
  2. The only way to dismiss the message was the guilt-trip-tastic “Sorry, Tyler!” button.
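For the curious, grabbing that first name takes only a couple of lines against the Mac’s Address Book framework. A minimal sketch, not the exact shipping code:

#import <AddressBook/AddressBook.h>

// Returns the first name from the user's own "me" card,
// or nil if they never filled one out.
NSString *FirstNameForGuiltTrip(void) {
    ABPerson *me = [[ABAddressBook sharedAddressBook] me];
    return [me valueForProperty:kABFirstNameProperty];
}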

Sure, those things were cheesy — the folks on the piracy websites actually mocked me for it — but I thought adding a little humanity (and humor) might make a difference. And it did.

Over the next three months I saw a 4% decrease in the number of users pirating my apps. Was that because of my silly message? Possibly, but I can’t be certain. Nonetheless, I thought it was a strategy worth continuing.

Experiment #2 – The Guilt Trip and a Carrot

At the beginning of this year I decided to be a bit more proactive and actually offer users a reason to pay other than simply “doing the right thing”. So, I began showing this error message instead:

And I was serious. I presented the pirates with a choice. A one-time, limited offer that was only good right there and then. They could either click the “No thanks, I’d rather just keep pirating this software” button, or they could be taken directly to my store’s checkout page along with a hefty discount.

(I was wary of doing this because I didn’t want to offend my real, paying customers, who had been kind enough to part with their money at full price. I realize it’s not fair that honest users might pay more than the pirates. I hope they’ll understand that I was simply trying to convert, and make at least a little money from, users who weren’t paying to begin with. Hopefully the full price you paid was worth it at the time and still is today.)

Did it work?

I was very careful to measure the number of times the discount dialog was displayed and the number of discounted sales that came through. The result? 11% of users shown the dialog purchased the app. I suspect the true rate might be a little higher, as I’m sure some users saw the dialog more than once.

Despite 11% being a small number compared to the overall 83% piracy rate, I was thrilled. Most online advertisers would kill for an 11% conversion rate. I considered the experiment a success and let it continue on for a number of months until the numbers dwindled down to 5%, which brings us to today.

The Big Switch

Last month (April 2011) I released Nottingham 2.0 — and with it, a new serial number scheme that requires a one-time online activation. I’ve always been adamantly opposed to registration checks like this, as both a developer and a user. But now that everyone is (almost) always connected, these checks don’t bother me as much as a user any longer. Especially if they’re unobtrusive and one-time. Also, after seeing the raw numbers, the developer in me is now more concerned with buying food than with lofty ideals.

I hope I’m not stirring up a hornet’s nest by saying this, but so far sales of Nottingham 2.0 are going well and piracy is virtually non-existent. Is that bound to change? Of course. I fully expect my scheme to be cracked at some point. But now that activation is involved, I have a much better view of when and how often it’s happening. Another benefit is that it’s no longer sufficient to pass around a serial number or even a key generator. Pirates will now need to patch the actual application binary (totally doable) and distribute that instead.

With those promising results in mind, I made the decision to convert my existing VirtualHostX 2.0 users to the new serial scheme as well. My goal — as always — wasn’t to stop the piracy but at least make a small dent in it.

My foremost concern was to make things simple for my existing customers. Under no circumstances did I want to annoy them (or piss them off). I couldn’t just invalidate all of their old serial numbers and send everyone an email with their new one. That would surely prevent someone from using the app right when they needed it the most. I had to make sure the switch was as frictionless as possible.

So, I toyed with different upgrade processes for a few weeks and finally settled on a system that I deployed with the 2.7 update a few days ago. Here’s how it works.

The first time the user launches VirtualHostX after getting the automatic upgrade to 2.7, they’re shown this window:

I explained the situation as plainly as possible, was upfront that the inconvenience fell on them, not me, and offered the requisite apology. I also made it simple — one button to click — no other steps.

So, click the button, wait about five seconds and:

The app automatically connects to my server, validates their old serial number, generates a new one, and registers the app without any other user intervention. It’s all automatic.
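For any developers wondering about the mechanics, the client side can be a single HTTPS request. This sketch is purely illustrative: the endpoint and parameter name are invented, and only the overall flow (send the old serial, receive and store a new one) matches what I described.

#import <Foundation/Foundation.h>

// Hypothetical sketch -- the URL and parameter name are made up.
NSString *ExchangeSerial(NSString *oldSerial) {
    NSString *address = [NSString stringWithFormat:
        @"https://example.com/upgrade.php?old_serial=%@", oldSerial];
    NSError *error = nil;
    NSString *newSerial = [NSString stringWithContentsOfURL:[NSURL URLWithString:address]
                                                   encoding:NSUTF8StringEncoding
                                                      error:&error];
    // The caller saves the returned serial and re-registers the app.
    return error ? nil : newSerial;
}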

So far the switch has gone well. I’ve seen about 30% of my registered users go through the update, and I’ve received exactly two emails — not complaints, just confusion about what was going on. One customer even wrote in to say:

That was so painless. Great job on the messaging and single-click process. Very well done.

So that makes me feel good. Even though I wish I could have avoided the process, I’m glad it appears to be going smoothly. If any other developers ever find themselves in a similar situation, I can highly recommend this approach.

So That’s It

Many of the points I’ve written about are hardly new or exciting to anyone who’s written software or pirated it. So I’m not posting this as some sort of revelatory treatise. Rather, I just wanted to document the experiences I’ve gone through as a one-man software company who’s trying to earn a little money while keeping his users happy.

In the end, the most important thing you can do is be respectful of your users’ time by writing software they’ll love so much they can’t wait to pay for. Once you’ve got that down, then you can try and encourage the rest to pay up 🙂

Syncing Your Adium Chat Logs into Dropbox (again)

Two years ago I posted some quick instructions on how I keep my Adium chat logs synced between Macs using Dropbox. I’ve tweaked my setup slightly since then. Here’s my new approach.

First, if you already have Adium on multiple machines, copy all your logs over to a single Mac. You can merge the folders easily with an app like Changes. Once you’ve got a canonical folder of all your combined chat logs, place it somewhere in your Dropbox. Then…

# escape the spaces rather than quoting the whole path, so the shell still expands ~
cd ~/Library/Application\ Support/Adium\ 2.0/Users/Default/
mv Logs ~/Desktop/LogsBackup
ln -s ~/Dropbox/Path/To/Adium/Logs/ Logs

Repeat the symlink steps on each Mac you want to sync.

This new approach keeps your files physically in Dropbox and a symlink in your Adium support folder pointing to their real location.

And that’s it. Enjoy.

(Really though, this would be a lot easier if Adium had an option to choose where your chat logs are stored.)

Unsupported Architecture When Submitting to Mac App Store

For any Mac developers out there who are seeing the following rejection notice when submitting to the Mac App Store:

Unsupported Architecture – Application executables may support either or both of the Intel architectures

Make sure you verify that any included frameworks are Intel only. You can do this using the lipo command:

lipo -info /path/to/SomeFramework.framework/Versions/A/SomeFramework

If the output lists anything other than i386 or x86_64, you’ll get rejected.
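To check every bundled framework in one go, a quick loop does the trick. (A sketch: MyApp.app is a placeholder, and it assumes the standard Contents/Frameworks layout with a Versions/A binary.)

#!/bin/sh
# Print the architectures of every framework bundled in the app.
for fw in MyApp.app/Contents/Frameworks/*.framework; do
    name=$(basename "$fw" .framework)
    echo "== $name =="
    lipo -info "$fw/Versions/A/$name"
done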

This was particularly painful for me because it appears this check is only run when submitting a new version of your app — PPC framework binaries don’t cause a rejection during the original app submission process. I thought I was going crazy since I had made no project changes since the first submission and running lipo on the app binary didn’t return anything unusual. Hopefully this will save someone else the hour of head scratching I just went through.

Search Mac and iOS Documentation From Chrome’s Omnibox

Earlier this week, the Chromium Blog announced an official extension API for Chrome’s omnibox (search bar). I’ve always loved keyboard driven interfaces — the command line, Quicksilver, Alfred, etc — so I immediately started thinking about what I could build with it.

My first idea was a documentation browser for Apple’s Mac and iOS libraries. I’m always googling for class and framework names as a way to quickly jump to Apple’s documentation site. The problem is that many times the developer.apple.com link is buried down the page, which means I waste time scanning for the link rather than just hitting return for the first search result.

This extension solves that problem by allowing you to type “ios” or “mac” followed by a keyword. It then presents an auto-completed dropdown of matching search results which take you directly to the relevant page on Apple’s documentation site. Here’s a screenshot after typing “ios UIImage”:

Sample iOS Chrome Search

For those among you wondering how I’m searching the Apple docs, I caught a lucky break. Apple’s Mac and iOS reference site includes a small search box that autocompletes your queries. I tried sniffing the network traffic to see what web service they were using for suggestions (hoping to hook into that myself) but found they were showing search results without sending any data over the wire. A little more digging and I realized they were pre-fetching a dictionary of results as a giant JSON file on page load. With that data — and a sample Chrome extension courtesy of Google — it took no time at all to connect all the pieces and get the extension working.
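The omnibox wiring itself is only a few lines. Here’s a bare-bones sketch (not the extension’s actual code: searchIndex stands in for the dictionary loaded from Apple’s JSON file, and the “ios” keyword lives in manifest.json):

// Suggest matching class names as the user types after the "ios" keyword.
var searchIndex = {
  'UIImage': 'http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIImage_Class/'
  // ...thousands more entries, loaded from Apple's prefetched JSON...
};

chrome.omnibox.onInputChanged.addListener(function(text, suggest) {
  var matches = [];
  for (var name in searchIndex) {
    if (name.toLowerCase().indexOf(text.toLowerCase()) === 0) {
      matches.push({ content: name, description: 'Apple docs: ' + name });
    }
  }
  suggest(matches);
});

// Jump straight to the documentation page when the user hits return.
chrome.omnibox.onInputEntered.addListener(function(text) {
  if (searchIndex[text]) {
    chrome.tabs.update({ url: searchIndex[text] });
  }
});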

If you’d like to install the extension, just click here for Mac and here for iOS. You’re also welcome to download and improve the code yourself from the GitHub project page.

Sosumi for Mac – Find Your iPhone From Your Desktop

Every holiday, between the food and family, I always seem to find time for a quick project. Last year I built the first version of Nottingham over the Thanksgiving break. This year was no exception, and I found myself putting the final touches on Sosumi for Mac after an eighteen hour coding streak this weekend.

Sosumi for Mac builds on the original Sosumi project I started last summer — a PHP script that returned the location of your iPhone by scraping MobileMe’s website and that eventually evolved to use Apple’s “official” API once that was released.

Last week, Apple pushed a rather large update to the Find My iPhone service and made it free to all users. Along with that came some API changes, which broke Sosumi. With help from Andy Blyler and Michael Greb, we managed to get it working again. I took the opportunity to go all out and write a native Cocoa implementation of Sosumi as well. And, with that done, I went one step further and built a full-fledged desktop app for tracking all of your iDevices.

Now that it’s complete, it’s much easier to simply open up Sosumi for Mac, rather than having to log in to Apple’s website or iPhone client each time. The desktop app also opens up some fun possibilities. A future version could notify you when your spouse leaves work in the afternoon so you know when to begin preparing dinner. Or alert you if your child strays from their normal route on the way home from school. Or, since Sosumi provides your device’s battery level, it could even alert you if your phone needs to be charged soon.

Admittedly, this kind of always-on location tracking can certainly be creepy. But that’s almost always the case with these types of applications. Whether Fire Eagle, Foursquare, or Google Latitude — it’s always a matter of striking a reasonable balance between convenience and privacy. I trust you’ll use Sosumi for good rather than evil.

Download Sosumi, read more about it, or grab the source on GitHub and build something even cooler.