Rockwell – Sort of like a private Foursquare meets Fire Eagle

Back in 2008, when I worked for Yahoo!, I had the good fortune of chatting with Tom Coates a few times about the now defunct Fire Eagle location brokerage service. Fire Eagle was my absolute favorite product to come out of Yahoo! during my time there. I’ve always been fascinated by real-time location data and sad that Fire Eagle’s intersection of privacy and ease of use never caught on.

Anyway, fast-forward to last summer: I was bummed that although so much of my life is documented through photos, tweets, and journal entries, my location data was missing. A few products tried to solve this problem post-Fire Eagle. Google Latitude (née Dodgeball) gave you a way to seamlessly track your location and view your history, but they’ve since sunsetted that product. And, besides, it was a little creepy having Google track your every step. (Which I realize they still are via Google Now, but that’s another conversation.) There were also other apps like Rove and Foursquare, but none of them offered quite the feature set I was looking for. In 2010, I even went so far as to reverse engineer Apple’s Find My iPhone API so I could remotely query my phone’s location. That worked great, but querying on a regular basis killed my battery life. With iOS 7’s advances in background location reporting, though, I knew there had to be a better way. I wanted something that would automatically track my location as precisely as possible, respect my phone’s battery life, keep that data private by default, yet still offer the ability to granularly share my location as I saw fit.

So I did what I always seem to do. I built my own app. It’s called Rockwell, and it’s available on GitHub.

Rockwell consists of two components: an iPhone app that monitors your location and lets you annotate where you are using Foursquare’s database of named locations, and a PHP/MySQL web service that the app talks to, which you install on your own server.

As you go about your day, the iPhone app uses iOS’s significant-change location service to ping the web app with your location. The web app stores your location history and lets you go back and search it by date or by location.
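
For the curious, here’s a rough sketch of what the receiving end of one of those pings could look like. The parameter names, shared-secret check, and table layout are simplified stand-ins rather than Rockwell’s actual schema; the real implementation is in the GitHub repo.

```php
<?php
// Illustrative sketch only: the parameter names, shared-secret check, and
// table layout are assumptions, not Rockwell's actual schema.
// Receives a location ping from the phone and appends it to the history.

$pdo = new PDO('mysql:host=localhost;dbname=rockwell', 'dbuser', 'dbpass');

// A shared secret keeps random clients from writing to your history.
if (!isset($_POST['token']) || $_POST['token'] !== 'YOUR_SHARED_SECRET') {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

$lat = (float) $_POST['latitude'];
$lng = (float) $_POST['longitude'];
$ts  = isset($_POST['timestamp']) ? (int) $_POST['timestamp'] : time();

$stmt = $pdo->prepare(
    'INSERT INTO checkins (latitude, longitude, recorded_at) VALUES (?, ?, ?)'
);
$stmt->execute(array($lat, $lng, date('Y-m-d H:i:s', $ts)));

echo json_encode(array('status' => 'ok'));
```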

Further, since the website knows your most recent (current) location, you’re able to share that with others. In my opinion, one of the reasons Fire Eagle failed was that it was (by design) very difficult to get location data out of the service. You had to go through an intricate OAuth dance each time.

With Rockwell, you simply choose the level of location detail you’re comfortable sharing – precise lat/lng coordinates, current city, current state, and so on – and a format – plain text or JSON – and Rockwell generates a short URL you can pass around. You can use that URL to embed your plain-text location on your blog, or you can use the JSON version to do something more API-ish with it. There are no OAuth shenanigans to deal with. You can have as many short geo-links as you want, and each one can be revoked at any time.
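
To make the sharing model concrete, here’s a minimal sketch of how one of those short geo-links might be resolved. The token parameter, table names, and granularity levels are assumptions for illustration; the actual schema lives in the repo.

```php
<?php
// Minimal sketch of resolving a short geo-link. Table names, columns, and
// granularity levels are assumptions for illustration only.

$pdo = new PDO('mysql:host=localhost;dbname=rockwell', 'dbuser', 'dbpass');

// Look up the share token (e.g. /l/abc123) and its chosen precision + format.
$stmt = $pdo->prepare(
    'SELECT level, format FROM share_links WHERE token = ? AND revoked = 0'
);
$stmt->execute(array($_GET['token']));
$link = $stmt->fetch(PDO::FETCH_ASSOC);
if (!$link) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// The most recent check-in is the "current" location.
$loc = $pdo->query(
    'SELECT latitude, longitude, city, state FROM checkins
     ORDER BY recorded_at DESC LIMIT 1'
)->fetch(PDO::FETCH_ASSOC);

// Reduce the location to the level the owner is comfortable sharing.
switch ($link['level']) {
    case 'exact': $out = $loc['latitude'] . ',' . $loc['longitude']; break;
    case 'city':  $out = $loc['city'] . ', ' . $loc['state'];        break;
    default:      $out = $loc['state'];                              break;
}

if ($link['format'] === 'json') {
    header('Content-Type: application/json');
    echo json_encode(array('location' => $out));
} else {
    header('Content-Type: text/plain');
    echo $out;
}
```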

One more thing I’d like to explain. The iPhone app reports two types of check-in data to the web service. The first kind is dumb: just your latitude, longitude, and a timestamp. Many apps like Rove and Foursquare use this data to try to generate automatic location check-ins. Based on your past history, your friends’ locations, and your current location, they try to guess where you might actually be at a given time. Doing this well is the holy grail of location reporting. The problem is that I’ve yet to see any service get it right. In a dense urban area, with hundreds if not thousands of locations per square mile, there’s just no reliable way to figure out where you really are with precision. (At least not without some serious machine learning, which I assume Foursquare is working on.) Rockwell dances around this problem by allowing you to augment your dumb check-ins with annotated ones. Launch the app, tap “Check-in”, and Rockwell pulls a list of nearby locations from Foursquare. Tap on one and you’re done. It’s saved to the web service alongside the rest of your location history.
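
The venue lookup happens inside the app, but the equivalent request is easy to picture. The sketch below calls Foursquare’s v2 venues/search endpoint from PHP purely for illustration; the credentials and coordinates are placeholders.

```php
<?php
// Equivalent of the app's venue lookup, sketched in PHP for illustration.
// The client_id/client_secret values are placeholders for real Foursquare
// API credentials.

$params = http_build_query(array(
    'll'            => '36.1627,-86.7816',   // lat,lng of the dumb check-in
    'client_id'     => 'YOUR_CLIENT_ID',
    'client_secret' => 'YOUR_CLIENT_SECRET',
    'v'             => '20140101',            // API version date
));

$response = json_decode(
    file_get_contents('https://api.foursquare.com/v2/venues/search?' . $params),
    true
);

// List the nearby venues so the user can pick the one they're actually at.
foreach ($response['response']['venues'] as $venue) {
    echo $venue['name'] . ' (' . $venue['id'] . ")\n";
}
```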

Rockwell is working great for me. All the basics are in place, and the majority of the remaining work is coming up with useful ways to display your location history. The code is available on GitHub. I’d love it if you gave it a try, sent feedback, and maybe even opened a pull request or two.

Automatically Compressing Your Amazon S3 Images Using Yahoo!’s Smush.it Service

I’m totally obsessed with web site performance. It’s one of those nerd niches that really appeal to me. I’ve blogged a few times previously on the topic. Two years ago (has it really been that long?) I talked about my experiences rebuilding this site following the best practices of YSlow. A few days later I went into detail about how to host and optimize your static content using Amazon S3 as a content delivery network. Later, I took all the techniques I had learned and automated them with a command line tool called s3up. It’s the easiest way to intelligently store your static content in Amazon’s cloud. It sets all the appropriate headers, gzips your data when possible, and even runs your images through Yahoo!’s Smush.it service.

Today I’m pleased to release another part of my deployment tool chain called Autosmush. Think of it as a reverse s3up. Instead of taking local images, smushing them, and then uploading to Amazon, Autosmush scans your S3 bucket, runs each file through Smush.it, and replaces your images with their compressed versions.

This might sound a little bizarre (useless?) at first, but it has done wonders for my workflow and that of one of my freelance clients. This particular client runs a network of very image-heavy sites. Compressing their images has a huge impact on their page load speed and bandwidth costs. The majority of their content comes from a small army of freelance bloggers who submit images along with their posts via WordPress, which then stores them in S3. It would be great if the writers had the technical know-how to optimize their images beforehand, but that’s not reasonable. So, every night, Autosmush scans all the content in their S3 account, looking for new, un-smushed images, and compresses them.

Autosmush also allowed me to compress the huge backlog of existing images in my Amazon account that I had uploaded prior to using Smush.it.

If you’re interested in giving Autosmush a try, the full source is available on GitHub. You can even run it in a dry-run mode if you’d just like to see a summary of the space you could be saving.

Also, for those of you with giant S3 image libraries, I should point out that Autosmush adds an x-amz-smushed HTTP header to every image it compresses (and to images that can’t be compressed any further). This lets the script scan through your files extremely quickly, only sending new images to Smush.it and skipping ones it has already processed.
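
For a sense of the core step, here’s a stripped-down sketch of the Smush.it call itself. This is not Autosmush’s actual code, and the endpoint and response fields shown are my best recollection of the public web service, so double-check them against the Smush.it docs before relying on them.

```php
<?php
// Stripped-down sketch of the core step, not Autosmush's actual code.
// The endpoint and response fields are a best-effort recollection of the
// public Smush.it web service; verify against the current docs.

/**
 * Ask Smush.it to losslessly compress a publicly reachable image URL.
 * Returns the compressed bytes, or null if no savings were possible.
 */
function smush_image($imageUrl) {
    $api = 'http://www.smushit.com/ysmush.it/ws.php?img=' . urlencode($imageUrl);
    $result = json_decode(file_get_contents($api), true);

    if (empty($result['dest']) || $result['dest_size'] <= 0) {
        return null;    // already as small as it can get (or an error)
    }
    return file_get_contents($result['dest']);
}

// In Autosmush's place you would loop over the bucket listing, skip any key
// whose HEAD response already carries the x-amz-smushed marker, and re-upload
// the returned bytes (plus the marker header) when smush_image() finds savings.
$smaller = smush_image('http://my-bucket.s3.amazonaws.com/photos/header.jpg');
if ($smaller !== null) {
    file_put_contents('header-smushed.jpg', $smaller);
}
```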

Head on over to the GitHub project page and give Autosmush a try. And please do send in your feedback.

PHP Wrapper for Yahoo! GeoPlanet

Earlier this month I wrote a quick PHP wrapper for Yahoo!’s GeoPlanet API. It’s a super useful service for querying geographical information about nearly any place on earth — addresses, landmarks, colloquial locations, etc. Or, as the official description says:

GeoPlanet helps bridge the gap between the real and virtual worlds by providing an open, permanent, and intelligent infrastructure for geo-referencing data on the Internet.

There were already Perl, Python, and Ruby wrappers. I figured I’d throw PHP into the mix.
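
If you’re curious what a lookup involves, the snippet below hits the GeoPlanet REST endpoint directly to show the kind of query the wrapper handles for you. It isn’t the wrapper’s own interface (see the README for that), and YOUR_APP_ID is a placeholder for a Yahoo! Developer Network application ID.

```php
<?php
// Raw call to the GeoPlanet REST API, shown to illustrate the kind of lookup
// the wrapper performs; this is not the wrapper's own interface.
// YOUR_APP_ID is a placeholder YDN application ID.

$query = rawurlencode('Nashville, TN');
$url = "http://where.yahooapis.com/v1/places.q('{$query}')"
     . '?appid=YOUR_APP_ID&format=json';

$data = json_decode(file_get_contents($url), true);

foreach ($data['places']['place'] as $place) {
    // Every place carries a permanent WOEID plus a type and a name.
    echo $place['woeid'] . ': ' . $place['name']
       . ' (' . $place['placeTypeName'] . ")\n";
}
```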

Pete Warden has already written some example code that uses the wrapper to emulate Twitter’s nearby location search.

You can download the source from GitHub. As usual, it’s licensed under the MIT License and free to use in any way you like.

Download All of Your Flickr Photos and Sets

iLife ’09 was released today. And with it came a much improved version of iPhoto with facial recognition, geotagging, and Flickr and Facebook support. With so many new ways to slice and dice my photos, I wanted to start over with a clean slate and get everything organized in iPhoto before re-exporting my library back to Flickr or wherever.

Before I could do that, I had to download all of my photos out of Flickr so I could import them into iPhoto. I wanted something simple that would also retain my photos in their correct sets when downloading. I found one program for Windows, but nothing for Mac. (Did I miss something obvious?)

So here’s a simple PHP script that uses the phpFlickr library to download all of your photos. It creates a folder for each set plus an extra one for photos not in a set. This grabs your original, full-size photos — both public and private.
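
Here’s the shape of the approach in condensed form. The snippet assumes phpFlickr’s convention of mirroring Flickr API methods (photosets_getList, photosets_getPhotos) and an already-authorized token; return shapes and field names may differ slightly between library versions, so treat it as a sketch rather than a drop-in replacement for the full script.

```php
<?php
// Condensed sketch of the approach: one folder per set, originals only.
// Assumes phpFlickr's mirrored method names and an already-authorized token
// with permission to see private photos; field names may vary by version.
require_once 'phpFlickr/phpFlickr.php';

$f = new phpFlickr('YOUR_API_KEY', 'YOUR_API_SECRET');
$f->setToken('YOUR_AUTH_TOKEN');

$sets = $f->photosets_getList('YOUR_USER_NSID');
foreach ($sets['photoset'] as $set) {
    $dir = preg_replace('/[^A-Za-z0-9 _-]/', '', $set['title']);  // folder per set
    if (!is_dir($dir)) {
        mkdir($dir);
    }

    // Request the original-size URL and format for each photo in the set.
    $photos = $f->photosets_getPhotos($set['id'], 'url_o,original_format');
    foreach ($photos['photo'] as $photo) {
        $dest = $dir . '/' . $photo['id'] . '.' . $photo['originalformat'];
        file_put_contents($dest, file_get_contents($photo['url_o']));
    }
}
```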

You can get the script from my Google Code project here. You’ll also need to download and include a copy of the phpFlickr source from here.

Building a Better Website With Yahoo!

It’s been a long time coming, but I finally pushed out a new design for this website last month. I rebuilt it from the ground up using two key tools from the Yahoo! Developer Network:

  • The YUI library, which I used for its CSS foundation
  • YSlow, the Firefox plugin that grades website performance

The new design is really a refresh of the previous look with a focus on readability and speed. I want to take a few minutes and touch on what I learned during this go-round so (hopefully) others might benefit.

Color Scheme

Although I really liked the darker color scheme from before, it was too hard to read. There simply wasn’t enough contrast between the body text and the black background. I tried my best to make it work — I searched around for various articles about text legibility on dark backgrounds, increased the letter spacing and the leading, narrowed the body columns, and tried everything else I learned in the intro graphic design class I took in college. The results were better, but my gut agreed with all the articles I read online, which basically said “don’t do it.” So I compromised and switched to a white body background while leaving the header mostly untouched. I find the new look much more readable — hopefully that will encourage me to begin writing longer posts.

CSS and Semantic Structure

The old site was built piecemeal over a couple of months and, quite frankly, turned into a mess font-wise. I had inconsistent headers, font-weights, and anchor styles depending on which section you were viewing. With the new design, I sat down (as I should have before) and decided explicitly which font family, size, and color to use for each header. I specced out the font sizes using YUI’s percent-based scheme, which helps ensure a consistent look when users adjust the text size. (Go ahead, scale the font size up and down.) An added bonus was that it forced me to think more about the semantic structure of my markup. (If you have Firefox’s Web Developer toolbar installed, try viewing the site with stylesheets turned off.) If there’s one thing I learned working for Sitening, it’s that semantic structure plays a huge part in your SERPs.

Optimizing With YSlow

At OSCON last summer, I attended one of the first talks Steve Souders gave on YSlow — a Firefox plugin that measures website performance. That, plus working for Yahoo!, has kept the techniques suggested by YSlow in the back of my head with every site I build. But this redesign was the first time I committed to scoring as high as I could.

As usual, I coded everything by hand, paying attention to all the typical SEO rules that I learned at Sitening. Once the initial design was complete and I had a working home page, I ran YSlow.

YSlow Before

Ouch. A failing 56 out of 100. What did YSlow suggest I improve? And how did I fix it?

  • Make fewer HTTP requests – My site was including too many files: three CSS stylesheets, four JavaScript files, plus any images on the page. I can’t cut down on the number of images (without resorting to sprites – which are usually more trouble than they’re worth), so I concatenated my CSS and JS into single files. That removed five requests and brought me up to an “A” ranking for that category. (I’m also toying with adding the YUI Compressor into the mix.)
  • Use a content delivery network – At Yahoo! we put all static files on Akamai. Other large websites like Facebook, Google, and MySpace push to their own CDNs, too. But what’s a single developer to do? Use Amazon S3, of course! I put together a quick PHP script which syncs all of my static content (images, CSS, JS) to S3. Throughout the site, I prepend each link with a PHP variable that lets me switch the CDN on or off depending on whether I’m running locally or on my production server (see the sketch after this list). (And, in the event S3 ever goes down or away, I can quickly switch back to serving files off my own domain.)
  • Add expiry headers – Expiry headers tell the browser to cache static content and not attempt to reload it on each page view. I didn’t want to put a far-future header on my PHP files (since they change often), but I did add them to all of the content stored on S3. This is fine for images that should never change, but for my JavaScript and CSS files it means I need to change their filenames whenever I push out a new update so the browser knows to re-download the content. It’s extra work on my part, but it pays off later on.
  • Gzip files – This fix comes in two parts. First, I modified Apache to serve gzipped content if the browser supports it (most do) — not only does this cut down on transfer time, but it also decreases the amount of bandwidth I’m serving. But what about content coming from S3? Amazon doesn’t support gzipping content natively. Instead, in addition to the static files stored there, I also uploaded their gzipped counterparts. Then, using PHP, I change the HTML links to reference the gzipped versions if I detect that the user’s browser can handle them.
  • Configure ETags – An ETag is a hash provided by the web server that the browser can use to determine whether a file has changed before re-downloading it. Amazon S3 automatically generates ETags for every file — it’s just a free benefit of using S3 as my CDN.
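
Here’s the sketch referenced above: a simplified version of the link-rewriting idea behind the CDN, expiry, and gzip items. It is not my actual deploy script; the helper name, bucket URL, and version string are made up for this example, and the versioned path is just one way to handle the filename change the expiry item describes.

```php
<?php
// Simplified illustration of the link rewriting described above; not the
// actual deploy script. The helper name, bucket URL, and version string are
// placeholders for this example.

$useCdn = true;   // flip to false to serve everything from this domain

function asset_url($path) {
    global $useCdn;

    $base    = $useCdn ? 'http://static.example-bucket.s3.amazonaws.com' : '';
    $version = '20080215';   // bumped on deploy so far-future-cached files refresh

    // Point CSS/JS at the pre-uploaded .gz twin when the browser accepts gzip.
    // (The .gz copies are stored on S3 with a Content-Encoding: gzip header.)
    $acceptsGzip = isset($_SERVER['HTTP_ACCEPT_ENCODING'])
        && strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false;
    if ($useCdn && $acceptsGzip && preg_match('/\.(css|js)$/', $path)) {
        $path .= '.gz';
    }

    return $base . '/' . $version . '/' . $path;
}

// In a template:
echo '<link rel="stylesheet" href="' . asset_url('screen.css') . '">' . "\n";
echo '<script src="' . asset_url('site.js') . '"></script>' . "\n";
```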

So, all of the changes above took about three hours to implement. Most of that time was spent writing my S3 deploy script and figuring out how to make Amazon serve gzipped content. Was it worth it? See for yourself.

YSlow After

Wow. Three short hours of work and I jumped to a near perfect 96 out of 100. The only remaining penalty is from not serving an expires header on my Mint JavaScript.

Do these optimization techniques make a difference? I think so. Visually, I can tell there’s a huge improvement in page rendering time in both Firefox and Safari. (IE accounts for 6% of my traffic, so I don’t bother testing there any longer.) More amazing, perhaps, is the site’s performance on the iPhone. The page doesn’t just load — it appears.

I’ve made a bunch of vague references to the S3 deploy script I’m using and how to set up gzip on Amazon. In the interest of space, I’ve left out the specifics. If you’re interested, email me with any questions and I’ll be happy to help.

Microsoft and Yahoo! – My Take

A favorite theory among tech pundits is that the company that will eventually dethrone Google doesn’t exist yet. Rather, it’ll be the brainchild of former Googlers who quit when their current job becomes boring or their stock options vest. Imagine a mass exodus – hundreds of very rich, very smart engineers with nothing but free time and creativity on their hands.

I like that idea. It has a certain romanticism to it. But what if the next great tech giant doesn’t come from a Googler – but a Yahoo?

If (when) Microsoft buys Yahoo!, how many developers will stick around for what is sure to be a righteous clash of cultures? In his letter to Yahoo!, Steve Ballmer writes that Microsoft intends “to offer significant retention packages to [Yahoo!’s] engineers.” Forty-four billion dollars is certainly enough to buy the company. But does Microsoft have enough money to buy the loyalty of Yahoo!’s engineers, many of whom are working at Yahoo! precisely because they don’t want to work for a company like Microsoft?

Could this be the deal that launches a thousand Silicon Valley startups? There’s no shortage of creativity brewing at Yahoo!. Attend any one of their internal Hack Days and you’ll find a wealth of amazing ideas and side-projects just waiting for the right opportunity. How many of those could turn into successful businesses if they were taken outside the company? I’d wager quite a few. And how many of those could turn into the Next Great Thing?

There’s bound to be at least one.

I have no doubt Steve Ballmer and the rest of Microsoft have examined every conceivable financial outcome of buying Yahoo!. I’m sure they’ve given great thought to how the two companies will combine their infrastructures. I’m sure there has been much talk of “synergy.” But as it’s become increasingly clear that Microsoft just doesn’t get Web 2.0 and the disruptive power a social network can wield, I wonder if they understand the implications of a thousand new startups — each working in parallel to finish the job that Google started.

Navigate Yahoo! Search Results Using Only Your Keyboard

Google has an experimental search page where you can test drive new search result layouts. My favorite is the Keyboard Shortcuts option. This lets you navigate and view search results using only the keyboard – no mouse required! It’s a huge benefit for Quicksilver fans.

Now that Yahoo! is my default search engine, I desperately didn’t want to give up the keyboard shortcuts feature. Thanks to jQuery and Greasemonkey I don’t have to.

Make sure you’re using Firefox and have the Greasemonkey plugin loaded. Then, click here to install.

I’ve chosen the same shortcut keys as Google. J and K move you up and down through the search results. Press return to view the selected link. You can even browse through multiple pages when you hit the end of the results. Pressing / will jump you to the search box if you’d like to run a new search. Hit escape to move back to the results.

You can view the source on Google Code.