Missing Rdio and Making the Best of Apple Music with Shortcuts

Man, I miss Rdio. I mean, I really miss it. I loved that service.

When I was a teenager, I’d spend hours on the weekend getting lost in new and used music stores (CDs), just digging through stacks of beautiful album artwork and unfamiliar band names. I’d talk with other customers and ask the clerk to let me sample a few tracks when something caught my eye. The joy was in the discovery as much as the actual purchase and listening that came later.

Rdio was the first streaming music service I used. It was like walking into an infinitely large music store. And it was all free! (Well, $10/month.) Their UI was wonderful. Websites today are walled gardens designed to keep you on the property for as long as possible. But Rdio, like the web of the late ’90s and early 2000s, was overflowing with links leading through a maze designed to get lost in. Each album page – in addition to the artwork and song listing – had detailed info about the band, their other music, related artists, genres, etc. (The only similar mainstream experience I can think of today is falling down a Wikipedia rabbit hole.) And the majority of the pages had an in-depth critic’s take on the music in addition to listener-submitted reviews that were generally well written and free of the awfulness we see in YouTube’s and Facebook’s comment sections today.

What I’m trying to say is Rdio came very close to recreating the record store experience in digital form. Apple tried and failed with Ping and has since made additional social attempts inside Apple Music. I can’t really speak to what Spotify is like now. I was a subscriber for a few months after Rdio died, but it never really stuck for me – and Apple Music’s tight integration with Mac and iOS has kept me tied to their service instead.

So using Apple Music the last few years has been fine, I guess. I can play what’s in my own collection (usually) and search their streaming library, but I find it extremely difficult to organically discover new music. It’s sort of possible on the desktop with iTunes, but the iOS app (and I took a look just now to double-check) only shows “other albums by this artist.” The “For You” tab makes an attempt by showing other genres similar to what you already listen to, but I find their algorithmic recommendations lacking. And, again, if you do tap on one of the suggested albums, that’s about as far as you can go. You can’t explore beyond that artist. And don’t even get me started on the “Browse” section or whatever the hell Beats 1 is doing. That’s a dumpster fire of shitty editorialized content that I can only assume is promoted by the record labels for the masses. (Yes, I might just be snobby and elitist about my music, but I really do have a lot of pop music in my collection that I enjoy. I just find most of Apple’s selections…shallow.)

Anyway, like I said, it’s fine. Not anything special, but fine.

But over the last few months I’ve made a conscious effort to start listening to more music again. I used to always have something playing in my bedroom, dorm, various apartments, and later houses. But I think once my kids were born, their needs and noise took over and music fell by the wayside. Now I’m using the wonderful Anesidora app to keep Pandora shuffling through songs in my office where I sit all day. I need to stay focused on my work, and having to think about and choose something to play takes me out of the zone. I like that I can just tell Pandora to play something it thinks I’ll enjoy and it will take care of the rest. It’s mindless and exactly what I want.

But Pandora typically only plays music I’ve already listened to and given a thumbs-up. It rarely surfaces new music. That’s what I still use Apple Music for. And I typically do that when I’m in the car.

Using your phone for anything while driving is stupid. So if I want to queue up some music, I have to do it when I’m still in the driveway or if I think I have time and it’s safe while stopped at a traffic light. But I need to be fast about it. And that’s where Marvis, Launch Center Pro, and Apple’s Shortcuts app come into play.

I discovered Marvis last month from Ryan Christoffel at MacStories. It’s a highly customizable client for Apple Music: all of your songs, playlists, and the entire Apple Music catalog in a gorgeous, functional UI that you can design around your own needs. Here’s what my setup looks like:

Marvis Pro Screenshot

I’ve got the Home screen organized so that I can tap and play my most listened to playlists and albums without scrolling or having to dig through Music.app’s tabs and navigation stacks.

Specifically, at the top I can start any music I’ve recently added to my library. I’ll often go on a music-adding binge and add a ton of stuff at once, then finally listen to it days or weeks later. This section collects all those albums in one spot so I don’t forget to try something new that looked interesting to me.

Beneath that are three of Apple Music’s main auto-generated playlists. Again, I have one-tap access to my Favorites when I want to hear something familiar and to the New Music that Apple thinks I might like (which is often hit or miss).

Further down the screen is “New For You”, which is a stream of new releases from artists already in your library. I’ve wanted this feature in iTunes for years, and I’m thrilled Apple Music delivered.

Of note: I’ve used the display settings in Marvis to pack as much music into as small a space as possible. This puts as many tap targets within “thumb reach” as possible and minimizes any scrolling I need to do. Very important when that red light could turn green at any moment.

Next up, if I want even faster access to my most common playlists, I’ve created three shortcuts in Shortcuts.app to play Apple’s “New Music for You”, “Your Favorites”, and their top Alternative songs and added them to Launch Center Pro’s excellent Today widget. From my phone’s home screen, I can swipe left and tap to start playing.

Launch Center Pro Screenshot

And going a bit further with Shortcuts, I’ve added two as icons on my home screen:

  • “Play Album”, which starts playing the full album that the current song belongs to. This is super useful when I’m listening to a suggested music playlist and it plays a new artist I’d like to hear more of.
  • And “Bookmark Song”. This adds the current song to a playlist I made called “Bookmarks”. I treat it like an Instapaper for music that I can come back to later when I have time to explore.

Shortcuts Screenshot

So, that’s my music setup at the moment. I achingly miss Rdio but am trying to make the best of Apple Music by making it as easy as possible to listen to the music I love and explore the new songs it thinks I’ll enjoy.

Moving back to Google – just a little bit

I’ve been hosting my company’s email with FastMail since 2008. They’re amazing. But my personal email had been with Gmail since the service was in beta in 2004. (And everything before Gmail is lost to time and bit rot. Sigh.)

Around five years ago, I started getting nervous with so much of my online identity tied to an address that I was essentially borrowing and had no real control over. I was never worried about Google losing any of my data, but I had heard countless horror stories of Google’s AI flagging an account for some type of violation and locking out the user with no recourse.

If I ever lost access to my primary email account, I’d be dead.

So I began the rather annoying process of moving all of my online accounts over to use a new address at a domain I control. FastMail imported everything from my old Gmail and Google Calendar account, and with the help of 1Password, I was able to methodically switch my email everywhere else over the course of a few weeks.

I’ve been using my new address full-time for the last five years and now get only two or three non-spam emails a month to my old Gmail account.

Soon after switching emails, I began to question my other dependencies on Google. I started worrying about all the data they – along with Facebook – were collecting on me. I was also concerned about how I was playing a part in their monopoly over the web as a whole.

So I switched to using Duck.com as my full-time search engine. And I gave up Chrome in favor of Firefox. I even tried using Apple Maps as much as possible. In short, even if the alternative service wasn’t on par with their bigger competitor, I felt it was worthwhile to give them my support to encourage a more balanced ecosystem.

The switch mostly went well. I felt like the search results I got with Duck.com were good enough. I only had to fall back to Google for the occasional technical query. Firefox also made great strides with its support for macOS during that time with its Quantum project. And Apple Maps, despite all the awful reviews online, worked just fine navigating around Nashville for me.

But over the last year I’ve started, slowly, coming back to Google’s services.

It all started with Google Photos. I (mostly with the help of my own backup strategies) trust iCloud with my family’s photo archives. But Apple just makes it too inconvenient to use with a partner. Because of the way iCloud is siloed per user, my library is completely walled off from my wife’s. That means I can’t see photos of my kids that she takes. And she can’t see mine. Google Photos supports connecting your library with another person’s. (While that’s a super useful feature, we don’t do that. For our workflow, it’s easier just to sign into my Google account in Google Photos on my wife’s phone so everything funnels into one primary account.)

And while Apple’s Photos.app AI-powered search is mostly good, it’s limited by their privacy stance and what they can process on-device. The result is that it can’t even begin to compete with the ways I’m able to slice, dice, sort, and organize my photos with Google.

Is Google using the faces and location data in my photos to train their robot overlords? Most definitely. Do I care? Yes. But is it enough to outweigh the benefits I get from their otherwise amazing offering that I pay $10/month for? For me, no.

Added to that is the degradation in quality I’ve seen in Duck.com’s search results since last year. I’m not sure what changed under the hood, but I found myself having to re-run searches in Google so frequently that I just gave up and made Google my default choice in January.

I’ve been a paying customer of Dropbox since 2008 (or 2009?). But because of the $10/month I was paying Google for extra photo storage space (2TB, which I get to share with my wife’s Google account) and the $10/month I pay for extra iCloud storage (which I also share with my wife), it just didn’t make sense to keep paying for Dropbox as well when I could use Drive instead. And you know what? After using Drive for the last six months, I’ve found that it’s really quite nice. Especially with the added benefits of everything integrating with Docs and Spreadsheets and their very capable (but decidedly non-iOS and ugly!) mobile apps.

Further, although not really that important, I’ve also migrated my calendars from FastMail back to Google Calendar simply because every other service in the world that wants to integrate with my calendar data (and that I want to give permission to) supports Google’s protocol but not standard CalDAV. It’s a shame, but I’ve decided to make my life easier and just go with it rather than wall myself off by taking a principled stand for open data.

What does this all mean?

I still use Firefox. I stick with Apple Maps when possible. But I’ve slowly moved back to Google’s services in cases where they’re so far ahead of the competition I just can’t help it, which has created a bit of a halo effect with their complementary services.

And in a most decidedly un-Googly turn of events, customers of their Google One extra-storage plans can now talk to a Real Live Human if something goes wrong. That gives me much more confidence in my precious data’s longevity with them – the fear of losing it is what drove me away from Gmail in the first place.

Dammit, Google. I don’t trust you. But I can’t quit you, either.

A Faster Way to Create Multiple Tasks in OmniFocus (with all sorts of details!) Using Drafts.app

Following up on my previous post about using Drafts to create new GitHub issues, here’s another action I built and use all the time.

This allows you to create multiple tasks in OmniFocus with defer dates, due dates, and tags in one step.

It does this by parsing a compact, easy-to-write syntax that I’ve adopted from other OmniFocus actions and tweaked to my liking, and then converting it into TaskPaper format, which can be “pasted” into OmniFocus in one go. This removes the need to confirm each individual task separately.

Yes, you could also do this by writing your tasks in TaskPaper format directly, but I find its syntax (while innovative!) a bit cumbersome for quick entry. The format this action uses isn’t as feature-rich, but it does everything I need and with less typing.

Instructions:

Each line in your draft becomes a new task in OmniFocus, with the exception of “global” tags and dates, which I’ll describe later.

Each task goes on its own line and looks like this:

Some task title @defer-date !due-date #tag1 #tag2 --An optional note

The defer date, due date, tags, and note are all optional. If you use them, the only requirement is that they come AFTER the task’s title and the “--note contents” must be LAST.

The defer and due dates support any syntax/format that OmniFocus can parse. This means you can write them as @today, @tomorrow, @3d, @5w, etc. If you want to use a date format that includes characters other than letters, numbers, and a dash (-), you’ll need to enclose it in parentheses like this: @(May 5, 2019) or !(6/21/2020).

Global Defer/Due Dates:

By default, tasks will only be assigned defer/due dates that are on the same line as the task title. However, if you add a new line that begins with an @ or a !, then that defer or due date will be applied to ALL tasks without their own explicitly assigned date.

Global Tags:

Similarly, if you create a new line with a #, then that tag will be added to ALL tasks. If a task already has tags assigned to it, then the global tag(s) will be combined with the other tags.

Full-Featured (and Contrived) Example:

Write presentation !Friday #work
Research Mother's Day gifts @1w !(5/12/2019) --Flowers are boring
Asparagus #shopping
#personal
@2d
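
For reference, under the rules above, that example should expand into TaskPaper along these lines (my rough reconstruction of the output, not the action’s literal text – the global #personal tag gets added to every task, and the global @2d defer only applies to the tasks without their own defer date):

- Write presentation @due(Friday) @defer(2d) @tags(work, personal)
- Research Mother's Day gifts @defer(1w) @due(5/12/2019) @tags(personal)
    Flowers are boring
- Asparagus @defer(2d) @tags(shopping, personal)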

You can install the action into your own Drafts.app from the action directory.

Creating New GitHub Issues From Drafts.app

After last week’s post about how to create a GitHub issue with image attachments from an email, I thought I’d try to speed up how quickly and easily I can create new issues that don’t come from customer emails – i.e., the ones that just randomly occur to me.

Drafts is my preferred way of capturing text and ideas on Mac and iOS and then doing something with it. It has tons of scripts (actions) to do just about anything, and you can write your own if you need something custom.

So, after a quick look through GitHub’s API docs, I put together this script for Drafts.

It fetches your most recently active repos, presents them in a dialog prompt to pick one, and then creates a new issue in that repo using the contents of the current draft. Simple. Fast. Awesome. And a lot easier than trying to navigate GitHub’s mobile website.
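
If you’re curious what it’s doing under the hood, it boils down to two GitHub API calls. Stripped of the Drafts scripting, they look roughly like this with curl (the token, owner, and repo names are placeholders):

# List my most recently pushed repos - this is what feeds the picker dialog
curl -s -H "Authorization: token YOUR_TOKEN" "https://api.github.com/user/repos?sort=pushed&per_page=20"

# Create a new issue in the chosen repo - the draft's first line becomes
# the title and the rest becomes the body
curl -s -X POST -H "Authorization: token YOUR_TOKEN" -d '{"title": "Issue title", "body": "Issue details"}' "https://api.github.com/repos/OWNER/REPO/issues"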

You can install the action into your own Drafts.app from the action directory.

Backing Up Everything (Again)

This will take a while. Bear with me.

I’m obsessive about backing up my data. I don’t want to take the chance of ever losing anything important. But that doesn’t mean I’m a data hoarder. I like to think I’m pragmatic about it. And I don’t trust anyone else to do it for me.

From around 2006 to 2012, I kept a Mac mini attached to our TV with a Drobo hanging off the back. It had all our downloaded movies on it. And every night it would automatically download the latest releases of our favorite TV shows from Usenet so my wife and I could watch them with Plex the next day. It worked great, and all the media files were stored redundantly across multiple hard drives with tons of storage space. (Would it survive a house fire? No. But files like that weren’t critical.) But with the rise of streaming services and useful pay-to-watch stores like iTunes, now I’d rather just pay someone else to handle all of that for me. So, I don’t keep any media files like that locally any longer.

But my email? My financial and business documents? My family’s photo and home video archive? I’m really obsessive about that.

For most of my computing life, all of that data was small enough to fit on my laptop or desktop’s hard drive. In college, I remember burning a CD (not a DVD) every few months with all of my school work, source code, and photos on it for safekeeping. The internet wasn’t yet fast enough to make backing up to a cloud (were clouds even a thing back then?) feasible, so as my data grew I just cloned everything nightly to a spare drive using SuperDuper and Time Machine. It worked for the most part. Sure, I still worried about my house catching fire and destroying my backups, but there really wasn’t an alternative other than occasionally taking one of the backup drives to work or a friend’s house.

But then the internet got fast, really fast, and syncing everything to the cloud became easy and affordable. I was a beta user of Gmail back in 2004. I became an early paid subscriber of Dropbox around 2008. All of my data was stored in their services and fully available on every computer and – eventually – mobile device. At the time, I thought I had reached peak backup.

I was wrong.

Now we have too much data. My email is around 20GB. My family’s photo library is approaching 500GB. That’s more data than will fit on my laptop’s puny SSD. It will fit on my iMac, but it leaves precious little space for anything else. I could connect external drives, but that gets messy and further complicates my local backup routine. (Yes, Backblaze is a good potential solution to that.)

Another problem is that most of our data now is either created directly in the cloud (email, Google Docs, etc.) or is immediately sent to it (iPhone photos uploaded to iCloud and/or Google Photos), bypassing my local storage. If you trust Google (or Apple) to keep your data safe and backed up, that’s great. I don’t. I’ve heard too many horror stories about one of Google’s automated AI systems flagging an account and locking out the user. And with no way to contact an actual human, you’re dead in the water along with all your data. Especially if you lose access to your primary email account, which is the key to all your other online accounts.

So, I need a way to backup my newly created cloud data, too. This is getting complicated.

First step. My email. This is easy. Five years ago I set up new email addresses for my personal and business accounts with Fastmail. They’re amazing. I imported my 10+ years’ worth of email from Google (sadly, my pre-2004 college email and personal accounts are lost to the ether), set up a forwarding rule in Gmail, and with the help of 1Password, changed all of my online services to use my new email. It took about a month to switch everything over, but now the only email coming to my old Gmail address is spam. Fastmail keeps redundant backups of my email. And I have full IMAP copies available on multiple computers in case they don’t. And if something ever goes wrong, unlike Google – where the advertisers are the customer and I’m the product – I pay Fastmail every month and can call up a live human to talk to.

Source code. I’m a paying GitHub customer. Everything’s stored and backed up there. But still, what if they screw up? I ran a small, self-hosted GitLab server for a while instead of GitHub and set it to back up all my code nightly to S3. That worked great. But I like GitHub’s UI and feature set better. Plus, it’s one less server I have to manage. So, where do I mirror my code to? (Much of my code is checked out locally on my computer, but not all of it.)

Back in 2006, my boss at the web agency I was working at told me about rsync.net. They provide you with a non-interactive Unix shell account that you can pipe data to over SFTP, rsync, or any other standard Unix tool. You pay by the GB/month, and they scale to petabyte sizes for customers who need that. So, I signed up and used them to back up all of my svn (remember svn?) repos. With the rise of git and the switch to GitHub, I cancelled my account and mostly forgot about them.

But, aha!, I now have new data storage problems. Rsync.net could be a great solution again. So, I re-signed up and set up my primary web server to mirror all of my GitHub repos over to them each night. Here’s the script I’m using…
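
(Sketched here with placeholder paths and hostnames rather than verbatim – it assumes a GitHub personal access token, jq, and SSH keys already configured for both GitHub and rsync.net.)

#!/bin/bash
# Nightly mirror of all my GitHub repos over to rsync.net.

TOKEN="YOUR_GITHUB_TOKEN"          # placeholder
MIRROR_DIR="$HOME/github-mirrors"
mkdir -p "$MIRROR_DIR"

# Grab the SSH clone URLs for my repos (add pagination if you have >100)
curl -s -H "Authorization: token $TOKEN" "https://api.github.com/user/repos?per_page=100" | jq -r '.[].ssh_url' |
while read -r url; do
  name=$(basename "$url" .git)
  if [ -d "$MIRROR_DIR/$name.git" ]; then
    # Already mirrored - just fetch whatever changed
    git -C "$MIRROR_DIR/$name.git" remote update --prune
  else
    git clone --mirror "$url" "$MIRROR_DIR/$name.git"
  fi
done

# Ship the mirrors offsite
rsync -az --delete "$MIRROR_DIR/" user@server.com:github-mirrors/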

Next up, important documents. Traditionally, I’ve kept everything that would normally go in my Mac’s “Documents” folder in my Dropbox account. That worked great for a long time. But once I started paying Google for extra storage space for Google Photos (more on that later), it felt silly to keep paying Dropbox as well. So, after 10+ years as a paid subscriber, I downgraded to a free account and moved everything into Google Drive. Sure, it’s not as nice as Dropbox, but it works and saves me $10 a month.

Like I said above, I mostly trust Google, but not entirely. So, let’s sync my Google Drive’s contents to rsync.net, too. Edit your Mac’s crontab to add this line…

30 * * * * /usr/bin/rsync -avz /Users/thall/Google\ Drive/ user@server.com:google-drive

Also, the really important paperwork that would normally live in the fire safe in my garage is kept in a DEVONthink library so I can search the contents of my PDFs. It’s synced automatically with iCloud and available across my mobile devices. But still, better back that up, too.

45 * * * * /usr/bin/rsync -avz /Users/thall/FireSafe.dtBase2 user@server.com:

So, that’s all of my data except for the big one – my family’s photo and home video archives.

For a long time I kept all my family’s archives in Dropbox. I even made an iOS app dedicated to browsing your library. I could have stuck everything in Apple’s Photos.app where it’s available on my devices via iCloud, but that’s tied to my Apple ID. My wife wouldn’t be able to see those photos. Plus, any photos she took on her phone would get stored in her iCloud account and not synced with the main family archive. So, we used the Dropbox app, signed in to my account, to back up our phones’ photos.

But, like I said earlier, our photo and video library became too big to comfortably fit in Dropbox. Plus, Google Photos had just been released and it was amazing. Do I like the thought of Google’s AI robots churning through my photos and possibly using that data to sell me advertisements? No. But their machine-learning expertise and big-data solutions make it really hard to resist. So, I spent a week and moved everything out of Dropbox into Google Photos.

Now everything is sorted into albums, by date, and searchable on any device. I can literally type into their search box “all photos of my wife’s grandmother taken in front of the Golden Gate bridge” and Google returns exactly what I’m looking for. It’s wonderful.

My wife’s phone has the Google Photos app installed with my account on it so every photo she takes gets stored in a shared account we can both access and view on all our devices.

But what’s the recurring theme of this blog post? That’s right. I don’t fully trust any cloud provider to be the only source of my data. Someone clever said “the cloud is just someone else’s computer.” That’s exactly correct. If your data isn’t in at least two different places, it’s not really backed up.

But how do I backup my 500GB+ of photos that are already in Google’s cloud? And then how do I keep new items recently added synced as well?

As usual, I tried to find a way to make it work with rsync.net. I found a great open-source project called rclone. It’s a command line tool that shuffles your files between cloud providers or any SFTP server with lots of configurable options and granularity.

First off, even if rclone does do what I need, I can’t just run it on my Mac. My internet is too slow for the initial backup. I need to use it on one of my servers so I have a fast data center to data center connection between Google and rsync.net.

Getting it set up on one of my Ubuntu servers at Linode was a simple bash one-liner. Configuring it to work with my Google and rsync.net accounts was just a matter of running their easy-to-use configuration wizard.
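
(For reference, that’s just rclone’s documented install script followed by its interactive wizard:)

# Install rclone via their official install script
curl https://rclone.org/install.sh | sudo bash

# Then add the Google Drive and rsync.net (SFTP) remotes interactively
rclone config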

Note: rclone doesn’t support a connection to Google Photos. Instead, you need to login to Google Drive on the web and enable the “Automatically put your Google Photos into a folder in My Drive” option in Settings. (And also tell your Google Backup & Sync Mac app not to sync that folder locally – unless you have the space available – I don’t.) Then, rclone can access your Google Photos data via a special folder in your Drive account.

With everything configured, I ran a few connection tests and it all worked as expected. So, I naively ran this command thinking it would sync everything if I let it run long enough:

rclone copy -P "GoogleDrive:Google Photos" rsync:GooglePhotos

Things started out fine. But due to Google API rate limits, the transfer was quickly throttled to 300KB/sec. That would have taken MONTHS to move all my data. And the connection stalled out entirely after about an hour. I even configured rclone to use my own, private Google OAuth keys, but with the same result. So, I needed a better way to do the initial import.

Google offers their Takeout service. It lets you download an archive of ALL your data from any of their services. I requested an archive of my Google Photos account and eight hours later they emailed me to let me know it was ready. Click the email link to their website, boom. Ten 50GB .tgz files. Now what to do with them?

I can’t download them to my Mac and re-upload them – that’s too slow. Instead, I’ll just grab the download URLs and use curl on my server to get them, extract them, and sync them over.

I don’t have enough room on my primary web server – plus I don’t want to saturate my traffic for any customers visiting my website. So, spin up a new Linode, attach a 500GB network volume, and we’re in business. Right? Nope.

The download links are protected behind my Google account (that’s great!) so I need a web browser to authenticate. Back on my Mac, fire up Charles Proxy and begin the downloads in Safari. Once they start, cancel them. Go to Charles, find the final GET connection, and right-click to copy the request as a curl command including all of the authentication headers and cookies. Paste that command into my server’s Terminal window and watch my 500GB archive download at 150MB(!!)/sec.

(Turns out, extracting all of those huge .tgz files took longer than actually downloading them.)

Finally, rsync everything over to my backup server.

And that’s where I currently am right now. Waiting on 500GB worth of photos and videos to stream across the internet from Linode in Atlanta to rsync.net in Denver. It looks like I have about six more hours to go. Once that’s done, the initial seed of my Google Photos backup will be complete. Next, I need a way to backup anything that gets added in the future.

Between the two of us, my wife and I take about 5 to 10 photos a day. Mostly of our kids. Holidays and special events may produce a bunch more at once, but that’s sporadic. All I need to do is sync the last 24 hours worth of new data once every night.

rclone is the perfect tool for this job. It supports a “--max-age=24h” option that will only grab the latest items, so it will comfortably fit within Google’s API rate limits. Once again, set up a cron job on my server like so:

0 0 * * * rclone copy --max-age=24h "GoogleDrive:Google Photos" rsync:GooglePhotos

And, that’s it. I think I’m done. Really, this time.

All of my important data – backed up to multiple storage providers – and available on all of my and my family’s devices. At least until the whole situation changes yet again.

A few more notes:

All of my web server configuration files are stored in git. As are all of my websites’ actual files. But I still run an hourly cron job to back up all of “/var/www” and “/etc/apache2/sites-available” to rsync.net since it’s actually such a small amount of data. This lets me run one command to re-sync everything in the event I need to move to a new server, without having to clone a ton of individual git repos. (I know I need to learn a better devops technique with reproducible deployments like Ansible, Puppet, or whatever the cool tech is these days. But everything I do is just a standard LAMP stack – no containers, only one or two actual servers – so spinning up a new machine is really just a click in the Linode control panel, a couple of apt-get commands, and dropping my PHP files into a directory.)
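
The job itself is nothing fancy – a couple of rsync lines in the server’s crontab along these lines (the destination directory names here are placeholders):

0 * * * * /usr/bin/rsync -az /var/www/ user@server.com:www-backup/
5 * * * * /usr/bin/rsync -az /etc/apache2/sites-available/ user@server.com:apache-backup/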

My databases are mysqldump’d every hour, versioned, and archived in S3.
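
Something like this, with placeholder database and bucket names, credentials assumed to live in ~/.my.cnf, and the aws CLI installed. (Note the escaped % signs – cron treats a bare % as a newline:)

# Hourly dump: compress and archive to S3 under a timestamped key
0 * * * * mysqldump --single-transaction mydb | gzip | aws s3 cp - "s3://my-backup-bucket/mysql/mydb-$(date +\%Y\%m\%d-\%H).sql.gz"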

All of the source code on my Mac is checked out into a single parent directory in my home folder. It gets rsync’d offsite every hour, just in case. Think of it as a poor man’s Time Machine in case git fails me.

I do a lot of work in The Omni Group’s apps – OmniFocus, OmniOutliner, and OmniGraffle. All of those documents are stored in their free WebDAV sync service and mirrored on my Mac and mobile devices.

All of my music purchases have gone through iTunes since that store debuted however many years ago. I can always re-download my purchases (probably?). Non-iTunes music ripped from CDs long ago, and my huge collection of live music, is stored in iTunes Match for a yearly fee. A few years ago, when I made the switch to streaming music services and mostly stopped buying new albums, I archived all of my mp3s in Amazon S3 as a backup. I should set a recurring reminder to upload any new music I’ve acquired once a year or so.

Also, I have Backblaze running on my desktop and laptop doing its thing. So yeah. I guess that’s yet another layer of redundancy.

GeoHooks

I’ve always been fascinated with geo technologies and location based services. When I worked for Yahoo!, I was always bugging Tom Coates and Gary Gale about all things geo – including the sadly ahead of its time FireEagle web service.

Anyway, for the last two years I’ve been tinkering off and on with an idea of my own – geohooks. They’re webhooks that are triggered based on the location of you, another person, or a combination of multiple people.

I’m really happy to announce that https://geohooks.io is now available for people to beta test. You can sign up for free here: https://app.geohooks.io/beta.php. You’ll also need our iPhone app. You can get in on the TestFlight magic by @’ing me here or on Twitter or by email.

So what can GeoHooks do? Well…

  • Call a webhook when you enter or leave a specific geofenced area
  • Send an SMS to your spouse when you leave work and you’re on your way home
  • Send an SMS to your spouse when you leave work that also includes Google’s traffic estimate
  • Turn off the lights in your smarthome when both of you leave the house
  • Keep track of how long you’re at work each day
  • View a live map of where all of your account members currently are
  • Trigger any service on IFTTT
  • Securely share your current location to 3rd party web services with a level of accuracy you control (pour one out for FireEagle)

And much, much more.

Anything you can trigger with a URL, you can now control with your location. GeoHooks is location-based webhooks for hackers, with a focus on privacy.

I’d love your feedback.

Coding on My iPad Pro

Last month, my 9-5 job was kind enough to gift me an iPad Pro and its new keyboard. I’ve had a few iPads in the past, but they’ve always ended up stashed away, unused, in a drawer somewhere. I simply never got hooked on their utility. I never found that killer app, which, for me, would be the ability to code anywhere. This Pro model, however, has changed all of that.

I’ve always had two Macs. One to take places and another to get “real work” done. In the past that meant a spec’d out iMac and an 11″ MacBook Air. More recently, it’s been a work-issued 15″ MacBook Pro that stays plugged into my cinema display 99% of the time and a MacBook (One) when I travel. The new MacBook is certainly the most portable Mac I’ve ever owned, but it’s slow and lacks the screen space to do any UI intensive work.

Now that I have an iPad Pro, I’ve sold my MacBook and only touch my MacBook Pro when I have serious work to do. The iPad has replaced nearly everything I used my laptop for. That may not be so unbelievable. Lots of folks like Viticci have moved to an iOS-only way of life. As I do more and more tasks on my phone, I’ve been tempted to go primarily iOS myself, but I could never make that jump because I code for a living.

Until now.

I was screen sharing from my iPad to another machine on my local network, when it dawned on me how great it could be if this particular Mac were always available to me – even from outside my house. So, I splurged and ordered a datacenter-hosted Mac Mini from MacStadium. Ten minutes later I was connected to my new Mac in the cloud. And ten minutes after that, I had Xcode open and started testing the waters.

I’m using Screens.app to connect. And with a good internet connection there’s virtually no lag when screen sharing with my new Mac Mini. I’m able to run a native Mac resolution of 1920×1200 on my iPad in full screen. That gives me plenty of room to run Xcode and the iOS Simulator. With Apple’s new external keyboard, all of my usual Xcode and OS X keyboard shortcuts work just fine. And since coding is primarily a keyboard driven activity, my arm doesn’t get tired from reaching out and touching the screen like a designer’s might.

All in all I’m thrilled with my new setup. It gives me the simplicity and benefits of iOS, while still allowing me to do real work outside of the house or from the couch.

iPad Pro Xcode Screenshot

Switching Email Providers

Earlier this year I switched (after 11 years) away from Gmail to an email address at my own domain hosted by FastMail.

I didn’t make this decision lightly. I knew changing email addresses could uproot my very online identity. But I was tired of the new direction Gmail’s interface was heading, and I also worried about the horror stories you occasionally hear when Google accidentally closes or locks someone out of their account. Email is precious to me – especially the history it contains – and I didn’t want to chance losing it.

I’ve used FastMail with my freelance email address for years and have always been extremely satisfied. So choosing them to replace Gmail was a no-brainer.

With that introduction out of the way, what I’d really like to talk about are the four steps I took to ensure a smooth transition to my new email address.

First, I used FastMail’s built-in IMAP importer tool to transfer all eleven years’ worth of Gmail into my new account. The process took about six hours – they emailed me when it was complete.

Then, I set up a forwarding rule in Gmail to forward all mail to my new address and archive a copy in Gmail.

Next, I created a smart folder in 1Password that searched for my Gmail address as the login for any website. Over the next few weeks, I updated a few websites from that list each day with my new email until they were all switched over.

Finally, I set up a rule in FastMail to file all email that was forwarded from Gmail into a specific folder. From there I can see all the remaining websites and mailing lists that have my old email address and update the ones I care about.

Changing email addresses isn’t easy. But it’s certainly doable with a little planning and some work after the fact.

Connecting Amazon Alexa’s To-Dos with OmniFocus

Last week Amazon Alexa and IFTTT hooked up in a big way. They now have triggers that allow you to do things whenever you add an item to your Alexa to-do or shopping lists. This is awesome because now those items don’t have to live within Amazon’s ecosystem. With a little IFTTT tinkering you can quite easily have them shuttled over the net and into OmniFocus.

This means I can be cooking dinner and literally say out loud, “Alexa, add red pepper flakes to my shopping list.” Or “Alexa, remind me to schedule a cookout with Matthew.” And the next time I open OmniFocus, those tasks will be waiting for me. Awesome.

First, you’ll need to login to your IFTTT account and activate the “Amazon Alexa” channel.

Then, create a new recipe with a trigger of “If item added to your Shopping List”.

Next, for the action, send an email to yourself with the following settings…

IFTTT Action Settings Screenshot

Finally, in your email provider’s settings, set up a rule to forward any email with the body “Alexa Todo” to your secret OmniSyncServer email address. They’ll get the email with your to-do item as the subject and add it to OmniFocus.

Boom!

Don’t forget, your Alexa shopping list is separate from your Alexa to-do list. So repeat the steps above for your to-do list to make sure you can add items to either list.

36 Hours With Amazon Echo

For whatever reason, Amazon deemed me worthy of receiving an Echo last week. After laying down my $99 and a quick, overnight shipment, it was on my doorstep Friday afternoon. And now, after giving it a whirl for thirty-six hours, I thought I’d write up my initial observations.

First of all, it’s bigger than I expected. When I first got it, I didn’t like the form factor, thinking I’d instead prefer something shorter and wider, more like a speaker. But now that I’ve positioned it in a few different places in my kitchen, the skinnier, taller design makes sense. In a space-constrained layout, Echo takes up very little surface area on my kitchen counter.

Setup was extremely simple. Just plug the Echo into power and then “download” the Amazon Echo app. I put “download” in quotes because that’s the phrasing Amazon uses in the setup material. But the app isn’t actually a native app from the App Store. It’s a mobile web app they encourage you to add to your home screen.

The mobile app walks you through connecting your Echo to wifi and your Amazon account in just a few minutes. After watching a three minute intro video, the device was ready for my first command. But more on that in a minute.

First I want to say that their mobile web app, while not bad, is one of those mobile apps that make native app developers groan. Rather than being a responsive design that would work on any screen size, it’s specifically built for mobile. That includes a hamburger menu for accessing a side drawer of settings. It tries so hard to look like a native app that I just wish they had taken the time to build one, if that’s what they’re aiming for. But I do get why they went with a web app. It’s the fastest way to get one codebase on every platform. Maybe once Echo is more than a beta project, they’ll build a proper native controller.

While I would obviously prefer a native app, suffering through their web app isn’t a huge deal. The only real issue is that since it runs in Mobile Safari, you’re required to be logged into the Amazon account the Echo is tied to. Not a big deal for me, but it is for my wife, who is normally signed into her own Amazon account and therefore can’t access the Echo app. The solution? She simply doesn’t use the app. A shame.

My first command was, predictably, “Alexa, what’s the weather tomorrow?” Echo thought for a second, its ring of lights glowing, and then promptly answered with a full forecast for the next day.

My wife and I have probably issued fifty or so commands over the last day and a half, and the response times after each question are completely on par with what I expect from Siri or Google Now.

The “always on” nature feels like a game changer – the natural progression of all these competing information services. Already, after just a day of use, it felt natural and seamless in a way that Siri never has. Without really thinking, I automatically said “Alexa, set a timer for 3 minutes” when making my morning coffee.

My wife laughed at the original Echo introduction video earlier this month. She was completely skeptical after such a bad experience with Siri the last few years. But, again, the seamlessness of it won her over. She’s issued more commands than I have.

How about voice recognition? Echo is able to hear and understand me speaking at a completely normal volume from an adjacent room and around a corner. A slightly louder, projecting voice was sufficient 40 feet away through an open doorway. The device is able to hear the wake-word “Alexa” very easily, even while the device itself is playing music. It pauses the music once it hears its name and waits for the rest of your command.

One difference between Echo and Siri is that Apple’s assistant is much more conversational. There are times when Echo will answer a purposely non-answerable question with a fun reply, but not as often or with nearly as much breadth as Siri. Part of that, of course, is that Apple has had a few years and vastly more user interaction to tune Siri’s personality. It also might simply be due to Amazon purposefully not making Echo as human as Siri pretends to be.

When playing music at low volumes, Echo isn’t nearly as crisp and audible as my kitchen Sonos speaker. It sounds fine, but not great, at louder volumes. But with a sleeping baby in our house, low volumes are a must, and Echo just sounds muddled when listening to what I know are good audio recordings.

As luck would have it, earlier this year I uploaded all of my iTunes library into Amazon Music so it would be streamable on my Sonos. (Sonos famously doesn’t play nice with the Apple ecosystem.) Having 80 gigs of mp3s living in the cloud and available on Echo with a simple voice command is awesome.

I’m an Amazon Prime member, so, in theory, I have access to their “million song” library, but I haven’t tapped into that yet since my personal collection is so readily available. I have no idea how Amazon’s streaming library compares with Rdio or Spotify.

All of this music integration really just makes me yearn for a voice-controlled Sonos. With their speakers already situated throughout my house, it seems so natural for them to pivot into a full-on tech company capable of responding to my voice. Or at least partner with Google (Now) or Microsoft (Cortana) to make their tech available to an army of passionate Sonos users.

The other pipe dream Echo opens up is the possibility of an open API and/or an official way to shuttle my reminder and shopping list data out of Amazon’s ecosystem and into whatever apps I happen to use for that type of data. It would also be amazing if one day Amazon enabled developers via AWS to tap into their speech recognition and processing platform. Imagine if Amazon allowed you to stream voice audio to AWS, and they’d do the speech recognition and then further break down the input into verbs, actions, and nouns that could trigger webhooks within your infrastructure.

Anyway, I’m getting ahead of myself.

At $199, is Echo worth the price? Maybe. If you already have a Sonos in the room, possibly not. But at the Prime member price of $99, it was a no-brainer impulse buy that I’m very much enjoying.