How do you deal with incoming calls or similar disturbances while riding a bicycle? Today a bicyclist in front of me demonstrated his reaction: he picked up his phone and tried to interact with it, nearly causing an accident with other bicyclists. The more proper response would be to ignore it and, if it felt important, find a place to stop and check.
Glances provide a second option: you can glance at your watch. It’s still a disturbance, but better than fumbling one-handed with your phone. But there can be no interaction: your other hand is firmly on the handlebar, and the hand on the watch arm cannot touch it. Suddenly, the interaction that turns a short glance into a long glance becomes relevant. Looking at the watch a bit longer can provide additional information about the disturbance, making it easier to decide whether to stop and take care of it, or keep on cycling.
I’m curious to see how long a long glance is, how well glances and bike riding or car driving work together (it’s probably still a better idea to focus on what you’re doing), and how we can use touchless interactions initiated by a glance.
When the iPhone launched, and for the first versions of the iOS SDK, an app was a bundle (a directory with metadata, if you will) with the suffix .app. System and third-party apps alike were each contained in a single bundle. No app spanned multiple bundles, and no bundle contained multiple apps.
Already I’m simplifying, because there was one more thing to the app bundle: optionally, you could add a settings bundle, which would load into Settings.app and allow the user to change your app’s settings from outside your app.
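For those who haven’t seen one, a settings bundle is driven by a property list. A minimal sketch of such a Root.plist might look like the following (the key and title here are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Settings.app renders each specifier in this array as a row. -->
    <key>PreferenceSpecifiers</key>
    <array>
        <dict>
            <key>Type</key>
            <string>PSToggleSwitchSpecifier</string>
            <key>Title</key>
            <string>Enable Sync</string>
            <!-- The NSUserDefaults key your app reads at launch. -->
            <key>Key</key>
            <string>enable_sync</string>
            <key>DefaultValue</key>
            <true/>
        </dict>
    </array>
</dict>
</plist>
```

The app itself never renders this UI; Settings.app does, writing the values into the app’s user defaults.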
Since then, things have become more complicated. To start with, we got universal apps: still one app, but now with the complexity of targeting two platforms, the iPhone and the iPad.
Following that, we got extensions, which could (and in my opinion should) be seen as apps in their own right: little plugins, if you will, to the share panel, the today panel, the keyboard, and the photo editor. Many apps that used to be confined to their own app now live more hosted in other apps than in their own. But they are still packaged together with “the app”. And sold together. Because they share a bundle, you cannot delete one without deleting the other.
With the Apple Watch, we get three more extensions: the glance, the notification (with short glance and long glance), and the watch app. These are supposed to provide windows into your app, but I think that again, for many apps, these will be more the app than the main iOS app is. And again, if you delete the main app, these extensions will also disappear from your Apple Watch.
I’m working on a hobby project at the moment, and I think it’s a good illustration of what is bundled with the app now:
What worries me the most is that the main app is required. And if it’s not the centerpiece, people won’t be sure why they should pay for it. To me, that’s very different from a world where people could buy a kick-ass keyboard, share extension, photo filter, Apple Watch app, or similar on its own.
The other thing that worries me is what will pay for all these extensions. As you can see from my hobby project, they’re a whole lot of extra work on top, and honestly, they’re more or less a basic requirement. If my app were a paid app, I don’t think people would accept an in-app purchase to unlock the Apple Watch extensions. I don’t even think Apple would allow me to submit an app with three extensions that do nothing by default. And shipping something that doesn’t do much and then opening up functionality later seems just as bad. Showing ads on this little constrained device would probably not sit right with many people either: you’ve just spent, say, $500 on this piece of jewelry, and now it’s plastered with ads most of the time. Nope, that’s no good.
I think the “everything is in the ‘app’ bundle” approach we have now does a lot to cement, or even worsen, how hard it is for people to make money off their apps alone, and I really think this is unfortunate. I would love for this ecosystem to become something people can make a good living from by making good things. If people can make a living selling good wax candles, why should they not be able to make a living selling great digital tools? For now, the only good business I know of is for the candle maker to use the apps as an entry into his candle shop, to sell more, and perhaps more custom, wax candles.
For WatchKit extensions, I’ve used the parallel of puppeteering: the app makes the extension do things, but the extension itself has close to no logic. Others use the parallel of the browser, and I do like this. Like a browser serving up state in a context on a remote screen, the app can project state in a context onto an extension (a remote view controller).
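The puppeteering model can be sketched in a few lines of plain Swift. These types are invented for illustration and are not the real WatchKit API; the point is only the shape of the relationship: the remote side replays property updates and holds no logic of its own.

```swift
// The "watch" side: a dumb display that only replays the property
// updates it receives. No application logic lives here.
final class RemoteLabel {
    private(set) var text = ""
    // In the real thing the update would travel over Bluetooth;
    // here it is just a method call.
    func receive(update: String) { text = update }
}

// The "phone" side: all decisions are made here, and the result is
// pushed to the remote view as plain data.
final class PhoneController {
    let label = RemoteLabel()
    func userDidTap(count: Int) {
        // Compute the string on the phone, then push it across.
        label.receive(update: "Tapped \(count) times")
    }
}
```

Driving it looks like `PhoneController().userDidTap(count: 3)`, after which the remote label’s text reads “Tapped 3 times”; the puppet never decided anything itself.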
Continuing on yesterday’s post and the idea that hosting extensions (I really prefer the name remote view controllers, like we saw in iOS 6) is something that can be opened up, let’s take it to the extreme and say it was proposed as an open standard, much like HTTP. All of a sudden, this could become the new browser, allowing any app to project its content into it.
That would be absolutely awesome! Think about all the opportunities for device integrations! It could let me treat a 5K iMac as a dumb terminal, keeping all my personal information, documents, and state on my phone while interacting with it through a beautiful desktop experience. No compromises, all integration! At the same time, you’d have the Apple Watch integration, with extensions running off the same iMac when the two are near each other. Third parties such as keyboard makers could integrate a little screen, Google Glass could take it to the next level, and even other platforms could participate, making for a seamless integration. This would be a dream of devices coming together, much like we experienced when having disks formatted for Mac, Amiga, or PC was no longer a thing. Data was just data. Now state in a context can be state across a shared context, optimized for the best interaction on each device.
Having thought that thought, please, Apple, propose extensions as an open standard. It would make our devices so much more integrated, reducing hassle and truly delighting. Would this be a competitive advantage for Apple? I’d argue yes. Sure, not for people like me who buy the whole stack already, but for people like my mother, who has a PC, an Android phone, and an iPad. Her experience would be so amazingly upgraded that I’m sure she’d be much more likely to buy from Apple again if she didn’t have to consider which ecosystem her new device would integrate with and which it would not.
Back in 2012 I wrote about the private framework around remote view controllers, hinting that developers should keep it at the front of their minds. With iOS 8 this became extensions, and with the Apple Watch, it is the main interface for the watch, at least until WWDC 2015.
On episode 57 of the Debug podcast, Guy & co. mentioned that the Apple TV is a prime target for extensions, and I agree, but let’s think it over. In my case, I have a kid and no time to watch TV. I use the TV mainly to stream music, just like I would with an AirPort Express (why, oh why, is this not in the AirPort Extreme and Time Capsule! And why, oh why, have the Apple TV and AirPort Express not merged yet? Neither has been updated in forever. Sigh….). The killer feature here, and I really did not expect this to be of any interest when I bought it, is that if I turn the TV on while playing music, it also shows the most recent photos I’ve taken with my iPhone, which means up-to-date photos of my kid. That’s really awesome, and makes both my wife and me stop, look, and enjoy. Oh, and my boy likes it too. :-) The second big thing is AirPlay, being able to share my screen on the TV. And this, as I see it, is the niche that Apple TV extensions could carve out.
Take Adobe Lightroom. I love that they have brought it to the iPad; I’ve had my 160k photos stored and arranged in Lightroom since around version 2.0. There are tons of things I’d want them to work on in the app, and of course, showing photos on the TV is part of that. At the moment, my only choice is AirPlay, which makes what I do and what my parents see while visiting one and the same. Presentation tools like Keynote and PowerPoint have given us separate screen output for ages, so this seems clunky today.
But when would the screen extension fire off? With extensions on your phone or Mac it’s easy: just fire when it seems appropriate. With the Apple Watch, if it’s close by, you’re wearing it, and the phone can show stuff there. But with the Apple TV, most of the time I’m nearby, the TV is off, or it’s not the source being displayed. So there needs to be some mechanism for alerting the app when it would be proper to use the display, or for the app to request to use it.
That got me thinking: where else would I like content to be displayed? Sure, Google has shown that glasses are a valid target, and HTC has shown the same for phone covers. I can imagine a lot of places, and not all of them are places I think Apple would get into. Would it be worth it to Apple to allow third parties to host remote view controllers (sorry, Apple marketing: extensions)? That would, for instance, open up the possibility of my window manufacturer letting apps project something onto my window. Or my refrigerator. Or whatever. Would that be a thing for Apple, just like with MFi, and would it give me value as a customer?
Yesterday, WatchKit became available to us developers and, as far as I can tell, open for the entire world to see, which is a first for Apple, at least since the iPhone introduction. I thought it would be good to share my first impressions.
I remember the day the first official iPhone SDK entered beta. It was new. It was fresh. At the same time, it seemed well thought out. It was opinionated. It laid out a path for where we would go. And it was a very nice framework on top of BSD. I was thrilled!
WatchKit doesn’t quite feel this way. The first thing I noticed was that “the way it’s done” in iOS is not the way it’s done in WatchKit. For instance, there is no Auto Layout. Auto Layout is THE way to do dynamic layout; Apple has been pushing it hard. Yet it’s nowhere to be found in WatchKit. Its older sibling, springs and struts, is also missing in action. Instead we have a Swing-like layout system of stacked elements, stacked either horizontally or vertically. Heights and widths are explicit, yet the watch comes right off the bat in two screen resolutions. So much for interface dynamism.
Despite Apple pushing Swift heavily, the first examples they chose to show, the ones in the demo video at http://developer.apple.com/watchkit, are all Objective-C. Sure, they provide demos in both Objective-C and Swift, but I’m surprised they’re not keeping the message focused.
Controls are not UIViews, and they don’t seem to be layer-backed. This means no Core Animation, so all animations have to be a series of pre-rendered images rather than smooth animations computed on the fly. And yes, that means pre-rendered animations in two sizes, for the 38mm and the 42mm.
Is it running BSD? No idea. This is probably only important to my nostalgia, as I can’t remember having dived down to the BSD layer in the last year. So it’s probably just my security blanket, but I like it being there and being available to my native app. But WatchKit apps are not native. They are resources pushed over from the phone and managed by the phone. All the logic is on the phone; the watch will only show resources already there, plus a tiny bit of content generated on the fly by the phone and pushed over via what I expect is Bluetooth. So it’s not native, it’s more like a puppet theatre. That’s why map components in our apps cannot be dynamic. I’m sure Apple’s apps are native, so we’re really back to the iPhone OS 1.1 situation where we were only allowed to make web apps. I hope Apple’s apps have the BSD layer available. I would very much like to have that too. Perhaps with version 2.0?
But yes, we got a lot more than we expected. What I, and many others, expected was the ability to make glances, but we also got notifications and these puppet-theatre apps. That gives us an awful lot of tools to begin with. Truth be told, I can probably implement all the ideas I’ve had so far with this. It’s going to take more work than I expected, with the resolution-dependent layout, but much can still be done, and I expect we’ll see a bunch of awesome apps.
The limitations, though, make me even more curious as to what kind of computer the S1 chip really is.
One thing I found really interesting is the UITableView replacement. UITableView is an old beast that’s been our bread and butter for a very long time. I look forward to giving its replacement a run and seeing what kind of abuse it can withstand. With no scroll views, it looks to be the target of an awful lot of experimentation in the months to come.