MGM Files for Bankruptcy

http://www.comingsoon.net/news/movienews.php?id=71367

Wow, I never expected MGM to be in such deep water.  Sony led the consortium that purchased MGM a few years ago, amassing a huge catalog of Hollywood titles in the process.  After all of its acquisitions, Sony now holds the deepest, broadest catalog of movie titles anywhere in Hollywood.  MGM's bankruptcy likely means its assets won't make it out intact, or will be sold off in a fire sale to low bidders.  Big companies in Hollywood tend to protect their franchises pretty well.  Sure, there are tons of Tom & Jerry lunchboxes and Transformers movie tie-ins (like crappy video games), but I somehow trust a company like MGM to protect its franchises better than a party that acquires them in a fire sale.

~s

Television & Side Channel Interaction

One of the trickiest problems of bringing applications and tons of content to TV is that of search: finding the right content at the right time.  Xbox solved it through what is hopefully an intuitive, if sometimes dull, UI: Video marketplace, games marketplace, in-game content marketplaces, Netflix, etc. are menu-driven experiences that expose long lists of content to the user.  While it has nice animations and 3D perspective, it's still a list of titles.  PlayStation has similar designs, as does Apple on Apple TV.  Google, on the other hand, has focused on text search as a content discovery solution.  At first glance, that seems like a truly fresh approach to the problem.  After all, I find lots of content that way on the web.

However, as Google is finding out right now, the problem is in getting text onto the screen: who wants to sit with a keyboard on her lap while on the couch?  I just left my keyboard at work; I don't want to see another one when I get home to relax!  The new Sony Google TV has a horrendous remote that will not be winning over any critics.  Xbox also has a text entry attachment for its controllers, but it's completely optional, and it's very likely that 90% of the user base doesn't know it exists (let alone needs it to enter text).

What most of these system implementers are missing is that many users (especially ones likely to get a fancy Internet TV) already have a great text entry device in their hands: a smartphone.  There is only one reason not to leverage that fact: accessories.  For Xbox (and other traditional hardware manufacturers), accessories are a major source of profit, offsetting the loss on the console itself and making money for the venture as a whole.  However, these accessories see attach rates of about 5% in successful scenarios.  That means 95% of your user base doesn't see the value of the scenario that you're solving through the accessory.  One might wonder why you went ahead and developed the code behind that scenario in the first place, but that's a whole other discussion.  My point is that, while accessories are profitable, your ultimate goal should be to enable great scenarios, and accessories aren't helping 95% of your users.

But, let's take the theory a step further.  Is there a reason to limit the smartphone to text entry?  Not really.  What I truly want to do is manipulate the content on the big screen.  The smartphone (or tablet?) in front of me is already efficient at fine-grained interactions like search & browse.  The TV is really good at one thing: throwing sound & video at me (and those around me) with the highest fidelity available.  The best thing technology can do in this situation is get out of the way, and the best way for it to do that is to put everything right in front of the user on a smaller device.

I want to call this "side channel interaction," and it's uniquely appropriate for a living room situation.  While the interaction challenges of getting text to the screen are daunting, the social aspect of TV means that side channel interaction is also more appropriate.  If you could browse what's on TV without disturbing everyone else, that's interesting.  If you all could play Jeopardy! without having to scream out the answer before people had time to think, that's interesting.  If you could collaboratively edit the video from your vacation without crowding around a desktop, that's interesting.  Imagine being able to get rid of TV advertising entirely by using the side channel to monetize.  Each person in the room could get a personalized ad through notifications that sync with the video content.

Apple has started to use the side channel with AirPlay, which enables you to 'flick' content from your phone to the TV screen through Apple TV.  What enabled them to build that was their investment in Bonjour networking a very long time ago.  Other operating systems (like Windows and Linux) have implementations of Bonjour, but AirPlay has built-in cryptographic keys that prevent just anyone from being a part of the AirPlay ecosystem.

Even if you're not Apple, it's possible to have side channel interaction with TV.  This kind of technology is super cool, but it's a workaround for what is really needed: smart televisions that know the magic is not on the big screen, but actually on the screen right in front of you.  As Apple has shown with Apple TV, the smarts don't have to be in the TV itself, either.
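As a toy illustration (this is not the AirPlay protocol; the port, message format, and function names here are all made up), a "flick" can be as little as a small JSON payload sent from the phone to a listener on the TV over the local network:

```python
import json
import socket
import threading

# TV side: accept one connection and decode the JSON payload.
def tv_listener(server_sock, received):
    conn, _ = server_sock.accept()
    with conn:
        data = b""
        while True:                      # read until the phone closes
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        received.append(json.loads(data.decode("utf-8")))

# Phone side: tell the TV what to play with a tiny JSON message.
def flick_to_tv(host, port, media_url):
    msg = json.dumps({"action": "play", "url": media_url}).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(msg)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # ephemeral port for the demo
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=tv_listener, args=(server, received))
t.start()
flick_to_tv("127.0.0.1", port, "http://example.com/vacation.mp4")
t.join()
server.close()
```

A real system would discover the listener via Bonjour instead of hard-coding an address, and authenticate the sender, which is exactly the part AirPlay locks down with its keys.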

In my opinion, Xbox, PlayStation, and Google all have a #fail on their hands until they can interact with mobile devices seamlessly.  Xbox would tell you that Kinect will usher in an era of natural user interfaces.  While I love Kinect and the people that made the impossible happen (I even got to work on the project just a little bit), I think laziness will play a trump card: when I can flick my finger on my personal device, why make larger movements with my hands?

~s

A different kind of Netflix app

Over the past several months, I've been working on a Netflix iPad app in my available time.  I've had the idea for what I'm trying to achieve with this app since before iPad was released, though I never had time to execute on it until recently.  As with many of my personal projects, this one stemmed from personal frustrations I had with other Netflix apps on iPad.  These apps all share some key problems around design, information architecture, and performance.

There are only a few apps for Netflix available on iPad (if you don't count the iPhone apps that run on the device).  The two prominent ones that let you edit your queue and add new content to it are MovieBuddy and iPhlixHD.  I was instantly turned off by MovieBuddy's gaudy animated graphics and slow performance in loading recommendations.  iPhlixHD, on the other hand, is a very slick app, but its paradigm basically transposes the Netflix API into a menu system.  This hides information the user could use up front to determine how interesting the content is: runtime, release date, availability, and star ratings.

When it comes to performance, many apps lack a cache of information even though the Netflix API allows for it.  Caching greatly speeds up the app's boot time as well as resource loading.  It also avoids re-fetching detailed information about a resource when the user backs out of a detail view and returns to it moments later.
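A minimal sketch of the kind of cache I mean, in Python for brevity (the ResourceCache name and its TTL policy are my own invention, not the Netflix API):

```python
import time

# Responses keyed by resource URL, expiring after a TTL, so returning to
# a recently viewed detail screen doesn't cost a second network round trip.
class ResourceCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.entries = {}                # url -> (stored_at, data)

    def get(self, url):
        hit = self.entries.get(url)
        if hit is None:
            return None
        stored_at, data = hit
        if time.time() - stored_at > self.ttl:
            del self.entries[url]        # stale; caller should refetch
            return None
        return data

    def put(self, url, data):
        self.entries[url] = (time.time(), data)

cache = ResourceCache(ttl_seconds=300)
cache.put("/titles/12345", {"title": "Some Movie", "runtime": 112})
fresh = cache.get("/titles/12345")       # served from memory, no network
```

On iOS you'd want the same idea persisted to disk as well, so the cache survives app launches and keeps boot time down.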

I set out to solve these problems for myself by writing a fairly thorough application.  My previous posts about iOS programming patterns came about largely due to this app.  Here is a breakdown of how I solved these problems: In design, the app is very visual: you see large, friendly box art rather than a list of titles.  In information architecture, the pieces of information you really use to make decisions about what to watch are bubbled up to the surface, rather than buried below a tap.  There are also lots of performance-related features in the software to make sure the app boots fast and stays fast throughout.

I'm about to launch a beta of the application.  I'd love to get feedback.  If you'd like to participate, please fill out this survey: http://www.tinyurl.com/syfterbeta

~s

iOS: HTTP Connection Management

In my previous post, I detailed a common software pattern that anyone writing media-rich iOS apps probably runs into, so I was surprised Apple hadn't provided a generic implementation in the SDK.  Another pattern I'm running into is a prioritized and throttled HTTP connection pool.

As you navigate through a media-rich application that loads resources, the resources need to be streamed in to support the user's working set.  However, it's possible the user transitions among working sets very quickly, invalidating the previous working set before all of its resources are even loaded.  For example, you can imagine a photo gallery app that has one continuous view of a large photo album.  The user may scroll through lots of photos before they are even loaded in an effort to get to the end of the album.

While the cache explained in my previous post would help keep everything stable, the cache still needs to go out to storage to retrieve the photos.  To keep things simple, developers typically write code that serially retrieves required resources, disregarding whether they are in the working set anymore.  If the cache is still interested, it will store the retrieved data.  Otherwise, it will drop it on the floor.  However, this can result in a poor user experience in the scenario above, where the user must wait for all the previously requested images to load before the ones at the bottom do.  The challenge is to improve the code so that it stops retrieving resources once they fall outside the working set.
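The fix can be sketched in a few lines (names here are illustrative; a real app would wrap this around its image loader): before issuing each request, check whether the resource is still wanted, instead of blindly draining the original queue.

```python
# Drain a request queue, but skip anything the user has scrolled past.
# `fetch` stands in for the actual network call.
def drain_queue(queue, working_set, fetch):
    fetched = []
    for url in queue:
        if url not in working_set:   # no longer on screen; don't fetch
            continue
        fetched.append(fetch(url))
    return fetched

queue = ["/photo/1", "/photo/2", "/photo/3"]
working_set = {"/photo/3"}           # user jumped to the end of the album
results = drain_queue(queue, working_set, fetch=lambda url: url.upper())
```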

In more complex scenarios, the application may want to lazily load resources in the background while the user works on the current working set.  However, this can once again create the same poor experience if the user must wait for these background resources to complete loading before resources in the user's working set are loaded.  Even worse, these background resources may not even be visible, so the user has no idea why she is waiting!

Hence the need for prioritizing HTTP connections.  To do this, one needs to assign a relative priority to each connection and place it in a pool of requests.  Once in the pool, some logic needs to decide the right connection to execute.  You can easily add more complexity by allowing multiple connections in flight.  The generic pattern here is that of a scheduler: "objects" become "ready" and are then "scheduled" for "execution" once "compute resources" are available.  The "scheduling" of an object can be simple or complex: operating systems are known for having algorithms that attempt to fairly give compute resources to lots of objects with differing priorities.  Validating the fairness of a scheduling algorithm can get pretty hairy.  For our problem, simpler solutions are typically good enough.
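One simple version of that scheduler can be sketched with a priority heap (the RequestPool name and the nice-style "lower number wins" priorities are assumptions of mine, not any platform API; on iOS this would sit on top of NSURLConnection):

```python
import heapq
import itertools

# Requests enter a pool with a priority; a fixed number of "in flight"
# slots drain the pool highest-priority first.
class RequestPool:
    def __init__(self, max_in_flight=2):
        self.max_in_flight = max_in_flight
        self.in_flight = 0
        self.heap = []
        self.counter = itertools.count()   # tie-breaker keeps FIFO order

    def submit(self, url, priority):
        # Lower number = higher priority, like Unix nice values.
        heapq.heappush(self.heap, (priority, next(self.counter), url))

    def next_request(self):
        # The next URL to execute, or None if slots are full or pool empty.
        if self.in_flight >= self.max_in_flight or not self.heap:
            return None
        self.in_flight += 1
        return heapq.heappop(self.heap)[2]

    def finished(self, url):
        self.in_flight -= 1

pool = RequestPool(max_in_flight=1)
pool.submit("/photo/99", priority=5)   # background prefetch
pool.submit("/photo/3", priority=1)    # visible on screen right now
first = pool.next_request()            # the on-screen photo wins
```

This is the "good enough" end of the spectrum: no fairness guarantees, just strict priority with FIFO tie-breaking, which already fixes the scenario where background prefetches starve on-screen images.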

A related, though distinct, problem one runs into when developing apps with a lot of HTTP traffic is throttling.  Lots of web services want clients to throttle their connections in an effort to curtail abusive usage.  However, the HTTP library on iOS doesn't have a way to throttle a group of connections to the same HTTP server.  Most web services only allow a certain number of requests per period of time, and in an age where multiple connections to each server are the norm, enforcing that limit client-side could mean several things.  Conservatively, an HTTP library could enforce a static gap between the closure of one connection and the opening of the next to the same server.
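That conservative policy is easy to sketch (HostThrottle is a made-up name, and this is not part of any iOS HTTP library): track the last request time per host and ask the caller to wait out the remainder of a minimum interval.

```python
import time
from collections import defaultdict

# Enforce a minimum gap between consecutive requests to the same host.
class HostThrottle:
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self.last_request = defaultdict(float)   # host -> last request time

    def wait_time(self, host, now=None):
        # Seconds the caller should sleep before hitting this host again.
        now = time.time() if now is None else now
        elapsed = now - self.last_request[host]
        return max(0.0, self.min_interval - elapsed)

    def record(self, host, now=None):
        self.last_request[host] = time.time() if now is None else now

throttle = HostThrottle(min_interval=1.0)
throttle.record("api.example.com", now=100.0)
delay = throttle.wait_time("api.example.com", now=100.4)   # 0.6s to go
```

Combined with the scheduler above, the pool would simply refuse to hand out a request for a host until its wait time reaches zero.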

I've solved these problems within the context of the app that I'm currently writing.  It's not the most general solution, as it's tailored to the MPOAuth connection library.  Hopefully I'll get some time to refactor it and throw it up on GitHub.  Let me know if this would be useful to you!

~s