2007/10/27

I wish I had an Apple TV

About a year ago Apple pre-announced a product that is now called the Apple TV. It simply provides an interface on your TV to the media that lives on your computer. Movies and music on your computer can be played on your TV, and the computer doesn’t need to be hooked up right next to the TV with that adapter that you always lose.

When it was initially announced, I loved the idea of this product. I listen to my music through an Airport Express, which streams music playing in iTunes from my computer to my stereo. I buy my music predominantly through iTunes and the whole system works very well. Extending that metaphor to video is natural. What makes video on the Apple TV better than music is serialised TV shows through the iTunes Store: you buy the whole series and each episode is transparently downloaded to your computer the day it’s broadcast on TV.

No hassle with ads or having to schedule your viewing at an exact time every week. A TiVo does this too (but we can’t get it in Australia); crucially, though, a TiVo can’t record what isn’t shown on TV. Buying through iTunes also means access to back catalogues, currently the domain of DVD sales. BitTorrent will give you all this and more, but it’s illegal and not as easy to use as iTunes. (Remember, I’m talking about the Apple TV as a product to market to people.)

To summarise how the Apple TV fits into Apple’s product lineup, it’s essentially a way to consume content from the iTunes Store. If you rip your DVDs onto your computer, that’s fine too, but its raison d’être is to make people buy series and movies through iTunes.

The kicker of all this is that I don’t have an Apple TV, for one reason only: it doesn’t have a composite video output. That’s the yellow cable in the red/yellow/white trio that used to be the standard for most video/audio connections. My TV is pretty damn big and old, and it only supports composite video. So that rules me out from the get-go. How many other people are in the same situation? According to Apple’s market research, I guess, not that many.

I’ve seen slews of requests for more features for the Apple TV, all pretty similar to the recent article “Apple TV future” at Apple Pulse. Recording and a DVD drive, so they say; that’ll make people stand up and buy this thing. Bollocks. I want an Apple TV so I can avoid regular broadcast TV and eventually ditch my DVD collection.

I don’t think something like the Apple TV will be a big seller for a long time. How many Airport Express units were sold on the basis of their wireless music streaming? Over the next five years, though, I think there is a market to be created. The iPod took that long to become a monster, after all.

The Apple TV in conjunction with the iTunes Store is a platform with great potential (either one on its own requires an equivalent of the other). I’d be happy to align my allegiance with any other company doing the same sort of thing, too. I’ve got a soft spot for Apple, though, and as far as I know there’s no other big company approaching it like they are. It’s a travesty that many of the content owners aren’t going along for the ride, but I desperately hope that this is just a matter of time.

Revisiting code

Michael McCracken said it first:

Once you get a piece of code to the point where you believe it works - it’s passing its tests - go back over it and edit it. That is, go back and edit it for clarity, flow, and style. Just as if it were an essay.

Les Orchard at 0xDECAFBAD (love that site name) said it better:

Ugly code kills motivation and comprehension

There’s a curious tension between the “it works; ship it” mentality and those who say “this code isn’t perfect; it isn’t ready”. I don’t have a background in computer science, so the code that I ship tends to be of the “it works” variety; over the months and years I’ll see ways to do things better (often through bugs cropping up that shouldn’t have happened in the first place) and the code base improves piecemeal.

But one thing I have noticed is that if I write really nice documentation (of both the user interface and the code itself) I’m more inclined to go back in and start messing around with little tune-ups. LaTeX’s docstrip is ideal for this because you can freely mix code and documentation (and even present the code non-linearly). Some of my time might be wasted choosing the font that my code is presented in, and nicely explaining my algorithms with carefully typeset figures and tables, but the upshot is that everything becomes a lot more accessible for someone to edit in the future. And that includes me.

(Oh, when I say that, I don’t mean it’s a good idea to program in LaTeX; it’s rather hideous, actually. But docstrip doesn’t have to be used for LaTeX programming, and one day I’ll experiment with writing other code with it.)
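As a concrete illustration of the mixing I mean, here is roughly what a docstrip source file looks like; the file name example.dtx and the little macro it defines are made up purely for the sake of the sketch. Lines starting with a single % are typeset as documentation, the code sits inside macrocode environments, and guards like %<*package> tell docstrip which lines end up in the generated file:

    % \section{Implementation}
    % The macro defined below prints a friendly greeting. This prose is
    % typeset alongside the code when the dtx file itself is compiled.
    %    \begin{macrocode}
    %<*package>
    \newcommand{\hello}{Hello, world!}
    %</package>
    %    \end{macrocode}

A small installer file then runs docstrip to strip the commentary and extract just the code, along the lines of \input docstrip.tex followed by \generate{\file{example.sty}{\from{example.dtx}{package}}}.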

2007/10/23

Technical libraries for technical times

I hope that libraries of technical information are going to be unrecognisable in the future. And I hope that information is going to become globalised and centralised. This post is the first time I’ve been able to put solid thoughts together on some ideas I’ve been vaguely musing over for the last few months. These ideas are about how we are taught, how we learn, and how we research. Obviously my viewpoint is going to be very biased towards my own experiences, but I hope that my ideas here can eventually be generalised.

There are several projects around the place to create open centres of learning, an initiative that I strongly support. Unfortunately, the problem so far seems to be that putting together even a semester’s worth of learning material is an enormous task, and few people are creating content for these websites. Examples are Wikiversity, Wikibooks, The Open University and Connexions. Browsing through these websites reveals extraordinary nuggets of information completely out of context, and shows how very far we have to go before it’s possible to access learning materials for an entire discipline like Mechanical Engineering (to use something I’m familiar with).

(Note that these projects differ from Google Books or OpenLibrary or even the pioneering Project Gutenberg, which all simply collect books without linking them together or providing facilities to create new books or edit existing ones. Both kinds of project have their place.)

But there’s more to the problem than getting a thousand engineers to write a thousand books and calling it a day. Saying we want open content isn’t enough: the content has to be written for a purpose, and needs to be written differently depending on what it’s being written for. If we imagine the ideal case where everything we wanted to know was linked through a giant library, how would we be using that library? I break it down into three categories: learning, reference, and research.

The similarity between learning and reference is much greater than that between reference and research. (Indeed, I’m not even sure about the “research” layer at this stage; more on that later.) Much of their content could even overlap. But whereas a reference book will be explicit and terse, a learning book will have analogies and examples and tutorials, and may very well skip the detail that makes a reference book what it is (dry and boring; no, I jest).

But remember that we’re no longer talking about books. This information would exist in “blobs” in the library, to be chained together in whichever order made sense for the application. Control theory is widely applicable across at least mechanical, electrical, and chemical engineering, but the teaching methods between them often vary considerably. Similarly for the more fundamental maths that underpins the more rigorous engineering subjects.

And this chaining, I feel, is one of the fundamental advantages of a central store of information. Places like Wikiversity might have modules that are related to each other, but the best they can hope for is a cross-reference linking them together. It’s impossible to reduce science into pieces of “things to know” so small that they can be laid out linearly and absorbed in one pass. There are branches, dead ends, intersections, and circular loops that defy any canonical reference. For different applications, different references need to be written. By chaining blobs together, not only can material be re-used efficiently, but consistent terminology can be used across all scientific disciplines.

Greater abstractions can only be built on top of steady foundations, and as more and more becomes known about the world we’re approaching the limits of what we can learn in the four or five years we’re given as graduate researchers. And this is where that “research layer” I spoke of earlier comes in. Every new research student, guided or not, will follow a literature trail in the subject of their thesis. Their evolving bibliographic database is a map of the “information space” they have covered through the research they’ve managed to find, and they’ll proceed to carve out their own little niche in that space.

I’ve observed in my own research that my literature search is never complete. And reading others’ papers makes it obvious that theirs never is either, when you find similar papers published years apart. Sometimes all I want to do is catalogue as much research as I can find, and this is where the seed of an idea comes from: a framework for documenting ongoing progress. Why should two researchers working on opposite sides of the world have to replicate each other’s journeys through the literature to find work in their field that was done years before?

I’d like to see the “literature review” as a giant web of cross-references, which differs from a reference library in that old work won’t be forgotten, exactly, just hidden away behind the newer work that encompasses it. When a new research book is written, it can cover years of work in a field for which those papers are now, in a sense, obsolete. This resource would also allow “forward linking” from random papers that you stumble across, so that you can easily follow what research came out of that work. And if none did, is there scope for more research?


All of these ideas have been glommed together over the last while, as I’ve found time to tack the pieces on. The concepts are still muddy in my head and I’m not even sure how feasible this project is. Perhaps it’s impossible. Probably it’s impossible, at least today. I’ve got many more ideas and details brewing, but I’ll let them ferment for a little longer.