2007/09/28

My first steps with open source licenses (& LaTeX)

I finally got around to learning a little bit about open source licences the other day. The whole premise seems easy enough: I write this code and don’t put restrictions on it for other people’s use. But the devil’s in the details, and there was a lot to get my head around at first. This is a short summary of what I’ve learnt (or, at least, what I think I’ve learnt).

First things first: it’s a Bad Idea to make code public that doesn’t have a licence. You will be legally responsible, theoretically, for any bad things that happen as a result of others using that code. Secondly, it’s Not Possible to release code “into the public domain”, although many people claim to do just that in an attempt to obviate their copyright responsibilities. Copyright is assigned automatically, and attempting to get around that is legally murky ground (how successful the attempt will be varies from country to country).

It’s easy to say “well, my code will never be used by anyone else anyway, so it doesn’t matter if I don’t release it with a copyright licence”, but that’s a little short-sighted. You wouldn’t have made it public if you didn’t think anyone would find it useful, and if someone wants to re-use what you’ve written, the absence of a licence will prevent them from doing so, even if you’d like them to in principle. Furthermore, the absence of a warranty (again, theoretically) could get you in hot water if things turn out poorly due to an error on your part. So free code must be licensed.

The question is then “which licence to use?”. You wouldn’t think this would be such a problem, but there are heaps to choose from and many of them are quite similar. Making a good choice without knowing the details is more a matter of luck than anything else. Over at Google Code Project Hosting, they’re trying really hard to restrict the number of open source licences in circulation by only offering a small number of choices for the projects they host; a laudable goal. And yet their list is still eight deep. Even if you want people to use your code essentially without restriction, there are three to choose from: the BSD, MIT, & Apache licences. Which to choose even in this simple case? I’ll discuss their differences five paragraphs hence.

There are three broad classes of open source licence that can be summed up by three specific “best practice” ones: the GNU General Public Licence (GPL), the Lesser GPL (LGPL), and the Apache Licence. The GPL is probably the most well known and popular free software licence: it requires that the work be distributed with its source code and stipulates that derivative works also follow the GPL. This ensures freedom at all costs, at the expense of flexibility; you’ll never see GPL code turn up inside proprietary products (illegal exceptions notwithstanding).

The LGPL was written to allow proprietary software to use the functionality of GPL-like free software without having to open the entire product. A library with the LGPL licence can be used in a closed product without having to open the source for the whole project. I won’t really consider this class of licence too much here (the Mozilla Public Licence is similar). Suffice it to say that it’s a slightly more liberal licence than the GPL for certain types of software.

Finally, the Apache licence is a model example of a licence that lets you do pretty much anything you like with the code. Not only is the code free, but it can be re-used wherever you like, under whatever licence you like. There’s an obvious tension between a “copyleft” licence like the GPL and an Apache-like licence: for the former, the code is free and will always be free; for the latter, the code is free but someone might take it, improve it, and lock it up — which doesn’t help you any, but you did allow it.

I’m in the Apache licence camp more than the GPL’s: I’d prefer my code to be maximally useful to as many people as possible than restrict its use in order to ensure that it will “always be free”. Of course, if everyone used the GPL then that wouldn’t matter, but that’s simply not going to happen. I might change my tune if my coding were more directly usable in commercial products, however. I can certainly see the idealistic appeal of the GPL. (While I’m on the matter, the GPL recently had some major changes made for v3.0, and it’s apparently rather controversial. I don’t understand the whole matter at this stage so I’ll leave the intricacies of this licence for another time.)


If you don’t want to choose the GPL for reasons similar to mine, let’s revisit the question “which licence to choose?” and discuss the differences between the various (popular) Apache-like licences. The distinctions are subtle but there are valid reasons for choosing between them. As mentioned, the big three are the BSD, MIT, and Apache licences, the last of which is a later and more formal extension of the ideas in the other two.

The MIT licence is the simplest: you can do whatever you like to the code (distribute, sell, modify, relicense), provided that “The [ … ] copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.” Even the text of the licence itself can be changed.
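
In practice, satisfying that condition just means carrying the notice at the top of every source file. Here’s a sketch of what that looks like in a Python file, with a made-up copyright line and the middle of the licence text elided; the canonical wording should be copied verbatim from opensource.org:

    # Copyright (c) 2007 A. N. Author
    #
    # Permission is hereby granted, free of charge, to any person obtaining a
    # copy of this software and associated documentation files (the "Software"),
    # to deal in the Software without restriction, [...] subject to the
    # following conditions:
    #
    # The above copyright notice and this permission notice shall be included
    # in all copies or substantial portions of the Software.
    #
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, [...]

    def useful_code():
        """The actual library follows, under the terms above."""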

The BSD licence adds one condition on top: “Neither the name of the [organization] nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.” Sounds sensible to me.

The Apache licence is the one I’ve been implicitly endorsing by using it as the “best case” example at the beginning for these “no restriction” free licences. I’m really taking a cue from Greg Stein of Google, who says:

That is one of the reasons that Google chooses the Apache License (2.0) as the default for the software it open-sources. It is permissive like BSD, but (unlike BSD) actually happens to mention the rights under copyright law and gives you a license under those rights. In other words, it actually knows what it is doing unlike some of the other permissive licenses.

(Not necessarily an unbiased comment, I have to admit; he’s also Chairman of the Apache Software Foundation.) The additional terms in the Apache licence (over BSD & MIT) require changes made in modified works to be prominently marked as such. I like to think of such measures as “enforced politeness” — it’s not like people won’t be doing this in general anyway. I believe that the Apache licence itself cannot be altered, but I don’t actually know for sure.


Finally, the reason I got into all of this is from the various bits and pieces of LaTeX code that I have written. And they’re licensed under the LaTeX Project Public Licence (LPPL), which is different again to those I’ve already discussed above. It’s pretty interesting, and I think it deserves a little attention. (Link disclaimer above: at time of writing some of that Wikipedia page was written by me.)

Because LaTeX code almost always defines a document syntax (it’s a programming language of communication, essentially), it’s pretty important that things don’t change meaning without warning. I want a document that is typeset on my machine to be exactly the same on your machine under reasonably similar circumstances. So while LaTeX is free to modify and distribute, its licence doesn’t allow people to take the code and alter it without potential users knowing that it’s not canonical. This follows the original licence of TeX itself, probably the earliest piece of free software still in use. (According to Wikipedia, Emacs was first released in 1984; development on TeX started in 1977, and the version most similar to the one we know today was released in 1982.)

To try and formalise TeX’s licence, the LPPL allows modification and distribution only under the proviso that the user is made well aware that they’re using a modification to that work. This is usually done with a change in name, but technically speaking minimal conformance could be achieved (and strongly frowned upon) simply by printing out a message on the console stating that the package you’ve loaded isn’t the original version. A good example is a conference proceedings document class, for which you certainly don’t want someone changing the margins or fonts without calling it something different!

So if only the copyright holder is allowed to make changes to the code without changing the name of the package, what happens if the original author loses interest in or can no longer work on the project? The LPPL also defines the concept of a project “maintainer”, who may make public changes to the work with the authority of the copyright holder. You can become a maintainer of a project either by being bestowed the title or (when the previous maintainer cannot be contacted) by announcing publicly your intent to start maintaining the code; maintainership falls to you after three months if your claim is uncontested.

None of this changes the problem of ensuring backwards compatibility in packages, but it goes a long way to ensure that documents remain portable into the foreseeable future. This is a laudable goal when compared to the philosophy of “closed source” document programs like WordPerfect or Microsoft Word, whose old files are sometimes now unreadable.


Now, in my explanations above I have omitted many specifics in order to get the ideas across about the licences I was talking about. Diving too deep into the legalese makes it impossible to get a broad enough picture of each licence to compare them. Obviously, I am not a lawyer and my terminology could be improved, but I hope that I got the gist across. (Also, I hope that I’ve understood it correctly myself!)

I’m using the British spelling for licence and license here (for noun and verb respectively; cf. practice & practise — I remember these rules by the mnemonic “ice is a noun”). When I talk about licences above, I’m referring to their current versions: 3-clause BSD, 2-clause MIT, LPPL v1.3c, Apache v2. One day I might understand the difference between GPL v2 and v3, but not at the moment.

2007/08/18

‘Perfume’ by Patrick Süskind (1986)

I have been receiving suggestions to read Perfume for about ten years now, I reckon. Recently it was published as a Penguin Red Classic and I grabbed a cheap copy to add to my huge list of books to read. I actually managed to fit it in on a flight between Brisbane and Cairns a few weeks ago on my way to the ICSV14 conference. That’s right. It was so damned good I devoured the whole thing in one go. To be fair, I read most of it on the flight and then finished it off when I got to the hostel. As far as I can remember, it’s the longest book (296 pages paperback) I’ve read in one sitting.

I loved this book, and for totally different reasons than I often love books. It didn’t contain any characters that I found particularly likeable, nor any whom I could really empathise with, and the actions of the characters were never noble nor life-affirming. This isn’t one of those books like ‘Peter Camenzind’ (Hermann Hesse, 1904; my thoughts forthcoming when I re-read it one day), for example, where the life of an everyman unfolds before your eyes and connects you with humanity as his character advances spiritually through life. Despite all of this, there is an uplifting catharsis that arises quite unexpectedly (to me), which is unique for the genre that — on the surface — this book appears to fall into.

But the world that Süskind creates is simply amazing, and the construction of the narrative is simple and clever. There are no loose ends and no logical gaps in the story. In short, a ‘perfect’ novel. And while I wish I were the kind of person who could untangle the themes within and elucidate them now, I must be content to bask in them. I guess that’s why I’m trying to write here, so that I learn to express my own reactions to things.

I was talking with Toni the other day about the movie that has recently been made of this book. I haven’t heard a single good reaction to the movie, and I’m fascinated by the fact that the movie was even made. The book spends much of its time in a world that can’t really be shown on film, and so while the superficial ‘action’ of the story could obviously be shown, I’m baffled by how the motivation of essentially the only character could be portrayed. Or how someone could even try to approach the problem. So I would really like to see how the book was adapted.

Now, I just spent a couple of minutes reading the reviews at IMDB and the reactions there seem rather good. So that gives me more motivation to actually check it out. Because I do love movies, after all. Stay tuned.

2007/08/12

‘Places like this’ by Architecture in Helsinki (2007)

I don’t write much about music, because I don’t really know how to put the words together. Many reviews I read assume that the reader is familiar with the music itself, which isn’t necessarily what I expect from a review.

I first became a fan of Architecture in Helsinki after being gifted their debut album, ‘Fingers crossed’, shortly before seeing them live. This must have been 2003, I think. Until recently, the band had nine musicians, all of whom played different instruments to one degree or another. The first gig I saw, at the Jade Monkey in Adelaide, had them crammed onto a tiny stage with many more instruments than band members and hardly any space to move, let alone swap instruments halfway through songs. They are certainly an eclectic lot. I really don’t know how to describe their music: lots of energy and lots of instruments almost chaotically thrown together, with vocals primarily provided (often in falsetto) by their lead Cameron and taken over (and stolen) by partner-in-crime Kelly.

Their first album was a sweet, smoothly-produced and catchy number; some time later, their second album ‘In case we die’ was a much bolder expression of their energetic and unusual sound. It was less an album you could stick on as background music, but it captured better who they were as a band.

Around the same time, they toured Europe and America and, I’m guessing, became a lot more popular. (Well, Sven-S. Porst likes them at least. That’s my one and only data point for popularity outside Australia!) Since then, they’ve lost two of their musicians who had more of a classical instrument bent, and have just released a third album, ‘Places like this’. And it’s my favourite album so far. At a touch over 30 minutes I would like it to be a song or two longer. It continues the trend started in ‘In case we die’ of louder, punchier sounds. Cameron is crooning less falsetto and living it up a bit more. The songs are more catchy, the enthusiasm more unbridled, and the album more consistent. With a solid touring history behind them, they’re much more guitar-based in concert now, and they’ve never been better.

I love you, Architecture in Helsinki.

2007/08/11

Out of practice

When I’m in that state of having nothing to read but not enough motivation to do work, I really need to spend the time writing rather than searching for more reading. For example, I just read in the New Yorker that the olive oil industry is rife with counterfeit oil; it is often cut with sunflower and soya oil (and sometimes treated to mask the flavour of the offending additive) for inevitably greater profits. That makes me wary about oil, I guess. Oh well; in cooking, I can hardly taste the difference anyway.

Now, the New Yorker is great. I bought a paper copy of it in an airport a few weeks ago to gauge whether I prefer it in print to online, and online wins hands down. It’s a whole magazine of current affairs and articles of interest, which can vary from fascinating to completely off-wavelength. Buying the print version gives you a good mix of both. But online, it’s easy to skip the chaff, and this makes it a much more valuable reference. Of course, I’m a huge fan of keeping articles I like in softcopy for future reference, and hardcopy just clutters up a garage in the end.

But undirected reading is hazardous to my time. I’ve hardly got time to do the dishes these days, so why should I spend time reading about how some olive oil isn’t just made from olives? Even worse, RSS readers transform collecting reading material into an imperative task: 45 unread news items! What have I missed? On the other hand, stopping by newyorker.com every week or so can easily be skipped. But my will isn’t strong enough to avoid checking my news in RSS, and I dread avoiding it for a week and coming back with hundreds of items that just might be interesting enough for me to sift through the whole lot.

I’ve cut down a lot recently, you must understand. On a typical day, I’ll only have a few tens, max, of articles to read or links to follow. This takes me less than half an hour, I’d guess, to wade through and discard those that I don’t feel inclined to consume. I haven’t measured it, really.

And back to my first point: I’ve become out of practice in writing here myself, although I have been doing so more on my actual research. (The thesis is very far from complete, however. It’s early days yet. But don’t tell anyone!) The pity is that I really like writing. If I forced myself to write every day, I’d be a lot better at going on at length in an interesting way — and of course my ego thinks I’ve interesting things to say in the first place (although I’d be inclined to disagree on occasion). But my interests can be rather myopic at times, for others at least, and I’d rather not harp on about news that is transitory at best and of dubious interest at worst. (Hey, did you hear there’s new iMacs? They’re cheap and pretty and great! I will probably buy one in a couple of months!)

So here’s to my literary career. Ahem.

2007/05/06

"Kafka on the Shore" by Haruki Murakami (2005)

I was introduced to Murakami by a good friend a few years ago after their return from living in Japan, and Kafka on the Shore is the third of his books that I’ve read. As always, it took me the better part of the novel to become absorbed by it, after which I finished the second half in short order. In contrast to A Wild Sheep Chase and Dance, Dance, Dance, which mainly followed a single character, the narrative of Kafka on the Shore is spread over several characters who are equally dominant in the themes of the novel. Broadly speaking, it’s a coming-of-age story as we travel with the characters on their respective journeys. But such a description doesn’t do it justice.

Murakami has captured that style of writing that I associate with J. D. Salinger and F. Scott Fitzgerald, who are credited as his influences. I might not have drawn the connection so strongly if I hadn’t read about it, though. I think it was a quote on the back of one of his novels that said something like: he creates poetry in writing about the mundane. I’ve never read a better description of his writing. These aren’t the expositions of the intelligentsia who reflect abstractly on the meaning of their life, in the style of say Hermann Hesse (my favourite author ever); rather, Murakami’s work is composed of the small details of his characters’ stories. They don’t strive or battle, they just live, and it’s such a base connection that allows us to empathise and achieve enlightenment with them.

The other hook into Murakami’s work is the incredible surreal environments he places his stories in. From my previous paragraphs, you might assume that his novels are set in a reflection of the world we live in, whose familiarity provides us the context for being drawn into their environments. Well, that’s not entirely the case. Murakami’s reality is indeed a reflection of ours, but a wonderfully expanded version of the universe we live in. At the same time, the unremarkable way he presents his surreal worlds makes them eminently believable. The transition between his world and ours is totally seamless.

There’s nothing really to say about Murakami that hasn’t been said before. Kafka on the Shore continues his tradition of stories with deceptively simple storylines as we follow his characters through a surreal version of his Japan. Profound experiences are had by all involved. Including the reader.

2007/04/08

As tight as Kubrick

Justin disagrees with my taste in movies sometimes. And he invites me, who knows nothing, to discuss the following:

Last night we talked about the scene in The Shining where Wendy witnesses two men having sex, one of them in a bear costume. This scene makes no sense in the film; it is impossible to understand without having read the setup in the novel. Do you really think the inclusion of this scene constitutes tight editing? Separately, do you think that the star gate sequence in 2001 constitutes tight editing?!

Perhaps my use of the word “tight” in my previous writing about my first viewing of Paths of Glory isn’t quite right. I would probably prefer the word “perfect”, but that’s a word with less meaning in the context of something that requires subjective measurement. So, to broadly answer the questions as stated, I’d have to say yes and yes, but I suppose I need some just[in]ification for that position.

Even more than really intelligent movies, my favourites are the ones that have a strongly visceral effect on me while watching them. It’s not so much about the details, but about the feel.

More than any other Stanley Kubrick movie, The Shining dramatically improved for me with repeated viewing. I’m not sure why. When I first saw it, I wasn’t totally blown away; Shelley Duvall’s acting might have been a bit of an influence in that, as might the exaggerated accompanying score. But I think the more you watch a movie, the more you absorb the feeling of it, as you pay less attention to the details in front of you (like the dialogue or the acting) because you’ve seen it before and you know what’s going on.

This particular idea was made especially clear to me after watching Amélie a few times; I ended up not even reading the subtitles any more, despite speaking no French, and just went along for the ride.

Now, I find it a little strange to call The Shining out for the incoherency of its scenes, considering the subject matter of the film. I haven’t read the novel and I’ve got no idea what the significance of the bear-suit men having sex is. (1:41.54 into the movie, by the way.) But why is being “impossible to understand” such a bad thing? I don’t really understand most of the weird stuff going on in the movie — and it doesn’t affect the feeling the movie has. I’m gratified that there is thought behind the madness, because it means that Kubrick didn’t just make up something random and stick it in. There is an internal consistency that might not be visible to the casual viewer but which ties everything together. I guess I often call that the “texture” of a movie.

(A film that exemplifies the whole idea of scenes not making sense but the whole movie having a big impact (and with its own internal consistency that is hard to find) is Mulholland Drive.)

So, is it “tight” for The Shining to have this scene that makes no sense? Conversely, would you say that it wasn’t a good choice to leave that scene in there? In actual fact, it’s only the final few seconds of the scene, which could easily have been cut. And there’s a scene a couple of minutes later with a similarly meaningless hallucination. Without putting too many words into the mouths of Kubrick and Stephen King, I’d say that cinematically the random appearances work to increase the scare factor (and, internal to the film, to increase Wendy’s bewilderment), and thematically to show that it’s not just Jack who’s gone crazy; there’s something weird about the whole place. If you took out the bear-man sex (and the other random guy who turns up a few minutes later), you’d lose a particular point the movie was making, in my opinion.


2001 is the previous argument magnified. The whole movie is essentially only about the feeling. And the climax of the film is the star gate sequence.

Here’s a quote from John Gruber, from Hivelogic Radio earlier this year (six minutes in). He verbalised, and made me realise for the first time, why I like long and slow movies:

The whole problem talking about [2001] is that the point of it isn’t something you just say “oh here’s the point”; the movie itself is the point: it’s the way that it makes you feel. That’s the way when I was a little kid that I used to feel about all the movies I watched that were movies for adults, movies that weren’t really just kid movies. I’d watch it and be like “ah, I don’t really get this, I don’t understand it, it just gives me a feeling”. I think that’s how 2001 works, even for an adult, it’s more about how it makes you feel.

For me, the star gate sequence is an example of a cinematic element that I love but that I know many people can’t stand: long scenes containing only abstract imagery, where the sudden absence of narrative sends my mind into a reflection of everything I’ve just seen. I find it a unique experience when a movie has been filling my head for a couple of hours and then abruptly stops, leaving my mind to coast along in the direction it’s been pushed. After trying to encompass the whole feeling of the movie in a protracted instant, my mind then empties and there’s nothing left to fill the gap.

Less slow-minded people probably get over it in about five seconds and then get bored, so I can understand the lack of universal appeal. I also don’t know if that’s what the filmmaker is trying to do; I should get someone who actually knows about film production and editing to tell me one day. This is the reason, also, that I like to watch movies until the credits finish rolling, though that doesn’t always work at emulating the experience I described above. Sometimes it does, particularly when the music has been well chosen, and it’s just as good provided my companions don’t immediately stand up and walk out of the cinema.

Could the star gate sequence be half as long? Probably. Twice as long? That’d be an awfully long time. Could it be only ten seconds long? Probably not. There’s no way of pinning down an exact period of time that such a scene should extend for, and given the very slow pace of the movie as a whole (it’s only 1/3 dialogue, after all), I think it’s appropriate as Kubrick cut it.

So we started talking about tight editing and ended up with a vague and possibly pretentious discussion about how movies make me feel. Did I answer your questions, Justin?

2007/04/02

EMI rocks *and* rolls iTunes

Michael Gartenberg seems to have the scoop (is he allowed to, given his job?): “Apple and EMI have announced that they will be selling music without digital rights management”. But there’s more: “albums will be DRM free, have the higher sound quality but will remain at the same price point as current albums. Format is AAC and encoded at 256kbs”. That’s awesome.

[Update: seems I just missed the press releases and my weird Australian time zone means the press hasn’t caught up yet. Some tidbits from Apple’s announcement:

  • It’s not happening until May, but it’s worldwide.

  • There’ll be a “one click” upgrade button that’ll give you the enhanced tracks for the difference in price between the originals and their DRM-free versions. No word on whether whole albums can be upgraded at no cost. Hope so.

  • Steve Jobs: “We think our customers are going to love this, and we expect to offer more than half of the songs on iTunes in DRM-free versions by the end of this year.”. Read: “You’d better buy the DRM-free tracks or we’re screwed. But if you do, a couple of the other labels will follow suit!”

  • EMI music videos will also be unrestricted. That’s great. I haven’t started buying music videos for a couple of reasons, but I love the idea of having my own playlists of them.]

To provide some context on the whole issue, Apple currently sells music from the four major record labels in the world, plus countless independents, through its iTunes store. The music they sell is relatively low quality, and comes with copy protection that precludes (a) sharing your iTunes-bought music library easily with your friends, and (b) playing your iTunes-bought music on any portable music player except the iPod. This copy protection scheme also significantly increases the chances that your music won’t play in fifty years’ time, if you still have copies of it.

Most other online music stores have similar restrictions, with a notable exception. The second largest online music store is eMusic, which sells (better quality than iTunes) unprotected music from a large selection of labels, excepting the big four (or thereabouts). I’ve been meaning to investigate these guys for a while, because I like buying things online for the instant gratification, and I’ve heard good things about their catalogue.

So I’ve been buying music from iTunes for a little while, not particularly fussed by the problems outlined above. The low quality was more of an issue for me, frankly, but that’s probably because I like Apple in general and don’t mind the iTunes/iPod lock-in. And I wasn’t aware of the quality issue in day-to-day use — the music sounds fine on my stereo and of course on my iPod — it was just that I knew in principle that if I hooked up a really nice system and looked for the difference, I’d be able to pick it.

Having lost one music collection so far on MiniDisc (way too tedious to transfer my music — recording the audio stream in real time onto my computer), I guess losing another set if my iTunes music became similarly too inconvenient doesn’t bother me too much. After all, you usually buy music before you know if it will be in your “top 10 of all time” list, and you never listen to everything in your collection with the same gusto. Well, in my case at least.

But that’s just me justifying it. Quality and longevity of iTunes-bought music have been the biggest problems with moving to the new media: why go backwards from CD, which works so well? So in one swoop, this pairing of Apple and EMI shows the rest of the music industry that this business model can work, and it should only be a matter of time before a lot more unprotected music turns up on iTunes. In fact, it should only be a matter of months before the independents follow suit (since they generally sell the same music, unprotected, on eMusic anyway).

Very good news. Now, when do the other labels come on board, when does iTunes become more (or truly) worldwide, and when does the same happen for video?

2007/04/01

John Searle, misguided philosopher?

It is unusual to run across a mention of John Searle while reading Scott Adams, because the only other place I’ve heard of him is in philosophical arguments dating back to the eighties.

In that context, he argued against the “Turing Test” for evaluating artificial intelligence. The test goes that if you can’t distinguish between a computer and a human in a text-only conversation, then the computer must be intelligent. The actual test, in my opinion, is meaningless, because humans can trick the computer by escaping into the real world — which the computer can only compete with if it has a similar “life experience”. (And I believe it was never proposed as a formal test of intelligence, just as a thought experiment, so I’m not arguing against Turing himself.)

For example, adapting an example from Hofstadter, asking the question “How many syllables does an upsidedown M have?” (an upsidedown M is a W, and “double-u” has three) requires that the computer know about the shapes of letters and geometric properties like rotation, plus the sounds of the names of letters. At this stage, the computer either needs this information given to it a priori by its inventor, a scheme which would never work in general for creating an “intelligent machine”, or an actual understanding of such things. And the latter requires eyes and ears for interaction with the real world, at which point you’re looking more at a robot — and then consider the problem of questions about the feeling of bungee jumping or eating too much Indian food. In essence, your computer needs to be able to fake knowledge of such things, in which case you’re sure to be able to trick it eventually, or be a replica human, which isn’t the point of the exercise — we want an intelligent computer, not an intelligent humanoid robot (although that would be cool too, obviously).

John Searle’s objection to the Turing Test lay along quite different philosophical grounds — that computers can’t think:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn’t, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don’t understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don’t understand what they’re ‘saying’, just as he doesn’t.

(from Wikipedia, which seems a bit confused later on, stating

The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

— but if there’s no contradiction there, it’s too lazy a Sunday for me.)

Obviously, to me, the “understanding and thinking” of the whole system must incorporate the actual rules that are being followed, rather than just the mindless object executing the rules. The fact that blindly following the rules allows the Chinese Room to pass the Turing Test (by assumption, no less) implies that the rules know a hell of a lot. My argument goes: either humans don’t understand anything, and neither can machines; or humans have understanding, and so can machines.

Anyway, I don’t like these sorts of arguments, because there’s so much terminology invented to argue about that the ideas behind them are a little obscured. That’s probably my layman’s excuse, though.


Back to where I started this whole thing. Scott references an interesting article:

[John Searle] is puzzled by why, if we have no free will, we have this peculiar conscious experience of decision-making. If, as neuroscience currently suggests, it is purely an illusion, then ‘evolution played a massive trick on us.’ But this ‘goes against everything we know about evolution. The processes of conscious rationality are such an important part of our lives, and above all such a biologically expensive part of our lives’ that it seems impossible they ‘play no functional role at all in the life and survival of the organism’.

Scott then says:

Is it my imagination, or is that the worst argument ever? […] The illusion of free will helps make us happy. Otherwise, consciousness would feel like a prison. Happiness in turn improves the body’s immune response. What more do you need from evolution?

Well, I’m not sure if the link between happiness and immune response is direct and low-level enough to be used as a great argument in this case. Also, the argument might have been made worse after filtering through journalism-speak. But it does seem like a pretty poor argument. It is much more likely to me that the illusion of free will arose as a by-product of our ability to think ahead and think of ourselves in future situations — that particular skill that turned us from monkeys into bloggers. (Not that the difference is particularly noticeable in some cases.)

It’s not that we choose different paths of action based on what’s coming up; rather, we take what’s coming up into account in our actions. This means we had to have a symbol of “self” in our brains, which is most easily mapped to an “I”; and so we have consciousness. Since there are “choices” about what to do next, which involve our brain-symbol “I”, the illusion of free will arises because one of those choices gets chosen. It’s just that we have no control over which choice to take, because that’s entirely determined by physics (in the “no free will” argument).

Clear as mud. Well, to John Searle.

2007/03/25

“Paths of Glory” by Stanley Kubrick (1957)

I haven’t seen as many movies as I would like. In fact, I own several movies on DVD that I just haven’t got around to watching yet. On the rare occasion that I find myself with a free evening and no objections (the movies I often want to watch aren’t universally appreciated, for some reason), sometimes I’ll sit down and actually watch one of the many many movies that I haven’t yet had the chance to.

One of the few Stanley Kubrick movies that I hadn’t yet seen, Paths of Glory sat in shrink-wrap for a few years before the opportunity to watch it presented itself tonight. And I should have watched it earlier, of course. Kubrick is known for the tightness of his movies (length and tightness needn’t be opposites), and this one is no exception. It’s interesting to reflect on these earlier pieces of his, where his style is still unmistakable but his infamous perfectionism isn’t quite as blatant.

This film is billed as “one of the greatest anti-war films ever”, and while that statement does sum it up quite neatly, it doesn’t give enough context to describe what the film’s about. I guess the feeling of a war movie strongly echoes the war it’s covering. While more modern films such as Full Metal Jacket or Apocalypse Now begin in some sort of reality and descend into situational insanity, Paths of Glory takes a tiny look at war in the trenches and focuses on the futility of the whole situation. Two groups of people facing each other with guns and nowhere to go just can’t be resolved from the inside. But trying to escape the situation isn’t going to work either. Not insane. Just frustrating waste.

One of my favourite experiences is when the credits roll and all you can do is sit in silence thinking about the movie. Too often reality intrudes when I do this, but it’s really a moment to be savoured. It’s not any one part of the film, like great cinematography or a well written script. It’s when the movie evokes feelings that resonate past when the film ends, and you wish everyone could just share in that moment.

2007/03/14

I've never been so busy…and I'm rambling…

I’ve been so busy recently that my sleep regulator went unstable. But it’s my fault; I’ve got no excuses after a long weekend. I find the effects tiredness has on my mental state quite interesting, particularly the amplification of grumpiness. I find being grumpy fascinating, in that I acknowledge that I’m being completely unreasonable in trying to blame everyone else for my troubles, but nonetheless try to justify it to myself anyway. And after blaming people around me for a while, I become depressed; it’s a well-worn spiral.

Luckily for me, my depression doesn’t last. So here I am, two hours late for uni already. MarsEdit has been updated so I can post things to Blogger again without having to go through their web interface (mostly why I’ve been quiet here recently, besides generally having no time). There’s a weird old Italian man in my shower, taking it to bits and putting it back together again, on account of an interminable drip, you see. I can’t understand a word he says, but he’ll be there for a while.

I’ve got this experiment going at uni that totally isn’t working at all. And it needed to be working a couple of weeks ago so I could write a paper on it. I’m kind of screwed, because the deadline is in two and a half weeks. Not a good time to be sitting on the couch at 11am in my dressing gown. Oh well, in the whole scheme of things it’s not that bad. After this paper is done, I’m going to start my thesis. And my personal theory is that you need practice to write well, so it would be a bad idea for me to neglect this website any more than I have been doing.

To finish off, here are some things I’ve learned recently that I found interesting. Pterodactyls weren’t dinosaurs; the word is used loosely for the pterosaurs, the flying reptiles that lived alongside the dinosaurs. They ranged in size from a 20cm wingspan up to ten metres or so. Ten metres!? When we can build aircraft that fly as efficiently as that, I’ll be happy. Ever heard those amateur model helicopters that are less than a metre long? Damn, they’re loud. I want a mechanical pterodactyl.

So it turns out that acupuncture has some sort of scientific basis behind it, which I found very gratifying while having it done to me. (Terrible neck from too much ’puter.) From what I can gather, “bad” areas along the spine come about from a runaway feedback loop involving the localised nerves and muscles in the area. The first twinge results in muscles tensing up to protect the delicate nerves/spine/whatever beneath. This is generally beneficial when the spine is behaving normally. But when something in the spine is wrong, such as a vertebra out of place, the tensing of the muscles can lead to further problems. So the nerves trigger a signal to protect themselves, which tenses the muscles, which triggers the nerves more, which makes the muscles pull tighter… you get the picture; in the end, the brain is essentially flooding the muscles with “hold tight” signals even when it would be better to relax. Acupuncture resets this somehow, in a process that’s too complicated for me to understand. It’s all to do with the sharp impulse of the puncture overriding the slow response of the “twinge-tense” action of the muscle, as I understand it.
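
In control terms (my paraphrase, certainly not the acupuncturist’s), that runaway loop is just positive feedback with a gain greater than one. A toy Python sketch of the idea, with entirely made-up numbers:

    # Iterate the "twinge-tense" loop: nerve signal drives muscle tension,
    # which drives the nerve signal back again, up to some physical limit.
    def muscle_tension(gain, twinge, steps=20, max_tension=1.0):
        tension = 0.0
        for _ in range(steps):
            tension = min(max_tension, gain * tension + twinge)
        return tension

    print(muscle_tension(gain=0.5, twinge=0.1))  # healthy: settles near 0.2
    print(muscle_tension(gain=1.5, twinge=0.1))  # runaway: pinned at 1.0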

Anyway, interesting stuff. After the acupuncture, the misaligned spine still needs to be massaged into place. I wonder if resonating the spine (locally) with vibrations could make this easier to fix (either for the practitioner or the patient). If you could excite the twisting mode between individual vertebrae, it should theoretically be relatively easy to coax them back into place. But maybe not — you might end up doing more damage than there was in the first place!

2007/02/11

Super (fridge) magnets

A couple of months back, Sven-S. Porst wrote about using neodymium magnets for a pinboard. Having something of a professional interest in the matter (my PhD is about a table that floats on magnets), I meant to chime in straight away about the same type of idea I’d had previously. Well, let’s just say I’ve been distracted and/or busy over the last two months.

While buying some magnets for some experiments a while back, I also bought one hundred 1/8" by 3/8" cylindrical magnets for the fridge. These are great because their length creates a magnetic field that extends relatively far from the end of the magnet (that is, they’re strong even though they don’t take up much surface area on whatever they’re holding up), and they’re also very easy to pick up from the fridge. If you’ve got especially large fingers, maybe the half inch ones’d be better, though.

Now, these aren’t your regular fridge magnets. I can stack thirteen of them end-to-end and they still stick to the fridge. I’m amazed how strong these rare earth magnets are, and how cheaply you can now buy them on the internet.

Furthermore, you can buy huge ones, like these one inch spheres. Just be careful of any greater than about half an inch — at large sizes they hurt if your fingers get in the way, and they’re also very brittle, so their corners and edges will chip off very easily if they smash together. (Not to mention being hard to separate!) Speaking of smashing, it looks like they’ve got even larger magnets now than they used to. A 4" by 2" by 1/2" block is just asking for trouble, if you can afford the $60 asking price (!). You could secure furniture with magnets that large. I’m happy enough sticking objects to the fridge for now, thanks anyway.
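
Incidentally, the claim above about length isn’t hand-waving. The on-axis field of an axially magnetised cylinder of radius R, length L and remanence B_r, at a distance z from its face, is the standard textbook expression

    B(z) = \frac{B_r}{2} \left[ \frac{L+z}{\sqrt{(L+z)^2 + R^2}} - \frac{z}{\sqrt{z^2 + R^2}} \right]

and the bracketed term grows with L at any given z, so a long thin cylinder really does reach further than a short fat one of the same face area. (Generic symbols only; I haven’t plugged in the 1/8" by 3/8" dimensions here.)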

2007/01/15

iPhone summary of complaints

Well, there’s certainly a lot of discussion going on about the iPhone. The Apple one, that is. After the dizzying amazement of the keynote and its introduction, it’s now time for the grumblers to come out and complain about the thing.

Depending on which day you read the news, you’d either think that the iPhone will be a fantastic success or a terrible failure. I’m optimistic about liking it, of course, and the 2008 Australian release date gives me an imposed buffer against buying the first generation of the device.

In no particular order, here’s a summary of the complaints people have had, and my general responses to them. I haven’t made an exhaustive search for the various complaints out there, but I think I’ve covered the majority.

Slow mobile data speeds — i.e., no 3G — this seems most likely due to a conflict of interests with Cingular, whose 3G service touts non-Quicktime video and audio purchasing or streaming or something. What are the carriers doing trying to get into the actual content distribution game? They should stick to charging for bandwidth. In any case, Steve Jobs announced 3G plans himself in the keynote; expect 3G in the next iteration of the iPhone. This is especially likely as the iPhone migrates to Europe and Australasia, where 3G is apparently much more widespread than in the US.

Software keyboard — this is a non-issue, I think. This requires some explanation.

Finally, the dubious merits of sticking to the QWERTY keyboard layout in the years since the typewriter have actually paid off. Let me explain. The iPhone has predictive text, like a regular phone keypad. But how inefficient is a numeric keypad design, when there are many overlapping words for the same input? (home/good, fairy/daisy, golf/hold, among others more comical…) This happens because the statistical distribution of the letters of the alphabet was not taken into account when assigning their positions on the keypad. E.g., a single key carries both S and R, two of the most common consonants in English.
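
To make the overlap concrete, here’s a toy Python sketch (mine, not from any real T9 implementation) using the standard keypad mapping:

    # Map each letter to its digit on a standard phone keypad.
    KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
              '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
    LETTER_TO_DIGIT = {letter: digit
                       for digit, letters in KEYPAD.items()
                       for letter in letters}

    def t9(word):
        """Return the keypad digit sequence that types the word."""
        return ''.join(LETTER_TO_DIGIT[c] for c in word.lower())

    # Different words, identical key presses:
    assert t9('home') == t9('good') == '4663'
    assert t9('golf') == t9('hold') == '4653'
    assert t9('fairy') == t9('daisy') == '32479'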

By contrast, the QWERTY keyboard was designed to have adjacently used letters (statistically) at least two keys away from each other — because typewriter mechanisms would jam if two adjacent keys were pressed near simultaneously. (Note that comparisons with the Dvorak keyboard have shown that the QWERTY keyboard is no slower than any other design; it just makes your fingers move more, thus making its users more prone to RSI-like problems.)

Now consider the iPhone keyboard. Each press you make, unless you’ve tiny fingers, will likely cover a few letters inside some sort of blob shape, and spatial averaging will pull a single letter from this group. But not only does the iPhone have predictive text to speed up entry (I hope it’ll have “pre-emptive” text, a term I coined for when you enter “unfort” and it auto-completes the “unate”), it also auto-corrects spelling mistakes. It should be able to do this very reliably because (a) it knows which subset of letters to consider replacing (i.e., those around the letter it recognised from the press), and (b) words are statistically unlikely to contain two QWERTY-adjacent letters in a row. Words like “damn”, “through”, “poop”, and “qwerty” might be harder than most to spell correctly. Just slow down.
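
And here’s a rough sketch of the sort of neighbour-aware correction I’m imagining; this is purely my guess at the idea, not Apple’s actual algorithm, with a toy dictionary and a deliberately abridged neighbour table:

    # QWERTY neighbours for a few letters (abridged; a real table has all 26).
    NEIGHBOURS = {'h': 'gjybn', 'i': 'uojk', 'm': 'njk', 'e': 'wrsd',
                  'o': 'ipkl', 'g': 'fhtvb', 'd': 'sfer'}
    DICTIONARY = {'home', 'good', 'hold', 'golf'}

    def corrections(typed):
        """Dictionary words reachable by swapping each typed letter for
        itself or one of its QWERTY neighbours."""
        found = set()
        def walk(prefix, rest):
            if not rest:
                if prefix in DICTIONARY:
                    found.add(prefix)
                return
            for c in rest[0] + NEIGHBOURS.get(rest[0], ''):
                walk(prefix + c, rest[1:])
        walk('', typed)
        return found

    # 'hime' isn't a word, but 'o' sits next to 'i' on the keyboard, so:
    print(corrections('hime'))  # {'home'}

Scaled up to a real dictionary and a full neighbour table, the candidate set stays small for exactly the reason in point (b): few neighbour substitutions produce real words.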

To round off the keyboard commentary, I wouldn’t be surprised to see some sort of convoluted 3rd party case that has a flap with little actual buttons to overlay the keyboard. In any case, I’m sure that the iPhone is significantly better than keypad predictive text typing; the advantages of having a software keyboard outweigh the downsides in my opinion at the moment.

That combination music player/phones are fundamentally flawed — That’s the worst argument I’ve ever heard for why the iPhone will fail (no link for the attribution). The argument is that phones and music players don’t mix; taking out earbuds to answer a call or swapping earbuds with a Bluetooth headset are unacceptable options, apparently. How do you reckon people currently answer their phone while listening to their iPod? This is actually a killer app for me — I frequently miss calls because I’m listening to music on the walk to work, and having the player fade out when a call comes in is one of the most practical things about this phone. (Not that that’s unique to the iPhone, of course.) It’s not that music phones haven’t been popular before now because there’s anything wrong with the idea; it’s because (a) they haven’t been iPods (the ROKR doesn’t count, either), and/or (b) they’ve had crappy implementations.

No wireless syncing — Duh. You have to plug it in to charge, right? Until Apple starts using Splashpower or something, this is such a non-issue.

No modem capability — This feature would be obsolete in a year or two with ubiquitous EVDO and wifi anyway. The point of the iPhone is that it’s useful when you’re away from the computer. This is too much of a niche feature to worry about, considering the consumer market.

Non-replaceable battery — Geez, I wish this “iPod crappy battery” meme would die a swift death. Are people saying they want to carry around a spare phone battery with them? Or even that they would? And this just isn’t a device that will live long past the lifetime of its battery (unlike, say, a digital camera these days). If it does, Apple will be more than happy (as with the iPod) to charge you for replacing the battery in their support centres. A closed case makes for a much cleaner design, both figuratively and literally. I’m more than happy with this so-called “downside”.

Poor battery life — I think it’s too early to call this one. The thing isn’t even out there yet. And this is a first generation product. Having said that, I’m surprised to learn that the battery life of the (hard drive) iPod has only doubled over its six year lifetime. Although the modern iPods do sport a much bigger, brighter, and higher res screen. In any case, I have absolutely no problem with docking my phone every day for syncing purposes (in fact, I can’t wait to have unified syncing/charging as with the iPod).

Consider also what you get in exchange for that battery life. The three sensors that everyone loves; the 160 ppi display that text and video have never looked so good on (here are baby steps towards resolution independence; especially see the zooming pinch that works in emails and web browsing as well as photo viewing). This is a screen the same physical size as the Zune’s, but with double the number of pixels.

And this thing is only 0.03 inches thicker than the current iPod. Motorola proved with the RAZR that pockets can fit objects that have significant width and height (like a wallet, surprisingly enough) but if you make it thin, it disappears. And the iPhone is thinner than the RAZR by more than a couple of millimetres! It could have a larger battery, but it’s not worth compromising the design.

And finally, most importantly, no 3rd party apps — Take a look at the main screen of the interface, with all the buttons. Down the bottom there are the “big four”: phone, email, Safari, iPod. (The icon for the iPod is going to need changing eventually: I predict iTunes-like music notes.) The majority of the screen above is taken up by other apps and widgets. Or are they all just widgets? There’s the rub. If the number of widgets were fixed, there’s no way that they’d be organised as a group of 11 in almost three rows, with space for five more on the screen, without the intention of eventually adding more.

Apple has said no 3rd party applications, but widgets fall in a grey area. I’m predicting that when this thing’s released, or thereabouts, Dashcode will be able to create restricted widgets for it. (By “restricted” I mean no Cocoa.) Apps are a different story; Apple’s clearly tied that down to the four at the bottom of the screen. A very poor man’s Dock, if you will. But the Dashboard exists to be filled (currency conversion, anyone?), and it’s the addition of extra widgets that will make the device expandable enough to abate the clamouring for additions.

Perhaps, though, this will only come in time with faster processors and more memory; who’s to say that Apple hasn’t totally maxed out the capacity of its resources? They might like nothing more than to let you add widgets, but maybe there’s literally only enough RAM for what they’ve already got on there. In any case, time will tell. The killer apps for this device are simplicity and interface; the built-in functionality really is enough for most people. I don’t think the lack of expandability, at this stage, is going to hurt sales one bit, despite anecdotal evidence to the contrary.


In closing, I think that people are misconceiving the iPhone because it is truly the first of its kind. There have been expensive smart phones before, but they have been marketed not as consumer products but as business tools. Work while you’re not at your desk. A very, very small minority of consumers have ponied up the cash and been excited by the prospects of developing Java apps for their phone. Consumers in general don’t want to spend money on their phone, and Apple is changing that. They already spend the money on an iPod. An iPod that makes phone calls and does SMS better than any phone they’ve used is a novel concept, but one that is a logical upsell from an iPod by itself.

Ten million units in the first year is a big number to be aiming for, but I think they’ll do it, and more. After all, 1% of the phone market is much less than the Mac’s market share. The only stumbling block I can see is if Cingular ties the phone to some ridiculously expensive plan that simply won’t justify the consumer nature of the device.

2007/01/12

Wherefore art thou iPhone?

“Wherefore art thou …” is the most incorrectly quoted piece of Shakespeare I know. In this short piece, I wonder “why is the iPhone thus named?”.

A couple of months back I wrote some thoughts about the rumoured iPhone. To be honest, after Cisco released their own iPhone product, I thought it a fair chance that the Apple iPhone was non-existent. Some of the rumours were sketchy enough that Chinese Whispers would account for the confusion. Obviously not.

In my original post, I wrote that “it would be ludicrous to put aside the huge mindshare behind their most successful product and supplant the iPod with a superior device.” My prediction was that an Apple phone would be branded as an iPod. I was wrong, evidently.

I understand that Apple thinks that the iPhone is the next Big Thing. I don’t blame them. Considering the differences between the iPod and the iPhone — hardware, operating system, interface, design, the people working on the thing — I can understand why the product is viewed in isolation and introduced to the world the same way. It’s totally new, and deserves a totally new name.

But it’s not just a phone. Steve Jobs touted that fact during the keynote — (paraphrasing) “we’re introducing three new products today…a touchscreen iPod…a phone…and an internet communicator” (whatever he meant by that last one — perhaps it will end up having iChat installed as well).

Here’s my argument in a nutshell: the iPod has a wonderfully ambiguous product name, and it does more than one thing (now; i.e., it plays movies). The iPhone, by contrast, already is more than a phone, but its name does not reflect that. In five years’ time, when we’re all carrying around iPhones but using them more for web access and music, won’t that be a little weird?

The fact that most people carry mobile phones and many people carry iPods should make it fairly evident that one day the functionality of the two devices will merge. I’ve heard of some very new mobile phones with hard drives that give a classic iPod a good run for its money. As it stands, the iPhone will now (very slowly) cannibalise iPod sales until the iPod brand no longer exists. By calling its new product the “iPhone” and not the “iPod phone”, Apple has doomed its most popular product line ever. This echoes the demise of the Apple II after the Mac was introduced in the eighties, but it doesn’t have to be that way.

Has “iPod” become such a generic term that people view it exclusively as a music player and wouldn’t warm to the idea of also using it as a phone? Perhaps. On the other hand (to use a word that John Gruber popularised), the parlay from iPod to “iPod phone” is a piece of cake. And like I said, the iPhone is more than a phone, so why not call it an iPod?

My secret desire is that the whole Cisco lawsuit thing will end up forcing Apple to change the name of their device to something like “iPod phone”. (Reports from the expo say there isn’t actually a brand name on the demo units.) It just makes more sense that way when you look at the device in its historical context. And many people will be using the thing more as an iPod anyway. But I can live with a bad name. It’s the actual product that counts, and, well, I think it speaks for itself.

2007/01/08

Why I want to buy an iMac

A few years ago, I bought a 12 inch PowerBook and said it’d last me until I finished my PhD. That is, I wouldn’t buy a new computer until I got a job. Well … I changed my mind. And over the last year or so, I’ve realised I don’t want a laptop. As a Mac user, my choices are nicely limited: Mac Pro, Mac mini, or iMac. And it’s the last of these that I’m keen on. Here are some of my thoughts revolving around this decision.

Why a new computer?

Firstly: why do I want a new computer now? Since I bought this 867 MHz, 640 MB RAM machine 3.5 years ago, Apple has switched to Intel processors, and a comparable machine now has something like a dual core 2 GHz processor and 2 GB RAM (and is around $1000 cheaper). That’s approximately following Moore’s law, with roughly a fourfold increase (two doublings) in performance. And boy, does my computer feel pokey these days.
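As a sanity check on that claim, here’s a quick back-of-envelope calculation of my own (using the commonly quoted 18- and 24-month doubling periods), sketched in Python:

    # How many performance doublings does Moore's law predict over 3.5 years?
    years = 3.5
    for months_per_doubling in (18, 24):
        doublings = years * 12 / months_per_doubling
        print(f"{months_per_doubling}-month doubling: {2 ** doublings:.1f}x over {years} years")
    # 18-month doubling: 5.0x; 24-month doubling: 3.4x

That brackets a fourfold increase nicely, so the comparison above really is in Moore’s law territory.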

With Leopard coming along, I want to buy an iMac with the next product line refresh, in order to get the OS update “for free”. My notebook is simply insufficient these days, and I’ll give two examples of why I think that: iPhoto is too slow, and the lack of USB 2 makes transferring photos painful; and iTunes is compromised by too little disk space and exhibits poor performance — there’s nothing worse than iTunes crashing while streaming music during a party. I’ve also got an iPod shuffle lying around that I’d love to be able to use.

Which desktop?

Regarding the choice between those three computers, the Mac Pro is completely out of my league; the things I do at home don’t justify the price premium for the best performance of the day. And yet I find the Mac mini distinctly underwhelming. Upgrading the mini to match the features of an iMac yields a more expensive unit that is still inferior. I don’t want to go into the details too much, but at $2100 (iMac) vs. $2500 (mini) — rounding to the nearest $100 at education prices — the comparison just doesn’t stack up, all else being equal with 1 GB RAM and a 20 inch display (iMac first in each case):

  • 2.16 GHz vs. 1.83 GHz Intel Core Duo;
  • 250 GB, 7200 rpm vs. 160 GB, 5400 rpm hard drive;
  • an ATI graphics card of some sort vs. embedded graphics.

I can see why Apple sells the Mac mini: if people are buying them, they’re making a killing. For $400 more, I can buy a mini that’s slower, holds less, has worse graphics, and comes with no keyboard or mouse. Let’s be fair, shop around for a cheaper display, and say the prices are equal. I’m still not seeing the attraction of the mini. It’s just not for me, or (in my opinion) for anyone else who’s also buying a display.

So why not a notebook?

Several factors have led me to dismiss the idea of using a notebook as my primary computer. The first is data loss: Mac OS X 10.5 will have built-in backup (“Time Machine”, probably not a permanent link unfortunately), which requires an external hard drive for mirroring purposes. I hardly back up at all at the moment, and I’m scared of Bad Things Happening. Unless they start selling notebooks with two hard drives, I’m not using one as my main machine.

Rigging up a notebook with an external hard drive ties it to a desk (and is tedious as hell for various reasons), which brings me to my next point.

Notebooks cannot be used ergonomically. Either the keyboard’s in the right spot and the screen’s too low, or the screen’s at eye level but the keyboard’s impossible to type on. This means bad backs and headaches for long stretches of work. If you don’t understand this point, you’re younger than 25.

Another point is noise. My laptop is louder than most because the fans are old and need cleaning or replacing. But the fact that they drone on hasn’t been mitigated in Apple’s current lineup — the CPUs used run too fast and too hot to go without fans (though it would be marketing suicide to downclock them). I was very impressed reading an iMac review at silentpcreview.com (linked to page four):

With a maximum power draw of 63W, the iMac certainly qualifies as a low power system. At idle, the system drew 46W, which will qualify for approval from EnergyStar if their current draft computer spec makes it to the planned 2007 release. Even better, the system falls back into a low power mode after being left alone for a few minutes, dropping the power even more to just 33W. By way of comparison, the lowest idle power consumption we’ve ever seen from a custom built system is 36W — and that doesn’t include an LCD monitor.

[…]

The energy efficiency of the iMac solves the mystery of how it is able to get away with so little cooling. At first glance, the numbers don’t look that impressive, but keep in mind that all of these numbers include the power required by the LCD screen. Stand-alone LCD monitors typically draw between 30~40W from the wall, so we were quite impressed when the entire system managed to draw this little power.

(The low noise from low power consumption is equally appealing to the side of me that is concerned about the environmental issues of running a computer 24/7.)

Their testing showed negligible noise increases even with hard drive seeks and full CPU activity. Especially when iTunes is playing music to the living room, I don’t want my computer creating white noise. That simply isn’t the case with notebooks these days; correct me if I’m wrong, as I’ve had essentially no experience with the MacBook line.

Finally, screen size. This is a big one for me. I’ve never really used a Mac with more than 1024 by 768 pixels. A 20 inch LCD just sounds like a dream.

As an aside, I really like the idea of some sort of future portable that syncs data with a home computer, is very small, and doesn’t do too much. I might manage to critique people’s desires for an “ultra-portable” Mac notebook for Macworld before the event, but this post is taking long enough already.

So, in summary: with an iMac similar in price to a MacBook, the advantages of good ergonomics, easier data protection and a big screen easily outweigh the portability advantage of a notebook for me. I’m hoping for an iMac refresh sooner rather than later (moving to quad core would be better than I can dream) so I can justify buying one as soon as possible.

2007/01/07

The Prestige, by Christopher Nolan

A few years ago, I had a folder containing two things: on one side, descriptions of and instructions for magic tricks, generally sleight-of-hand card tricks; on the other, printouts of articles written about, and more often authored by, Nikola Tesla. They were my two biggest obsessions of the early 2000s, I’d say.

Imagine my excitement to hear of a movie that revolved around exactly these two concepts! Unfortunately, by then it was some five years later, and my obsessions had moved on. I would like to be able to say that I am always a little ahead of my time, but modesty and, moreover, common sense prevent me.

In any case, Christopher Nolan is my favourite “new director”, and the previous movies of his that I’ve seen (Memento, Insomnia, Batman Begins) I found, respectively: amazing; interesting; and the perfect superhero movie. As it happens, 2006 was no good for me actually finding the time to see or read about movies; that particular obsession has been replaced, and by the time I got around to seeing The Prestige, I had actually forgotten who made it. Foolish me.

Similar to Memento in that a second viewing will reveal a wealth of information missed in the first sitting, the story of The Prestige (based on a novel) remains simple despite layers of narrative that could have overwhelmed it. As a character study, it fits my idea of a perfect story of revenge by showing that opposites can be very similar indeed. The less said about the story, though, the better; a general rule that film reviewers should more often abide by.

Christian Bale and Michael Caine (of course) both act wonderfully, and Scarlett Johansson (of course) provides relief for sore eyes. Hugh Jackman, I’m afraid, was competent but didn’t develop enough of his own character in the role. He’s often just “Hugh Jackman” to me in many of the movies I see him in; frankly, he should stick to theatre and musical work, where I hear he excels.

The highlight of the visual design for me was the backstage peek at the intricate mechanical workings of the magic tricks. There’s a certain nostalgia I have (and I don’t know where it came from) for purely mechanical design, from before relays, electronics, or plastic: I love the idea of hand-crafted gadgets with springs, levers, gears, and linkages in shiny brass finishes; elegant mechanical design embodied in delicate yet robust construction.

The film is full of details that I missed the first time. I can’t wait to see it again.

2007/01/04

The Mountain Goats, Fowlers Live

I’m embarrassed, in a way, to admit that I only recently started listening to the Mountain Goats, because they’ve been around since their albums were released on cassette. They’re the crazy type of artist that brings out a record every year or two. Anyway, after hearing they were coming to Adelaide again (I’d missed them previously), I bought their most recent album “Get Lonely” a few weeks back and have been listening to it since. I guess it’s my type of music. Feel free to read about them on allmusic.com; it seems a fair introduction.

I would say that their live performance didn’t surprise me very much. But it did impress me a lot. This is the first acoustic performance I can remember that had no drums, just guitar and bass. The write-up above calls them “militantly lo-fi”, a term that suits. Even more so than on their recordings, their music is stripped back but doesn’t feel empty. Quite the contrary: the energy put in by the lead and his bass player easily kept the gig trundling along nicely.

A highlight for me was the incredible lengths the singer went to in introducing his songs. Eloquent, verbose, and witty, his introductions echoed the clever lyrics of the songs themselves.

While it’s common to see the lead rocking out to his own music, I was really impressed by the bass player, who played like a king, drank Jameson’s from the bottle, and had an awesome time doing it.

The set list was predominantly from “Get Lonely”, but frequently featured older songs; I know this because I didn’t know them. I had the misfortune to stand in front of an avid fan who sang along to most of the songs… luckily he was generally in tune, and drowned out by the actual performers I went to see.

While I can’t say that I’m going to thoroughly investigate the Mountain Goats’ prodigious back catalogue, I would like to check out a couple of their more well-respected works. You can’t go through 16 years and almost as many albums without some cracking work. A good band for me to keep an eye on in the future.

2006/12/24

2006 Chocolate Olympics

Shout out to Lozza, who's better organised than I. This year's Chocolate Bean Christmas staff party was brief but greatly indulgent; who knew you could get drunk from chocolate? The following day, I had a chocolate hangover. I swear. Not fun at all. I came equal second in a particularly disgusting set of events.

Here're a couple of photos that can't begin to describe the feeling you get from eating molten chocolate and feeling it solidify in your oesophagus. Dunking for strawberries: so delicious… but so messy. I don't think I'm ever going to clean my shirt. Then there're my lovely chocolate compatriots, and Lauren's success with big biting. I'm still munching on that block, days and days later. Yum.

Happy holidays; mine so far have sure been great.

Pan's Labyrinth by Guillermo del Toro

Last night I had the opportunity to see the latest film by the interesting Mexican director Guillermo del Toro, whose previous works include The Devil’s Backbone, which I’ve seen and loved, and Hellboy, which I wanted to see but heard wasn’t the best movie ever (my standards have been high recently; to be honest, I’ve hardly gone to the cinema at all this year).

What I found interesting about this movie (I’m not so much into describing what happens — that’s for you to find out when you watch it) was the juxtaposition between fantasy and violence; think The Nightmare Before Christmas meets Saving Private Ryan (another movie I haven’t seen, sigh). This is an odd combination, and one that I would think appealing to a rather limited audience; nonetheless, it works very well for me.

On a less positive note, I did feel that the fairy tale aspects of the story were rather shallow and, worse, underdeveloped. When viewed from the perspective of the central character, however, this flaw can be partially justified. There was certainly no lack of imagination involved in putting together the violent imagery, or time spent showing it on camera; here we see where del Toro’s real forte lies (some of the violence is indeed deliciously gruesome). Greater balance between the two halves, particularly because the fantasy is where the main story lies and the violent reality is merely context for some of the characters, would have improved the impact of the film. In my opinion. If you love violence and aren’t so keen on fairy tales, you won’t have such a problem.

I was going to be critical of the acting of the film’s fantasy character, the faun, but then I read that the actor was American and couldn’t speak a word of Spanish. I really wonder at the rationale behind such a decision — if even an Australian watching the movie can be irked by his performance, surely it couldn’t have been that hard to hire a Spanish-fluent actor instead? (On the other hand, Ron Perlman makes a fine counter-argument.)

On a more positive note, I found the characters and acting in the rest of the movie particularly good. The main character Ofelia (played by 13-year-old Ivana Baquero) is like a Spanish Miette, perfect to a tee. Even the villain of the piece, who would be so easy to stereotype in a film like this, comes across with intelligence and flaws to counterpoint his evil.

Finally, I would be omitting one of the highlights of the film if I didn’t explicitly praise its artistic vision. The reality in the film moves from dream-like beauty to horror, and the fantastical scenes extend from and mirror this. While I believe the interpretation of the fairy tale didn’t quite fit the balance given to the fantasy in the film, overall it is a wonderful vision.

2006/11/30

Motorola's e-paper phone

Via macsurfer, I read at the Register that Motorola is shipping a 9mm thick (!) phone with an electronic paper display. I had no idea that e-paper was ready for this kind of thing — great news for the future.

I guess this is how new displays are going to sneak up on us. First it was OLED mobile displays, and now e-paper. It’s only a matter of time before they’re enlarged and LCD becomes obsolete.

I think this really shows how well Motorola have turned themselves around in embracing radical new ideas and technology. And they have a really good design sense at the moment — steering clear of the bulky, over-featured phones offered by the other major manufacturers. This is similar to how I’d imagine an Apple phone, actually. Bravo.

2006/11/27

Yann Tiersen at Carrick Hill

Well, I had a pretty special weekend. A long while back, I discovered by chance that Yann Tiersen was going to be touring Australia, and that he was even coming to Adelaide. Then I discovered he was playing two days at the French Festival, and tickets were only $20 a day. The marketing wasn’t great — imagine if I had missed it! I was totally there, along with the man with whom I became a Yann Tiersen fan.

You could call us “Yann buddies”, I suppose. For those not in the know, Yann Tiersen is widely known outside of France for his soundtrack for the movie Amélie, which he somewhat lifted from his other albums. It’s a lovely piece of work, and the movie wouldn’t have been the same without it.

This concert was the best I’ve been to, because what I got was so unexpected. I had heard a rumour that he wouldn’t be playing live at all — that it was a cruel trick and he’d be playing via satellite or some such (I mean, $20?! That’s like 10 euro and a bit). And others were less than keen because he was playing guitars, when his earlier albums were very much not rock; more classically inspired, I suppose.

So the weekend crept up on me over the months (I’d bought my ticket as soon as I could), and suddenly I was calling up Chris asking when he’d come by so we could get there. My denial of cars does make transport rather dependent on others at times.

And with no expectations at all, after a day of light drinking, heavy eating, and tremendous heat, there appeared an incredible band with some sort of progressive rock sound, often playing tunes I knew much better as piano-accompanied pieces. Not that the classical elements were all gone, but damn did that band know how to rock out. Yann switched between multiple guitars, violin, accordion, and mini-pianos (?) almost just because he could, backed by a fat bass, a cello (and other strings, albeit played like a cello), and a second guitar, whose player’s methods ranged from violin bow to electric drill to produce sound from the thing. It meshed so perfectly with the metamorphosed songs I knew and the newer songs I didn’t; I never imagined it, and it surpassed anything I could have imagined had I tried.

His skills on the violin and accordion really shone. When playing those instruments, his whole physique would change, his face would soften, and his hands would move faster than I could keep track of — how can such music be possible?

As the first day was sublime, I made a trip of it on Sunday to repeat the whole event. And the second day in a row did not disappoint.

The only other thing I have to say is that his earlier music hasn’t left its mark on his demeanour; maybe I can’t see through the French exterior, but he looks far more haunted than I had imagined from that earlier music. The musical direction I heard yesterday, frenzied and powerful, suits his face a lot more. But why should it be a beautiful man who makes beautiful music?

I regret terribly I didn’t approach and talk to him briefly, but what would I have said? In another lifetime…

2006/11/22

Resolution independence: the problem with bitmaps

I can’t keep up with the discussion on resolution independence that started a little while back and to which I added my thoughts the other day.

The IconFactory rebutted the claims about their vector image being composed of bitmaps — how else do we expect such effects to appear in icons? And fair enough. The field of raster image processing is far more advanced than vector processing; an image format needs vector support for effects like Gaussian blurs before they can be included in a vector image. None of this is impossible (see Windows Vista), but it does require work.

Many people have been saying that high resolution bitmaps are all we need. And I challenge that claim. Again, I’ll reference Ian Griffiths; this time, see this article discussing Apple’s then-new 30 inch LCD display. In particular, note the visual artifacts shown in the “Click me!” button he shows; that’s the crux of the argument.

In a nutshell: bitmaps are fine, provided they exist by themselves. Icons, for example, are possibly best represented as some ridiculously large bitmap with a million pixels or so. All an icon is going to do is sit on screen somewhere and presumably change size at various points. Nothing to worry about; decimation with low-pass filters (or whatever “smart” resizing algorithm you wish) works fine to rescale the image nicely (see Mac OS X’s Dock).
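As an aside, this sort of filtered downscaling is a one-liner in most imaging libraries these days. A minimal sketch in Python with the Pillow library (the file names are placeholders of mine, not anything Apple ships):

    # Downscale a large master icon with Lanczos resampling, a windowed-sinc
    # low-pass filter that avoids the aliasing of naive nearest-neighbour scaling.
    from PIL import Image

    master = Image.open("icon.png")   # say, a 1024x1024 source bitmap
    for size in (128, 64, 32, 16):
        master.resize((size, size), Image.LANCZOS).save(f"icon_{size}.png")

This works precisely because an icon stands alone: nothing else on screen has to line up with it pixel-for-pixel.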

However, interface elements that have to align with each other must not exhibit the scaling artifacts that are inevitable when dealing with raster images. “Resolution independence” implies that the sizes of interface objects can be specified in real-world units (points, millimetres, inches, …) and rendered at the correct size. A bitmap has no provision for scaling its internal features accordingly. What do I mean by this?

Say you’ve got the edge of a scroll bar rendered with blurs, transparency, whatever effects you like, and this has to mesh with a part of the window that cannot be drawn as a bitmap of the same size. When these bitmaps are scaled, no matter how high their resolution, the time comes to remove pixels from the image, and rounding issues decide which parts of the image stay and which are removed. As with anti-aliasing, a sharp hairline may end up a fuzzy mess if it doesn’t snap exactly to an integer number of display pixels. Bitmaps of different sizes will rescale and render such features in an uncontrollable way, so you’ll end up with lines that change width when they shouldn’t, or off-by-one errors between adjacent interface elements.
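To make the rounding problem concrete, here’s a toy calculation of my own (not anyone’s actual rendering code). Three identical one-pixel hairlines, scaled by the same non-integer factor, come out with different widths:

    # A 1-pixel hairline at source row r covers destination rows
    # floor(r*s) .. floor((r+1)*s) - 1 after scaling by factor s.
    import math

    s = 1.37  # an arbitrary non-integer scale factor
    for r in (10, 11, 12):
        top = math.floor(r * s)
        bottom = math.floor((r + 1) * s) - 1
        print(f"source row {r}: destination rows {top}..{bottom} (width {bottom - top + 1})")
    # Prints widths 2, 1, 1: identical lines, different rendered thicknesses.

A vector description, by contrast, lets the renderer decide the line’s width once, consistently, at the final sampling step.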

And even with high-res displays, it’s surprising how much a “one pixel off” error stands out. This problem can only be solved satisfactorily (once and for all) if interface elements that must co-exist are all described as vectors and sampled to the display resolution together, as one.

To summarise: icons are fine as bitmaps. But there’s more to a graphical user interface than icons.

2006/11/19

iPod vs. iPhone vs. everyone else

At this stage, I’d wager that the iPhone will not exist. But before I go into that, I’d like to provide some link love for people who write better than I do. These posts only tend to happen when I read similar ideas in dissimilar articles, especially since my del.icio.us client broke. It’s just coincidence, but the (localised) convergence of ideas makes me want to write.

Everyone’s favourite chocolate-inspired developer, Scott Stevenson, writes about user interfaces, primarily concerned with the (perhaps) storm-in-a-teacup debate about whether bells and whistles are really necessary. He also provides the first reference I’ve seen in years to the (classic) Mac OS Oscar the Grouch trash animation. “Oh, I love trash!” Anyway, let’s see if I can extract the idea he talks about:

Up until the last few years, a Mac app with a nonstandard user interface usually came about because the programmer didn’t know much about the Mac. They didn’t see any particular problem with using a push button as [a] toggle switch. […] The other major difference is that th[e]se new interface concepts are designed by people that specialize in it. […] This is in stark contrast to Unix developers in the past who would basically make educated guesses about user interface.

Hold that thought for later — arbitrary engineers don’t know interfaces. In an unrelated article on Apple’s possible “iPhone”, jesper from sweden writes about how phone software (that is, the user interface of mobile phones) is, give or take, atrocious:

I have a Sony Ericsson model that can nail a note to the standby menu screen, and the Nokia I used to have slapped the Sony Ericsson around the block when it came to the address book.

I just checked, and it takes more than six button presses, after the message is written, to send a message on my Sony Ericsson phone. Back when Nokia became popular, it was because they managed to design their entire interface around two buttons (the big button and cancel) plus two navigation buttons, up and down — and it was the simplest, easiest phone on the market to use. The self-imposed hardware restrictions forced them to design a good user interface.

Somewhere in there, the whole idea of simplicity was totally lost, and within a couple of generations Nokia phones were no better than all the others, with up to five or six or seven buttons to do various things and then four navigation buttons. Is it any wonder that people want “just a phone” these days?

There are so many preconceived notions I have about mobile phones: from how ring tones are annoying, to how predictive text could be so much better (give me a damn complete dictionary and make it pre-emptive; that’s a good start), to how the whole communication model just kind of happened — no-one thought through the popularity of asynchronous conversations via text messages. Could you do the same thing with voice? (Google just implemented something similar with Google Talk.)

What I’m trying to say is that there is a mold out from which Apple could very much break. Apple could design a phone without a numeric keypad. Think about it. When do you actually have to type numbers into a phone these days? Is it often enough to dominate the hardware interface?

Finally, Brian Tiemann writes about how the iPhone might not be a phone, but just an iPod. And he nails it. Why would Apple release a separate product “the iPhone”? John Gruber wrote earlier this year “Apple’s only serious competition [to the iPod] to date has been itself.”

It would be ludicrous to put aside the huge mindshare behind their most successful product and supplant the iPod with a superior device. “Are you getting an iPod?” — “Nah, the iPhone is so much better”. Apple trademarks names it doesn’t use just so other people can’t. Just as they did with the “iPod photo” (!) and the “iPod video”, Apple will release the “iPod phone” or “iPod talk”, whose functionality will soon become ubiquitous enough that the suffix is dropped. (Let’s face it; the rumours are solid enough that something’s going on.)

Apple doesn’t seem to think its customers will be confused by having multiple products, over time, with the same name. There’s no need for a new name. It’s not the iPod X34, replaced by the iPod GH87, with its baby brother iPod LMP331. They’re just iPods, and people, unsurprisingly, seem to prefer the simpler title (if they even notice the dichotomy in nomenclature).

In summary: Apple’s gonna make a phone. But it’s going to be an iPod. And luckily, this time round, they didn’t paint themselves into a corner with an overly restrictive name for their product.

And that’s a selection of thoughts in my head from earlier this afternoon.

2006/11/16

Debunking an anti-vector art argument

There’s been some recent discussion of Mac OS X’s upcoming resolution independence. I’ve been interested in this topic for a while, but never managed to write about it much. Eighteen months ago, Ian Griffiths discussed resolution independence in relation to Mac OS X’s upcoming support and what was already possible in Windows Vista. There are a couple of good examples in there, for general interest.

A couple of people have chimed in on why you don’t want vector art, mostly because when you shrink it down you don’t get results as high quality as a hand-tweaked bitmap. There are two things here, exactly analogous to font technology. Long story short: you don’t want to shrink a complex image down too far, because you’ll lose detail and the smaller objects in the image will simply disappear. It’s much better to design images for their display size; for example, thickening hairlines as the size decreases (just as fonts do with hinting and optical sizes).

Over at the Iconfactory, they cover such points and add another: vector art takes up more disk space when it’s complex. This is largely untrue. They use the example of the same image in “vector” PDF and in bitmap PNG, with the higher-quality PDF a whopping 30 times bigger. Opening these files in an image viewer also shows that displaying the PDF is far more processor-intensive.

Seemingly damning evidence. However, zoom in really close on this image and you’ll see the reason this so-called vector image is so large: the individual squares of flat colour show that it doesn’t use real vector gradients!

You can imagine that if this image is actually storing individual squares of colour at the size shown above (2000% magnification), then the file certainly is going to be enormous!

So what’s the deal? What has happened is that their drawing program has rasterised the gradient into the vector file, embedding an extremely high-resolution texture in the image; hence the huge file size and slow processing. So in actuality, the image is not a true vector image.

To be a true vector image, the gradients themselves would have to be represented as vectors as well. Vector gradients cannot be as complex as the pre-generated gradients used in their image, but they are actual vectors: the display technology rendering the gradient would compute only the pixels actually being displayed, in their subtly changing colours.
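A back-of-envelope comparison shows why this matters for file size. The numbers below are illustrative guesses of mine, not measurements of the Iconfactory’s file:

    # A true vector gradient stores a handful of parameters; a rasterised
    # gradient stores every sample of the ramp as pixels.
    stops = 2                          # start and end of a simple linear gradient
    bytes_per_stop = 2 * 4 + 4         # an (x, y) float position plus an RGBA colour
    vector_size = stops * bytes_per_stop

    width = height = 512               # a modest baked-in gradient texture
    raster_size = width * height * 4   # uncompressed RGBA

    print(f"vector description: ~{vector_size} bytes")
    print(f"rasterised texture: ~{raster_size // 1024} KiB")  # ~1024 KiB

Compression narrows the gap considerably, but a couple of dozen bytes of gradient description will always beat a baked-in texture.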

I’d like to conclude by quoting ssp, who has sensible things (as usual) to say:

I do think that using vector graphics in the process of icon design may be a good idea. Working with vector graphics seems to make people think more in terms of large structures than in terms of small details. And thus, for simplicity’s and clarity’s sake, basing an icon design on vector graphics may be a good thing to do.

Apple takes to the sky; airlines ‘amazed’

Recent news out of Apple announces iPod integration with airlines. This would be pretty nice, although something tells me it will be a first-class-only feature; sad news for poor flyers.

In a perplexing turn of events, however, it turns out that the announcement may have been a little premature. A Dutch friend informs me that two of the airlines, Holland’s KLM and Air France, haven’t signed up to any agreement and are actually rather perturbed about the whole incident. Dutch article here, or a Babelfish snippet:

KLM reacted however astonished to the communication of Apple. “it is correct that exploring conversations have been but the chance that it does not continue is now much larger than that it continues, however,” a spokesman of the Dutch airline company said.

(Ah, automated translation. I wonder if Google’s service works better; it isn’t yet available for translating from Dutch.)

This is the kind of thing you don’t expect to come out of Apple, given their notorious reactions to companies that do exactly this kind of thing to them. Considering the airlines involved all seem to be outside America, perhaps there were wires crossed somewhere over the ocean. Without forthcoming information (and you can guarantee there won’t be any), all we can do is watch on in amusement and puzzlement.

2006/11/05

Umberto Eco on religion

Stumbled across wise man Umberto Eco’s website, where he has an article About God and Dan Brown that closes with:

I think I agree with Joyce’s lapsed Catholic hero in A Portrait of the Artist as a Young Man: “What kind of liberation would that be to forsake an absurdity which is logical and coherent and to embrace one which is illogical and incoherent?” The religious celebration of Christmas is at least a clear and coherent absurdity. The commercial celebration is not even that.

Good stuff. To add some context, earlier in the piece:

Human beings are religious animals. It is psychologically very hard to go through life without the justification, and the hope, provided by religion.

I wonder if this will always be true. Science as a religion is still new (the philosophers seem to be ahead of everyone else, as usual) and there are millennia of religious devotion to overcome. I’m keen on the idea of a religion of society (clearly, we can’t all become hermits so we need to organise ourselves around something); would it be possible to craft that well enough to fulfil the hopes of the everyman?

Wine makes you strong

I like wine. You could say it’s somewhat of a genetic trait; both my father and my grandfather have a “healthy” fondness for the stuff. I can’t speak for them, but I tend to find that I also can’t stop drinking it, especially when it’s half decent. I really cannot understand how people stop after a single glass.

Digressing for a paragraph: I’ve heard people say that in blind tastings of red and white wine at equal temperature, people have only a 50% success rate in guessing the “colour”. I don’t believe it in general, although I’m sure some reds and some whites do taste similar. There’s no way a shiraz and a riesling (again, half decent) taste anywhere near the same. But I do believe that most of the wine-tasting experience is psychological.

Before today, I thought it was but a single glass of wine per day that was supposed to be good for you. But now I learn from the dubiously impartial red-wine-and-health.com:

The key to reaping the health benefits of red wine seems to be moderate consumption […] In the US, drinking in moderation means one glass for women, and one to two glasses for men.

Well, two glasses is better than one. But the good stuff follows…

The “sensible limits” in the UK and EU are two to three glasses of red wine per day for women and three to four glasses for men.

Huzzah! And we all know who lives better out of the Europeans and the Americans. Presumably, the Europeans drink their four glasses over the course of the day, not all at once. Also, I guess that’s not four glasses in one of these 900 mL beauties, either. That’s one dem fine wine glass, yes sir.

Not that you’d drink that much from one of those anyway, but I’m guessing they do mean four small glasses.

If I drank about two bottles of wine per week, that’d be almost nine cases per year. Assuming I’m drinking half-decent wine, that’s about $1000–$1500 per year for scientifically tested (and delicious) health benefits. I wonder…
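For the record, here’s my arithmetic, assuming $10–15 for a half-decent bottle (my own guess at a price range):

    bottles_per_year = 2 * 52                  # two bottles a week
    cases = bottles_per_year / 12              # about 8.7 dozen
    low, high = 10 * bottles_per_year, 15 * bottles_per_year
    print(f"{bottles_per_year} bottles = {cases:.1f} cases, ${low}-${high} per year")

That’s $1040–$1560; call it $1000–$1500.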