2007/12/31

‘The Beginner’s Guide to Winning the Nobel Prize’ by Peter Doherty

To take a break from some rather involving novels, I picked up ‘The Beginner’s Guide to Winning the Nobel Prize’ from where it had been lying dormant for a year or so on my bookshelf. Not exactly the kind of book that I’d buy for myself, but not an unwelcome present, either.

Peter Doherty won a Nobel Prize for his work in immunology, and this book is a pastiche of his recollections of winning the prize, a summary of the work itself, general thoughts about the future of science, the conflict between religion and science, and some general tips on how to win a Nobel Prize yourself. I’ve probably missed a few topics in there. The book is interesting and thoughtful, but not too insightful.

One of his comments that resonated with me was in the conclusion of his section on religion vs. science:

What greater betrayal can there be of God’s good grace, or the continuity of our species and all life, than to embrace polarised attitudes of mind and practices that compromise the lives and opportunities of the generations that are to come?

It’s a statement that I have a hard time believing anyone, religious or otherwise, could disagree with, and it’s a tidy summation of a morality for everyone.

‘Shantaram’ by Gregory David Roberts

I haven’t read such a page-turner as Shantaram in a long time. I read the whole thing in about 2 or 3 weeks (including a day out of good reading time for Blink). Considering it’s 900-odd pages long, that’s pretty good for me; admittedly, it’s been a slow end to the year.

Shantaram is a fictional story heavily inspired by a hugely pivotal part of the author’s life. To summarise the gist of the story: Gregory David Roberts escaped from a Melbourne jail while serving a twenty-year sentence and ended up in India, where, alone and unknown, his past life was slowly stripped from him and he began a journey towards another one. There is a sometimes disconcerting contrast between the voice of the author and the actions of the character in the book. It can be a bit of a shock to read about gouging people’s eyes out right after the author’s personal reflections on love and loneliness.

The fictionalisation of the story is part of what makes this book such an interesting read, but it’s the personal side that brings home the more philosophical moments. In broad brush strokes, this is obviously a novel that paints the picture of Roberts’ life at the time. His true story is amazing, and the life-changing effects on him are unmistakable (and indeed, emphasised in the book). Having worked with the Bombay mafia, however, he’s obviously writing fiction for the general detail of the story. Suspension of disbelief here gives the novel its immediate appeal, I think. Obviously the story itself is integral to the book. Without the personal side to buoy the narrative, however, the plot would probably be a little too neat and tidy; and yet, at the end of it all, the plot stops abruptly and finishes nowhere.

This is a book that, in softcover, has pages thin enough to make casual page turning harder than usual; it’s long enough that you might wonder why it wasn’t split in two. From the author’s point of view, though, splitting the book probably makes no sense at all, because it’s the whole spread of experience that he’s working from to write the story, and finishing it earlier would leave his emotional development unfinished.

Now, I can’t say that Roberts’ writing is perfect; I found he was occasionally over-enthusiastically profound. For example:

The truth is that, no matter what kind of game you find yourself in, no matter how good or bad the luck, you can change your life completely with a single thought or a single act of love.

But I forgive him due to his sincerity. After his experiences he’s allowed the exuberance.

2007/12/19

‘Blink’ by Malcolm Gladwell

I bought a book today as a gift for my cousin. But I read it first because, well, I had to ensure that the Christmas present was a good one, right? This is the second time that I’ve read a book cover-to-cover in a single day, and there really is something to be said for it. Edgar Allan Poe discussed the point once when talking about his short stories: everything that needs to be said is able to be digested as a whole. (Obviously he used more words than I.)

Don’t get me wrong; for many novels it’s an absurd idea to sit down and read until you’re done. But, for me, it worked for Perfume and today it worked for Malcolm Gladwell’s nonfiction Blink.

Malcolm Gladwell is a writer, and a journalist in the best sense of the word. The work of his I’ve read in The New Yorker has been well-researched and entertaining without exception, although I’ve only read a handful of his articles so far. I’m somewhat dismayed to have just discovered an extensive archive that I fear may take up a lot of my time in the near future.

He is also an excellent speaker. At TED he talked about pasta sauce and the way choice and taste aren’t as clear-cut as they seem. And at the New Yorker’s ‘2012’ conference he talked about genius and the difference between the geniuses of today and those of yesteryear. Both talks give a good insight into his intelligence, his wealth of knowledge, and his style of collecting and reporting information.

This is his second book. I haven’t read his first, The Tipping Point, but I will one day. (In fact, I’ll do a lot more reading in general, if my insatiable appetite for sleep ever slackens and my indelible desire to procrastinate dissipates.) Blink discusses, from a dizzying number of viewpoints, the ways in which our brains work in the seconds before conscious processing kicks in. “Blink, and you’ll miss it”. I won’t try and replicate his examples or spoil the more surprising results; suffice it to say that when an expert tells you their opinion on something after seemingly a split second’s thought, it’s worth trusting. On the other hand, overcoming our own gut reactions to things that we judge too quickly takes a lot of training — and in many cases is impossible. Our state of mind can influence our perception — no surprises there, I guess — but to such a degree that we should never take our own opinion of things too seriously.

Blink is an engrossing read by a writer who deserves his fame. Gladwell’s compilation of a slew of seemingly unrelated stories creates a compelling spiderweb of evidence to convince me, at least, that there’s a hell of a lot more going on in my brain than I give it credit for on a day-to-day basis. The most sobering part: think too much about something and you’ll destroy your opinion of it. Hmmm. I guess what I said above should now be reconsidered!

2007/12/07

Medicine

I’ve never been attracted by medicine as a science or profession. But I’ve become interested in the field recently, in a vague sort of way, because, well, I know I’m going to get sick one day. Everyone dies, right?

Disregarding outlandish (but hopeful and tempting) theories involving nanobots that replace our organs (Ray Kurzweil), and even an “anti-ageing singularity”, after which the rate at which we can prolong people’s lives exceeds the rate of their deterioration due to ageing (Aubrey de Grey), there are still huge amounts of progress to be made in the field. This point is made very clearly in an article in the New Yorker, “The Checklist”, discussing the huge improvements that can be made in intensive care simply by following checklists when performing tasks, rather than relying on memory and experience:

In the Keystone Initiative’s first eighteen months, the hospitals saved an estimated hundred and seventy-five million dollars in costs and more than fifteen hundred lives. The successes have been sustained for almost four years—all because of a stupid little checklist.

Boggles the mind, really. It’s sweating details like these that will keep us alive longer on average. For something even more amazing, again via the New Yorker, check out this speech on Regenerative Medicine. It’s now possible to grow bladders from scratch (from a sample) and implant them in the patient whose original requires replacing; kidneys are almost there as well (the bladder is the easiest because it’s hollow). This has just reached the implementation stage. And this is the stuff that can be done without stem cells. Fifty years ago, we couldn’t even transplant organs.

I don’t have a clue how these people do it. And I’ll no doubt never learn. But I can’t wait to live to see where we end up.

2007/12/03

Tolkien in a chocolate review?!?

Blogs. I hate the word. But I do love the medium. Seriously, think for a sec: where else can you read anything even nearly comparable? Don’t get me started on newspapers. (Well, in Australia they’re mostly tabloids in disguise, anyway.) Magazines work, as in, they’re interesting and stuff, but you’ll never read the raw, unadulterated opinion of some guy just smashing away at his keyboard (or some girl, well, tinkering at hers).

Take Brian Tiemann, who’s the subject of this piece. Don’t ask me who he is. I don’t even know why I read what he writes on a regular basis. He’s entertaining almost all of the time. I guess that’s about it. He often writes about things I like. For example, he’s just started reviewing some varieties of dark chocolate. (Which is truly my favourite.) And here’s a little piece of what he has to say about a particular brand:

The chocolate doesn’t really melt, it sort of collapses like a Jenga tower into a heap of rubble on the tongue, which you then have to sweep out of the way like the ruins of a decrepit Vegas casino redolent of pipe smoke and loveless sex.

You don’t get self-indulgent, brilliant evocations like that in a serious publication. Followed but one sentence later with:

Just a mouthful of wreckage that you’re eventually glad is gone, and a cloud of something gray and gassy, indistinct and vaguely sinister, floating over the whole scene, looking towards the West, only to be dissipated by a firm breeze from over the Sea

Imagine the brilliance of using a Tolkien metaphor to describe the aftertaste of poor chocolate. Would that ever work in a piece written for, you know, money? You’d have editors going “oh, no-one will follow that; it’s too many words, anyway”. And while they’d be right, they’d be depriving me of a moment’s joy at the end of a tedious day.

2007/12/01

“Theatre” by W. Somerset Maugham

A couple of years ago my father mentioned “The Razor’s Edge” by W. Somerset Maugham as one of his favourite books. His recommendation was, of course, good — I’m quite taken by that book. Well, since I liked that one so much, it was time to get another. I picked up a bunch of Maugham books in hardcover at a secondhand bookstore and just finished the first of them last night.

“Theatre” is an odd book. Most of the way through, to be honest, I wasn’t particularly enamoured of it. It lacked the style and gravitas of The Razor’s Edge that I so enjoyed; indeed, three paragraphs into the novel comes the phrase:

With the experienced actress’s instinct to fit the gesture to the word, by a movement of her neat hand she indicated the room through which she had just passed.

This is the kind of writing I abhor; not because of the old-fashioned wordy style (which does take a little getting used to — Maugham wrote many, many books back in the early–mid 1900s). Rather, it’s the explicitness of the description that gets my goat. It’s probably the easiest way to spot the terrible writing in books like “The Da Vinci Code” — everything is spelled out in excruciatingly unnecessary detail. (Note that it’s the ‘unnecessary’ there that’s the key word; I do like books that have lots of words.)

But Maugham isn’t a bad writer. As the novel progressed, it turned out that these passages reflect the inner dialogue of the main character, an ‘actress’ (that word’s not politically correct these days) who is self-centred and shallow; while we do empathise with her emotions, the writing style is almost a parody of her self-image and reinforces her vapid interpretation of the world.

As the book progresses, we are slowly treated to some outside interpretations of who this woman is, and those points of view jar with, or even contradict, what we’ve learnt through her eyes. So to dismiss this book early would be a mistake, because it’s only over time that the writing style reveals itself as a device to give insight into the character alongside the story. By the very end, her own plot lines (in her world) have been satisfactorily resolved, while the insight into her character has completed its descent from grace to emptiness. Or is it all of us who are empty and meaningless?

2007/11/21

‘On crappy reviews’

Simone Manganelli writes on crappy reviews at Technological Supernova.

I’d like to deconstruct [Andy Ihnatko’s Zune 2 review] to point out how typical this is of mainstream technology publications.

It’s good stuff; it captures exactly the sentiment I hold for the majority of reviews and general ‘tech info’ I read around the place. My writing certainly doesn’t stand up next to that of a journalist proper, so I’m one to talk, but all I really want to see is a story. A myriad of details out of context doesn’t help me form an impression of the device through the eyes of the reviewer. I want to know what you liked about it, or didn’t, and why — under the proviso that you’re well-versed enough in the field of whatever you’re reviewing that your ‘whys’ can be considered half-way informed and, even, objective.

It’s not entirely fair to Andy to use his piece as an example; he’s certainly amusing, at least. Don’t even get me started on the people who spread half-truths and pessimism around simply to get the rebuttals. Rebuttals equal page views, you see.

Can we get a moratorium on further useless technology articles? Please?

I wish.

Obviously the solution is to avoid reading reviews from people and places that you’ve previously discounted, and to only read the people you know to provide the good stuff, information-wise. News will travel through almost all sources, so you don’t need to subscribe to the mainstream ones in the first place unless you really want the firehose of information.

The real trick is to find someone who aggregates current affairs (of any kind) so that you’re only presented with material that passes through their quality filter (under the assumption that were you to perform a similar task, your lists would largely overlap). Sadly, such people are few and far between.

But really, the solution is just to ignore the crap. Seriously — there’s too much else to do :)

2007/10/27

I wish I had an Apple TV

About a year ago Apple pre-announced a product that is now called the Apple TV. It simply provides an interface through your TV to the media that exists on your computer. Movies and music on your computer can be displayed on your TV. Your computer doesn’t need to be hooked up right next to the TV with that adapter that you always lose.

When it was initially announced, I loved the idea of this product. I listen to my music through an Airport Express, which streams music playing in iTunes from my computer to my stereo. I buy my music predominantly through iTunes and the whole system works very well. Extending that metaphor to video is natural. What makes video on the Apple TV better than music is serialised TV shows through the iTunes Store: you buy the whole series and the episodes are transparently downloaded to your computer the day they’re broadcast on TV.

No hassle with ads or having to schedule TV at an exact time every week. A TiVo does this too (though we can’t get it in Australia), but, crucially, a TiVo can’t record what isn’t shown on TV. This also means back catalogues, currently the domain of DVD sales. Bittorrent will give you all this and more, but it’s illegal and not as easy to use as iTunes. (Remember, I’m talking about the Apple TV as a product to market to people.)

To summarise how the Apple TV fits in with Apple’s product lineup: it’s essentially a way to consume content from the iTunes Store. If you rip your DVDs onto your computer, that’s fine too, but the raison d’être is to make people buy series and movies through iTunes.

The kicker of all this is that I don’t have an Apple TV, for one reason only: it doesn’t have a composite video output. That’s the yellow cable in the red/yellow/white trio that used to be the standard for most video/audio connections. My TV is pretty damn big and old, and it only supports composite video. So that rules me out from the get-go. How many other people are in the same situation? According to Apple’s market research, I guess, not that many.

I’ve seen slews of requests for more features for the Apple TV. They’re all pretty similar to the recent article “Apple TV future” at Apple Pulse. Recording and DVD drive, so they say. That’ll make people stand up and buy this thing. Bollocks. I want an Apple TV so I can avoid regular broadcast TV and eventually ditch my DVD collection.

I don’t think something like the Apple TV will be a big seller for a long time. How many Airport Express units were sold on the basis of their wireless music streaming? Over the next five years I think there is a market to be created, however. Don’t forget that the iPod took that long to become a monster, after all.

The Apple TV in conjunction with the iTunes Store is a platform with great potential (either one on its own requires an equivalent of the other). I’d be happy to align my allegiances to any other company that was doing the same sort of thing, too. I’ve got a soft spot for Apple, though, and there’s no other big company approaching it like they are (as far as I know). It’s a travesty that many of the content owners aren’t going along for the ride, but I desperately hope that this is just a matter of time.

Revisiting code

Michael McCracken said it first:

Once you get a piece of code to the point where you believe it works - it’s passing its tests - go back over it and edit it. That is, go back and edit it for clarity, flow, and style. Just as if it were an essay.

Les Orchard at 0xDECAFBAD (love that site name) said it better:

Ugly code kills motivation and comprehension

There’s a curious tension between the “it works; ship it” mentality and those who say “this code isn’t perfect; it isn’t ready”. I don’t have a background in computer science, so the code that I ship tends to be the “it works” variety; over the months and years I’ll see ways to do things better (often through bugs cropping up that shouldn’t have happened in the first place) and the code base improves piecemeal.

But one thing I have noticed is that if I write really nice documentation (of both the user interface and the code itself) I’m more inclined to go back in and start messing around with little tune-ups. LaTeX’s docstrip is ideal for this because you can freely mix code and documentation (and even present the code out of order). Some of my time might be wasted choosing the font that my code is presented in, and nicely explaining my algorithms with carefully typeset figures and tables, but the upshot is that it makes things a lot more accessible for someone to edit in the future. And that includes me.

(Oh, when I say that, I don’t mean it’s a good idea to program in LaTeX; it’s rather hideous, actually. But docstrip doesn’t have to be used for LaTeX programming, and one day I’ll experiment with writing other code with it.)
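To make the docstrip idea a little more concrete, here’s a minimal sketch; the file names and the \hello macro are invented for illustration. The .dtx file interleaves typeset commentary with the code, and a tiny companion .ins file extracts the runnable .sty from it:

    % example.dtx -- commentary and code, interleaved
    % \section{Implementation}
    % We define |\hello|, which greets its single argument.
    %    \begin{macrocode}
    %<*package>
    \newcommand*\hello[1]{Hello, #1!}
    %</package>
    %    \end{macrocode}

    % example.ins -- running "latex example.ins" writes example.sty
    \input docstrip.tex
    \generate{\file{example.sty}{\from{example.dtx}{package}}}
    \endbatchfile

Everything behind a % is typeset as documentation; the %<*package> … %</package> guards mark which lines of code end up in which generated file.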

2007/10/23

Technical libraries for technical times

I hope that libraries of technical information are going to be unrecognisable in the future. And I hope that information is going to become globalised and centralised. This post marks the first solid thoughts I’ve been able to put together on some ideas I’ve been vaguely musing over in the last few months, I guess. The themes are how we are taught, how we learn, and how we research. Obviously, my viewpoint is going to be very biased towards my experiences, but I hope that my ideas here can eventually be generalised.

There are several projects around the place to create open centres of learning, an initiative that I strongly support. Unfortunately, the problem so far seems to be that it’s an incredible task to put even a semester’s worth of learning material together, and few people are creating content for these websites. Examples are Wikiversity, Wikibooks, The Open University and Connexions. Browsing through these websites reveals extraordinary nuggets of information completely out of context, and shows how very far we have to go before it’s possible to access learning materials for an entire discipline like Mechanical Engineering (to use something I’m familiar with).

(Note that these projects differ from Google Books or OpenLibrary or even pioneer Project Gutenberg, which all simply collect books without linking them together or providing facilities to create new books or edit current ones. Both kinds of project have their place.)

But there’s more to the problem than getting a thousand engineers to write a thousand books and calling it a day. When we say we want open content, that’s not enough. The content has to be written for a purpose, and needs to be written differently depending on what it’s being written for. If we could imagine the ideal case where everything we wanted to know was linked through a giant library, how would we be using that library? I break it down into three categories: learning, reference, and research.

The similarity between learning and reference is much greater than between reference and research. (Indeed, I’m not even sure about the “research” layer at this stage; more on that later.) Much of their content could even overlap. But whereas a reference book will be explicit and terse, a learning book will have analogies and examples and tutorials, and may very well skip the detail that makes a reference book what it is (dry and boring — no, I jest).

But remember that we’re no longer talking about books any more. This information would exist in “blobs” in the library, to be chained together in whichever order made sense for the application. Control theory is widely applicable over at least mechanical, electrical, and chemical engineering, but the teaching methods between them can vary considerably. Similarly for the more fundamental maths that underpins the more rigorous engineering subjects.

And this chaining, I feel, is one of the fundamental advantages of a central store of information. Places like Wikiversity might have modules that are related to each other, but the best they can hope for is a cross-reference to link them together. It’s impossible to reduce science into such small pieces of “things to know” that they can be placed in a linear fashion and be absorbed all at once. There are branches, dead-ends, intersections, and circular loops that defy any canonical reference. For different applications, different references need to be written. By chaining blobs together, not only can material be re-used efficiently, but consistent terminology can be used across all scientific disciplines.

Greater abstractions can only be built on top of steady foundations, and as more and more becomes known about the world, we’re approaching the limits of what we can learn in the four or five years we’re given as graduate researchers. And this is where that “research layer” I spoke of earlier comes in. Every new research student, guided or not, will follow a literature trail in the subject of their thesis. Their evolving bibliographic database is a representation of the “information space” they have mapped through the research they’ve managed to find, and they’ll proceed to carve out their own little niche in that space.

I’ve observed in my own research that my literature search is never complete. And it’s obvious, reading others’ papers, that theirs never is either, when you find similar papers published years apart. Sometimes all I want to do is catalogue as much research as I can find, and this is where the seeds of an idea come from: a framework for documenting ongoing progress. Why should two researchers working on opposite sides of the world have to replicate each other’s journeys in finding work in their field that was done years before?

I’d like to see the “literature review” as a giant web of cross-references, which differs from a reference library in that old work won’t be forgotten, exactly, just hidden away behind the newer work that encompasses it. When a new research book is written, it can cover years of work in a field for which those papers are now, in a sense, obsolete. This resource would allow “forward linking” for random papers that you stumble across, so that you can easily follow what research might have come out of that work. And if none — is there scope for more research?


All of these ideas have been glommed together over the last while as I’ve had time to tack them together. The concepts are muddy in my head and I’m not even sure how feasible this project is. Perhaps it’s impossible. Probably it’s impossible, at least today. I’ve got many more ideas and details in my head, but I’ll let them ferment for a little longer.

2007/09/30

iPhone complaints complaints & Microsoft’s platforms

Back in January I wrote a few things about perceived criticisms of Apple’s then-unreleased iPhone. As an aside, at the time I wrote:

I’m predicting that when this thing’s released, or thereabouts, Dashcode will be able to create restricted widgets for it. (By “restricted” I’m saying no Cocoa.)

And I was clearly wrong on that part (I don’t even count Apple’s “web app” development platform as falling into my prediction above, despite having the same spirit).

I’m no longer running tracking software on this website to see how many people read what I write, but that piece has obviously been my most popular, with occasional comments even to this day. They range in tone from offensive statements that I can’t really understand to people with valid things to say (and thanks for that, to those who’ve written).

Today’s comment deserves a reply:

Yawn.

This doesn’t even come CLOSE to what the Windows mobile devices are capable of.

This is true, and yet, the iPhone is way more popular now than a Windows mobile device has ever been. And the Windows mobile platform has been around for years.

In the same way that their tablet computer never really took off, I think Microsoft’s problem is that they build feature-rich, flexible platforms but, by the same token, never have a compelling hardware/“killer app” reason to really engage their customers.

Tablet computers, by rights, should be taking the world by storm right about now. The hardware is mature and we’re in a pretty sweet spot with fast and cool processors from Intel that also yield excellent battery life (compared to past models). Looking to the near-term future, new display technology (LED backlighting and OLED displays) and flash-based storage will provide significant advantages over what is available today.

Microsoft has seemingly performed miracles with its handwriting recognition software, and being able to sketch out diagrams — and other free-form input — wherever the need takes you is obviously the major advantage that pen and paper or chalk has over a traditional laptop. And yet, no-one’s buying them? How are the sales figures? How are the prices? (I guess that’s the most important question.) It makes me wistful, because I really would like a tablet myself, but I’m obviously not going to buy a Windows computer.

Personally, without having used one, I’d guess that Microsoft hasn’t gone far enough in developing an interface for general computer use that really takes advantage of the fact that you’ve got a huge touchscreen to connect with the data in front of you. It’s not like Windows is that great to use anyway (cheap shot), but there’s untapped potential there.

Coming back full circle, as with the iPod vs. various PlaysForSure-based media players, it’s not about the features but about the interface. This point has been made by various Mac-biased writers for months now, so I’m adding nothing to what’s already been said.

And finally, going back to that original comment: yes, if you’re happy with the Windows mobile platform, it’s hard to argue. Expandable storage, a legitimate third-party software development community, more, more, more features — it’s not for the everyman, but it sure could be for some.

2007/09/29

Why typography?

When I try and explain to people that my hobby is typesetting or typography, it’s rather hard to justify. Especially to engineers who might not appreciate the æsthetic reasons in the first place. (I kid.)

Now, I’m not going to attempt such a justification here besides saying that my primary reason is that it helps people. But being interested in typography in the first place comes from some part of my, ahem, soul that tries to cling to the idea that perfection should be achieved where possible, simply for the sake of doing so — and the subsequent benefits will be evident.

Well, here’s some small consolation for me. A while back, Amar at UIScape referenced a research article looking at the effects of “fine typography” (not macro-level typesetting like linewidth and fonts, but rather micro-details like kerning and ligatures), which found that while reading speed, comprehension, and even preference between the two samples were equal, significant effects could be found in other areas:

[P]articipants turned out to frown less, and could therefore be said to have been “happier”, when reading text with the enhanced typography. […] [P]articipants who read text with good typography did perform better on [creative problem solving tasks after they had done the reading].

I find this amazing and it pleases me no end. I hope that more studies like it will corroborate their findings, but for now I can confidently state: “the documents I create will make you happy”.
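(If you’d like to experiment with the micro-details yourself, this is close to a one-liner in LaTeX these days. A minimal sketch: the microtype package switches on character protrusion and font expansion, while kerning and ligatures are already applied by TeX from the font itself.)

    \documentclass{article}
    % Micro-typographic refinements: character protrusion and
    % font expansion. Kerning and ligatures come from the font.
    \usepackage{microtype}
    \begin{document}
    This text benefits without any further markup.
    \end{document}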

2007/09/28

My first steps with open source licences (& LaTeX)

I finally got around to learning a little bit about open source licences the other day. The whole premise seems easy enough: I write this code and don’t put restrictions on other people’s use of it. But the devil’s in the details, and there was a lot to get my head around at first. This is a short summary of what I’ve learnt (or, at least, what I think I’ve learnt).

First things first: it’s a Bad Idea to make code public that doesn’t have a licence. You will be legally responsible, theoretically, for any bad things that happen as a result of others using that code. Secondly, it’s Not Possible to release code “into the public domain”, although many people claim to do just that in an attempt to obviate their copyright responsibilities. Copyright is automatically assigned, and it’s legally murky ground to attempt to get around that (how successful the attempt will be varies from country to country).

It’s easy to say “well, my code will never be used by anyone else anyway, so it doesn’t matter if I don’t release it with a copyright licence”, but that’s a little short-sighted. It wouldn’t be public if you didn’t think that anyone would find it useful, and if someone wants to re-use what you’ve written, the absence of a licence will prevent them from doing so, even if you’d like them to in principle. Furthermore, the absence of a warranty (again, theoretically) could get you in hot water if things turn out poorly due to an error on your part. So free code must be licensed.

The question is then “which licence to use?”. You wouldn’t think this would be such a problem, but there are heaps to choose from and many of them are quite similar. Making a good choice without knowing the details is more a matter of luck than anything else. Over at Google Code Project Hosting, they’re trying really hard to restrict the number of open source licences around by only offering a small number of choices for the projects they host; a laudable goal. And yet their list is still eight deep. Even if you want people to use your code essentially without restriction, there are three to choose from: the BSD, MIT, & Apache licences. Which to choose even in this simple case? I’ll discuss their differences five paragraphs hence.

There are three broad classes of open source licence that can be summed up by three specific “best practice” examples: the GNU General Public Licence (GPL), the Lesser GPL (LGPL), and the Apache Licence. The GPL is probably the most well-known and popular free software licence: it requires that the work be distributed with its source code and stipulates that derivative works also follow the GPL. This ensures freedom at all costs, at the expense of flexibility; you’ll never see GPL code turn up inside proprietary products (illegal exceptions notwithstanding).

The LGPL was written to allow proprietary software to use the functionality of GPL-like free software without having to open the entire product. A library under the LGPL can be used in a closed product without having to open the source for the whole project. I won’t really consider this class of licence too much here (the Mozilla Public Licence is similar). Suffice it to say that it’s a slightly more liberal licence than the GPL for certain types of software.

Finally, the Apache licence is a model example of a licence that lets you do pretty much anything you like with the code. Not only is the code free, but it can be re-used wherever you like, under whatever licence you like. There’s an obvious tension between a “copyleft” licence like the GPL and an Apache-like licence: for the former, the code is free and will always be free; for the latter, the code is free but someone might take it, improve it, and lock it up — which doesn’t help you any, but you do allow it.

I’m in the Apache licence camp more than the GPL camp: I’d prefer my code to be maximally useful to as many people as possible rather than restrict its use in order to ensure that it will “always be free”. Of course, if everyone used the GPL then that wouldn’t matter, but that’s simply not going to happen. I might change my tune if my code were more directly usable in commercial products, however. I can certainly see the idealistic appeal of the GPL. (While I’m on the matter, the GPL recently had some major changes made for v3.0, and it’s apparently rather controversial. I don’t understand the whole matter at this stage, so I’ll leave the intricacies of that licence for another time.)


If you don’t want to choose the GPL, for similar reasons to mine, let’s revisit the question “which licence to choose?” and discuss the differences between the various (popular) Apache-like licences. The distinctions are subtle, but there are valid reasons for choosing between them. As mentioned, the big three are the BSD, MIT, and Apache licences, where the latter is a later and more formal extension of the ideas in the other two.

The MIT licence is the simplest: you can do whatever you like with the code (distribute, sell, modify, relicense), provided that “The [ … ] copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.” Even the text of the licence itself can be changed.

The BSD licence adds one condition on top: “Neither the name of the [organization] nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.” Sounds sensible to me.

The Apache licence is the one that I’ve been implicitly endorsing, having used it as the “best case” example at the beginning for these “no restriction” free licences. I’m really taking a cue from Greg Stein of Google, who says:

That is one of the reasons that Google chooses the Apache License (2.0) as the default for the software it open-sources. It is permissive like BSD, but (unlike BSD) actually happens to mention the rights under copyright law and gives you a license under those rights. In other words, it actually knows what it is doing unlike some of the other permissive licenses.

(Not necessarily an unbiased comment, I have to admit; he’s also Chairman of the Apache Software Foundation.) The additional terms in the Apache licence (over BSD & MIT) require changes made in modified works to be prominently marked as such. I like to think of such measures as “enforced politeness” — it’s not like people won’t be doing this in general anyway. I believe that the Apache licence itself cannot be altered, but I don’t actually know for sure.


Finally, the reason I got into all of this is the various bits and pieces of LaTeX code that I have written. They’re licensed under the LaTeX Project Public Licence (LPPL), which is different again from those I’ve already discussed above. It’s pretty interesting, and I think it deserves a little attention. (Disclaimer for the link above: at the time of writing, some of that Wikipedia page was written by me.)

Because LaTeX code almost always defines a document syntax (it’s a programming language of communication, essentially), it’s pretty important that things don’t change meaning without warning. I want a document that is typeset on my machine to be exactly the same on your machine under reasonably similar circumstances. So while LaTeX is free to modify and distribute, the licence doesn’t allow people to take the code and alter it without potential users knowing that it’s not canonical. This follows the original licence of TeX itself, probably the earliest piece of free software still in use. (According to Wikipedia, Emacs was first released in 1984; development on TeX started in 1977, but the version most similar to the one we know today was released in 1982.)

To try and formalise TeX’s licence, the LPPL allows modification and distribution only under the proviso that the user is made well aware that they’re using a modification to that work. This is usually done with a change in name, but technically speaking minimal conformance could be achieved (and strongly frowned upon) simply by printing out a message on the console stating that the package you’ve loaded isn’t the original version. A good example is a conference proceedings document class, for which you certainly don’t want someone changing the margins or fonts without calling it something different!

So if only the copyright holder is allowed to make changes to the code without changing the name of the package, what happens if the original author loses interest in or can no longer work on the project? The LPPL also defines the concept of a project “maintainer”, who may make public changes to the work with the authority of the copyright holder. You can become a maintainer of a project either by being bestowed the title or (when the previous maintainer cannot be contacted) by announcing publicly your intent to start maintaining the code; maintainership falls to you after three months if your claim is uncontested.
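For a feel of what this looks like in practice, here’s a sketch of the notice that typically sits at the top of an LPPL-licensed package (the file and author names here are invented; the wording follows the licence’s recommended notice, including the maintenance status just described):

    % example.sty
    % Copyright 2007 A. N. Author
    %
    % This work may be distributed and/or modified under the
    % conditions of the LaTeX Project Public License, either
    % version 1.3 of this license or (at your option) any
    % later version.
    %
    % This work has the LPPL maintenance status `maintained'.
    % The Current Maintainer of this work is A. N. Author.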

None of this changes the problem of ensuring backwards compatibility in packages, but it goes a long way towards ensuring that documents remain portable into the foreseeable future. This is a laudable goal when compared to the philosophy of “closed source” document programs like WordPerfect or Microsoft Word, whose old files are sometimes now unreadable.


Now, in my explanations above I have omitted many specifics in order to get across the ideas of the licences I was talking about. Diving too deep into the legalese makes it impossible to get a broad enough picture of each licence to be able to compare them. Obviously, I am not a lawyer and my terminology could be improved, but I hope that I got the gist across. (Also, I hope that I’ve understood it correctly myself!)

I’m using the British spelling for licence and license here (for noun and verb respectively; cf. practice & practise — I remember these rules by the mnemonic “ice is a noun”). When I talk about licences above, I’m referring to their current versions: 3-clause BSD, 2-clause MIT, LPPL v1.3c, Apache v2. One day I might understand the difference between GPL v2 and v3, but not at the moment.

2007/08/18

‘Perfume’ by Patrick Süskind (1986)

I have been receiving suggestions to read Perfume for about ten years now, I reckon. Recently it was published as a Penguin Red Classic and I grabbed a cheap copy to add to my huge list of books to read. I actually managed to fit it in on a flight between Brisbane and Cairns a few weeks ago on my way to the ICSV14 conference. That’s right. It was so damned good I devoured the whole thing in one go. To be fair, I read most of it on the flight and then finished it off when I got to the hostel. As far as I can remember, it’s the longest book (296 pages paperback) I’ve read in one sitting.

I loved this book, and for totally different reasons than I often love books. It didn’t contain any characters that I found particularly likeable, nor any whom I could even empathise with. And the actions of the characters were never noble nor life-affirming. This isn’t one of those books like ‘Peter Camenzind’ (Hermann Hesse, 1904; my thoughts forthcoming when I re-read it one day), for example, where the life of an everyman unfolds before your eyes and connects you with humanity as his character advances spiritually through life. Despite all of this, there is an uplifting catharsis that arises quite unexpectedly (to me), which is quite unique for the genre that — on the surface — this book appears to fall into.

But the world that Süskind creates is simply amazing, and the construction of the narrative is simple and clever. There are no loose ends and no logical gaps in the story. In short, a ‘perfect’ novel. And while I wish I were the kind of person who could untangle the themes within and elucidate them now, I must be content to bask in them. I guess that’s why I’m trying to write here, so that I learn to express my own reactions to things.

I was talking with Toni the other day about the movie that has recently been made of this book. I haven’t heard a single good reaction to the movie, and I’m fascinated by the fact that it was even made. The book spends much of its time in a world that can’t really be shown on film, and so while the superficial ‘action’ of the story could obviously be shown, I’m baffled by how the motivation of essentially the only character could be portrayed. Or how someone could even approach the problem. So I would really like to see how the book was adapted.

Now, I just spent a couple of minutes reading the reviews at IMDB and the reactions there seem rather good. So that gives me more motivation to actually check it out. Because I do love movies, after all. Stay tuned.

2007/08/12

‘Places like this’ by Architecture in Helsinki (2007)

I don’t write much about music, because I don’t really know how to put the words together. Many reviews I read assume that the reader is familiar with the music itself, which isn’t necessarily what I expect from a review.

I first became a fan of Architecture in Helsinki after being gifted their debut album, ‘Fingers crossed’, shortly before seeing them live. This must have been 2003, I think. Until recently, the band had nine musicians who all played different instruments to one degree or another. The first gig of theirs I saw in Adelaide had them crammed onto a tiny stage in the Jade Monkey with many more instruments than band members, with hardly any space to move, let alone swap instruments halfway through songs. They are certainly an eclectic lot. I really don’t know how to describe their music. Lots of energy and lots of instruments almost chaotically thrown together, with vocals primarily provided (often in falsetto) by their lead Cameron and taken over (and stolen) by partner-in-crime Kelly.

Their first album was a sweet, smoothly-produced and catchy number; some time later, their second album, ‘In case we die’, was a much bolder expression of their energetic and unusual sound, I guess. That album was less one that you could stick on as background music, but it captured better who they were as a band.

Around the same time, they toured Europe and America and, I’m guessing, became a lot more popular. (Well, Sven-S. Porst likes them at least. That’s my one and only data point for popularity outside Australia!) Since then, they’ve lost two of their musicians who had more of a classical-instrument bent, and just released a third album, ‘Places like this’. And it’s my favourite album so far. At a touch over 30 minutes, I would like it to be a song or two longer. It continues the trend started in ‘In case we die’ of louder, punchier sounds. Cameron is crooning less falsetto and living it up a bit more. The songs are more catchy, the enthusiasm more unbridled, and the album more consistent. With a solid touring history behind them now, they’re much more guitar-based in concert, and they’ve never been better.

I love you, Architecture in Helsinki.

2007/08/11

Out of practice

When I’m in that state of having nothing to read but not enough motivation to do work, I really need to spend the time writing rather than searching for more reading. For example, I just read in the New Yorker that the olive oil industry is rife with counterfeit oil; it is often cut with sunflower and soya oil (and sometimes treated to mask the flavour of the offending additive) for, inevitably, greater profits. That makes me wary about oil, I guess. Oh well; in cooking, I can hardly taste the difference anyway.

Now, the New Yorker is great. I bought a paper copy in an airport a few weeks ago to gauge whether I prefer it in print or online, and online wins hands down. It’s a whole magazine of current affairs and articles of interest, which can vary from fascinating to completely off-wavelength. Buying the print version gives you a good mix of both. But online, it’s easy to skip the chaff, and this makes it a much more valuable reference. Of course, I’m a huge fan of keeping articles I like in softcopy for future reference, and hardcopy just clutters up a garage in the end.

But undirected reading is hazardous to my time. I’ve hardly got time to do the dishes these days, so why should I spend time reading about how some olive oil isn’t just made from olives? Even worse, RSS readers transform collecting reading material into an imperative task: 45 unread news items! What have I missed? On the other hand, stopping by newyorker.com every week or so can easily be skipped. But my will isn’t strong enough to avoid checking my news in RSS, and I dread avoiding it for a week and coming back to hundreds of items that just might be interesting enough to make me sift through the whole lot.

I’ve cut down a lot recently, you must understand. On a typical day, I’ll only have a few tens of articles to read or links to follow, max. This takes me less than half an hour, I’d guess, to wade through and discard those that I don’t feel inclined to consume. I haven’t measured it, really.

And back to my first point: I’ve become out of practice at writing here myself, although I have been writing more about my actual research. (The thesis is very far from complete, however. It’s early days yet. But don’t tell anyone!) The pity is that I really like writing. If I forced myself to write every day, I’d be a lot better at going on at length in an interesting way — and of course my ego thinks I’ve interesting things to say in the first place (although I’d be inclined to disagree on occasion). But my interests can be rather myopic at times, for others at least, and I’d rather not harp on about news that is transitory at best and of dubious interest at worst. (Hey, did you hear there are new iMacs? They’re cheap and pretty and great! I will probably buy one in a couple of months!)

So here’s to my literary career. Ahem.

2007/05/06

"Kafka on the Shore" by Haruki Murakami (2005)

I was introduced to Murakami by a good friend a few years ago after their return from living in Japan, and Kafka on the Shore is the third of his books that I’ve read. As always, it took me the better part of the novel before I was absorbed by it, and finished the second half in short time. In contrast to A Wild Sheep Chase and Dance, Dance, Dance, which mainly followed a single character, the narrative of Kafka on the Shore is spread over several characters who are equally dominant in the themes of the novel. Broadly speaking, it’s a coming-of-age story as we travel with the characters on their respective journeys. But such a description doesn’t do it justice.

Murakami has captured that style of writing that I associate with J. D. Salinger and F. Scott Fitzgerald, who are credited as his influences. I might not have drawn the connection so strongly if I hadn’t read about it, though. I think it was a quote on the back of one of his novels that said something like: he creates poetry in writing about the mundane. I’ve never read a better description of his writing. These aren’t the expositions of the intelligentsia who reflect abstractly on the meaning of their life, in the style of, say, Hermann Hesse (my favourite author ever); rather, Murakami’s work is composed of the small details of his characters’ stories. They don’t strive or battle, they just live, and it’s such a base connection that allows us to empathise and achieve enlightenment with them.

The other hook into Murakami’s work is the incredible surreal environments he places his stories in. From my previous paragraphs, you might assume that his novels are set in a reflection of the world we live in, and that its familiarity provides the context for being drawn into their environments. Well, that’s not entirely the case. Murakami’s reality is indeed a reflection of ours, but a wonderfully expanded version of the universe we live in. At the same time, the unremarkable way he presents his surreal worlds makes them eminently believable. The transition between his world and ours is totally seamless.

There’s nothing really to say about Murakami that hasn’t been said before. Kafka on the Shore continues his tradition of stories with deceptively simple storylines as we follow his characters through a surreal version of his Japan. Profound experiences are had by all involved. Including the reader.

2007/04/08

As tight as Kubrick

Justin disagrees with my taste in movies sometimes. And he invites me, who knows nothing, to discuss the following:

Last night we talked about the scene in The Shining where Wendy witnesses two men having sex, one of them in a bear costume. This scene makes no sense in the film; it is impossible to understand without having read the setup in the novel. Do you really think the inclusion of this scene constitutes tight editing? Separately, do you think that the star gate sequence in 2001 constitutes tight editing?!

Perhaps my use of the word “tight” in my previous writing about my first viewing of Paths of Glory isn’t quite right. I would probably prefer the word “perfect”, but that’s a word with less meaning in the context of something that requires subjective measurement. So, to broadly answer the questions as stated, I’d have to say yes and yes, but I suppose I need some just[in]ification for that position.

Even more than really intelligent movies, my favourites are the ones that have a strongly visceral effect on me while I watch them. It’s not so much about the details, but about the feel.

More than any other Stanley Kubrick movie, The Shining dramatically improved for me with repeated viewing. I’m not sure why. When I first saw it, I wasn’t totally blown away; Shelley Duvall’s acting might have been a bit of an influence in that, and the exaggerated accompanying score. But I think the more you watch a movie, the more you absorb the feeling of it, as you pay less attention to the details in front of you (like the dialogue or the acting) because you’ve seen it before and you know what’s going on.

This particular idea was made especially clear to me after watching Amélie a few times; I ended up not even reading the subtitles any more, despite speaking no French, and just went along for the ride.

Now, I find it a little strange to call The Shining out for the incoherency of its scenes, considering the subject matter of the film. I haven’t read the novel and I’ve got no idea what the significance behind the bear suit men having sex is. (1:41.54 into the movie, by the way.) But why is it being “impossible to understand” such a bad thing? I don’t really understand most of the weird stuff going on in the movie — and it doesn’t affect the feeling the movie has. I’m gratified that there is thought behind the madness, because it means that Kubrick didn’t just make up something random and stick it in. There is an internal consistency that might not be visible to the casual viewer but which ties everything together. I guess I often call that the “texture” of a movie.

(A film that exemplifies the whole idea of scenes not making sense but the whole movie having a big impact (and with its own internal consistency that is hard to find) is Mulholland Drive.)

So, is it “tight” for The Shining to have this scene that makes no sense? Conversely, would you say that it wasn’t a good choice to leave that scene in there? In actual fact, it’s only the ending few seconds of the scene, which could easily have been cut out. And there’s a scene a couple of minutes later with a similarly meaningless hallucination. Without trying too hard to put words into the mouths of Kubrick and Stephen King, I’d say that cinematically the random appearances work to increase the scare factor (and, internally to the film, to increase Wendy’s bewilderment), and thematically to show that it’s not just Jack who’s gone crazy; there’s something weird about the whole place. If you took out the bear-man sex (and the other random guy who turns up a few minutes later), you’d be losing a particular point that the movie was making, in my opinion.


2001 is the previous argument magnified. The whole movie is essentially only about the feeling. And the climax of the film is the star gate sequence.

Here’s a quote from John Gruber, from Hivelogic Radio earlier this year (six minutes in). He verbalised, and made me realise for the first time, why I like long and slow movies:

The whole problem talking about [2001] is that the point of it isn’t something you just say “oh here’s the point”; the movie itself is the point: it’s the way that it makes you feel. That’s the way when I was a little kid that I used to feel about all the movies I watched that were movies for adults, movies that weren’t really just kid movies. I’d watch it and be like “ah, I don’t really get this, I don’t understand it, it just gives me a feeling”. I think that’s how 2001 works, even for an adult, it’s more about how it makes you feel.

Personally, the star gate sequence is an example of a cinematic element that I love but that I know many people can’t stand: where long scenes contain only abstract imagery, and the sudden absence of narrative sends my mind into a reflection of everything I’ve just seen. I find it a unique experience when a movie has been filling my head for a couple of hours and then abruptly stops, leaving my mind to coast along in the direction it’s been pushed. After trying to encompass the whole feeling of the movie in a protracted instant, my mind then empties and there’s nothing left to fill the gap.

Less slow-minded people probably get over it in about five seconds and then get bored, so I can understand the lack of universal appeal. I also don’t know if that’s what the filmmaker is trying to do; I should get someone who actually knows about film production and editing to tell me one day. This is the reason, also, that I like to watch movies until the credits finish rolling, though it doesn’t always work at emulating the experience I described above. Sometimes it does, particularly when the music has been well chosen, and it’s just as good provided my companions don’t immediately stand up and walk out of the cinema.

Could the star gate sequence be half as long? Probably. Twice as long? That’d be an awfully long time. Could it be only ten seconds long? Probably not. There’s no way of pinning down an exact period of time that such a scene should extend for, and given the very slow pace of the movie as a whole (it’s only one-third dialogue, after all), I think it’s appropriate as Kubrick cut it.

So we started talking about tight editing and ended up with a vague and possibly pretentious discussion about how movies make me feel. Did I answer your questions, Justin?

2007/04/02

EMI rocks *and* rolls iTunes

Michael Gartenberg seems to have the scoop (is he allowed to, given his job?): “Apple and EMI have announced that they will be selling music without digital rights management”. But there’s more: “albums will be DRM free, have the higher sound quality but will remain at the same price point as current albums. Format is AAC and encoded at 256kbs”. That’s awesome.

[Update: seems I just missed the press releases and my weird Australian time zone means the press hasn’t caught up yet. Some tidbits from Apple’s announcement:

  • It’s not happening until May, but it’s worldwide.

  • There’ll be a “one click” upgrade button that’ll give you the enhanced tracks for the difference between the original price and the premium DRM-free price for singles. No word on whether this will upgrade whole albums at no extra cost. Hope so.

  • Steve Jobs: “We think our customers are going to love this, and we expect to offer more than half of the songs on iTunes in DRM-free versions by the end of this year.” Read: “You’d better buy the DRM-free tracks or we’re screwed. But if you do, a couple of the other labels will follow suit!”

  • EMI music videos will also be unrestricted. That’s great. I haven’t started buying music videos for a couple of reasons, but I love the idea of having my own playlists of them.]

To provide some context on the whole issue, Apple currently sells music from the four major record labels in the world, plus countless independents, through its iTunes store. The music it sells is relatively low quality, and comes with copy protection that precludes (a) sharing your iTunes-bought music library easily with your friends, and (b) playing your iTunes-bought music on any portable music player except the iPod. This copy protection scheme also significantly increases the chances that your music won’t play in fifty years’ time, if you still have copies of it.

Most other online music stores have similar restrictions, with a notable exception. The second-largest online music store is emusic, which sells unprotected music (at better quality than iTunes) from a large selection of labels, excepting the big four (or thereabouts). I’ve been meaning to investigate these guys for a while, because I like buying things online for the instant gratification, and I’ve heard good things about their catalogue.

So I’ve been buying music from iTunes for a little while, not particularly fussed by the problems outlined above. The low quality was more of an issue for me, frankly, but that’s probably because I like Apple in general and don’t mind the iTunes/iPod lock-in. And I wasn’t aware of the quality issue in day-to-day use — the music sounds fine on my stereo and of course on my iPod — it was just that I knew in principle that if I hooked up a really nice system and looked for the difference, I’d be able to pick it.

Having already lost one music collection on MiniDisc (way too tedious to transfer my music — recording the audio stream in real time onto my computer), I guess losing another set if my iTunes music became similarly inconvenient doesn’t bother me too much. After all, you usually buy music before you know if it will be in your “top 10 of all time” list, and you never listen to everything in your collection with the same gusto. Well, in my case at least.

But that’s just me justifying it. The quality and longevity of iTunes-bought music have been the biggest problems with moving to the new medium; why go backwards from CD, which works so well? So in one swoop, this pairing of Apple and EMI shows the rest of the music industry that this business model can work, and it should only be a matter of time before a lot more unprotected music turns up on iTunes. In fact, it should only be a matter of months before the independents follow suit (since they generally sell the same music, unprotected, on emusic anyway).

Very good news. Now: when do the other labels come on board, when does iTunes become more (or truly) worldwide, and when does the same happen for video?

2007/04/01

John Searle, misguided philosopher?

It is unusual to run across a mention of John Searle while reading Scott Adams, because the only other place I’ve heard of him is in philosophical arguments dating back to the eighties.

In that context, he argued against the “Turing Test” for evaluating artificial intelligence. The test goes that if you can’t distinguish between a computer and a human in a text-only conversation, then the computer must be intelligent. The actual test, in my opinion, is meaningless, because humans can trip the computer up by escaping into the real world — which the computer can only keep up with if it has a similar “life experience”. (And I believe it was never proposed as a formal test of intelligence, just as a thought experiment, so I’m not arguing against Turing himself.)

For example, adapting an example from Hofstadter, asking the question “How many syllables does an upside-down M have?” requires that the computer knows about the shapes of letters and geometric properties like rotation, plus knowledge of the sounds of the names of letters. At this stage, the computer either needs this information given to it a priori by its inventor, a scheme which would never work in general for creating an “intelligent machine”, or actual understanding of such things. And the latter requires eyes and ears for interaction with the real world, at which point you’re looking more at a robot — and then consider problems of questions about the feeling of bungee jumping or eating too much Indian food. In essence, your computer needs to be able to fake knowledge of such things, in which case you’re sure to be able to trick it eventually, or you need to build a replica human, which isn’t the point of the exercise — we want an intelligent computer, not an intelligent humanoid robot (although that would be cool too, obviously).

John Searle’s objection to the Turing Test lay along quite different philosophical grounds — that computers can’t think:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn’t, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don’t understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don’t understand what they’re ‘saying’, just as he doesn’t.

(from Wikipedia, which seems a bit confused later on, stating

The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

— but if there’s no contradiction there, it’s too lazy a Sunday for me to work out why.)

Obviously, to me, the “understanding and thinking” of the whole system must incorporate the actual rules that are being followed, rather than just the mindless object executing the rules. The fact that blindly following the rules allows the Chinese Room to pass the Turing Test (by assumption, no less) implies that the rules know a hell of a lot. My argument goes: either humans don’t understand anything, and neither can machines; or humans have understanding, and so can machines.

Anyway, I don’t like these sorts of arguments, because there’s so much terminology invented to argue about that the ideas behind them are a little obscured. That’s probably my layman’s excuse, though.


Back to where I started this whole thing. Scott references an interesting article:

[John Searle] is puzzled by why, if we have no free will, we have this peculiar conscious experience of decision-making. If, as neuroscience currently suggests, it is purely an illusion, then ‘evolution played a massive trick on us.’ But this ‘goes against everything we know about evolution. The processes of conscious rationality are such an important part of our lives, and above all such a biologically expensive part of our lives’ that it seems impossible they ‘play no functional role at all in the life and survival of the organism’.

Scott then says:

Is it my imagination, or is that the worst argument ever? […] The illusion of free will helps make us happy. Otherwise, consciousness would feel like a prison. Happiness in turn improves the body’s immune response. What more do you need from evolution?

Well, I’m not sure if the link between happiness and immune response is direct and low-level enough to be used as a great argument in this case. Also, the argument might have been made worse after filtering through journalism-speak. But it does seem like a pretty poor argument. It is much more likely to me that the illusion of free will arose as a by-product of our ability to think ahead and think of ourselves in future situations — that particular skill that turned us from monkeys into bloggers. (Not that the difference is particularly noticeable in some cases.)

It’s not that we choose different paths of action based on what’s coming up; rather, we take what’s coming up into account in our actions. This means we had to have a symbol of “self” in our brains, which is most easily mapped to an “I”; and so we have consciousness. Since there are “choices” about what to do next, which involve our brain-symbol “I”, the illusion of free will arises because one of those choices gets chosen. It’s just that we have no control over which choice is taken, because that’s entirely determined by physics (in the “no free will” argument).

Clear as mud. Well, to John Searle.

2007/03/25

“Paths of Glory” by Stanley Kubrick (1957)

I haven’t seen as many movies as I would like. In fact, I own several movies on DVD that I just haven’t got around to watching yet. On the rare occasion that I find myself with a free evening and no objections (the movies I most want to watch aren’t universally appreciated, for some reason), I’ll sit down and actually watch one of the many, many movies that I haven’t yet had the chance to see.

One of the few Stanley Kubrick movies that I hadn’t yet seen, Paths of Glory sat in shrink-wrap for a few years before the opportunity to watch it presented itself tonight. I should have watched it earlier, of course. Kubrick is known for the tightness of his movies (length and tightness needn’t be opposites), and this one is no exception. It’s interesting to reflect on these earlier pieces of his, where his style is already unmistakable but his infamous perfectionism isn’t quite as blatant.

This film is billed as “one of the greatest anti-war films ever”, and while that statement does sum it up quite neatly, it doesn’t give enough context to describe what the film’s about. The feel of a war movie strongly echoes the war it covers. While more modern films such as Full Metal Jacket or Apocalypse Now begin in some sort of reality and descend into situational insanity, Paths of Glory takes a tiny look at war in the trenches and focuses on the futility of the whole situation. Two groups of people facing each other with guns and nowhere to go just can’t resolve things from the inside. But trying to escape the situation isn’t going to work either. Not insane. Just frustrating waste.

One of my favourite experiences is when the credits roll and all you can do is sit in silence thinking about the movie. Too often reality intrudes when I do this, but it’s really a moment to be savoured. It’s not any one part of the film, like great cinematography or a well written script. It’s when the movie evokes feelings that resonate past when the film ends, and you wish everyone could just share in that moment.

2007/03/14

I've never been so busy…and I'm rambling…

I’ve been so busy recently that my sleep regulator went unstable. But it’s my fault; I’ve got no excuses after a long weekend. I find the effects tiredness has on my mental state quite interesting, particularly the amplification of grumpiness. I find being grumpy fascinating, in that I acknowledge that I’m being completely unreasonable in trying to blame everyone else for my troubles, but try to justify it to myself anyway. And after blaming people around me for a while, I become depressed; it’s a well-worn spiral.

Luckily for me, my depression doesn’t last. So here I am, two hours late for uni already. MarsEdit has been updated so I can post things to Blogger again without having to go through their web interface (mostly why I’ve been quiet here recently, besides generally having no time). There’s a weird old Italian man in my shower, taking it to bits and putting it back together again, on account of an interminable drip, you see. I can’t understand a word he says, but he’ll be there for a while.

I’ve got this experiment going on at uni that totally isn’t working at all. And it needed to be working a couple of weeks ago so I could write a paper on it. I’m kind of screwed, because the deadline is in two and a half weeks. Not a good time to be sitting on the couch at 11 in the morning in my dressing gown. Oh well, in the whole scheme of things it’s not that bad. After this paper is done, I’m going to start my thesis. And my personal theory is that writing well takes practice, so it would be a bad idea for me to neglect this website any more than I have been doing.

To finish off, here are some things I’ve learned recently that I found interesting. Pterodactyls weren’t dinosaurs; the word is used loosely for any of the flying reptiles (pterosaurs) that lived alongside the dinosaurs. They ranged in size from a 20cm wingspan up to something like ten metres. Ten metres!? When we can build aircraft that fly as efficiently as that, I’ll be happy. Ever heard those amateur model helicopters that are less than a metre long? Damn they’re loud. I want a mechanical pterodactyl.

So it turns out that acupuncture has some sort of scientific basis behind it, which I found very gratifying while having it done to me. (Terrible neck from too much ’puter.) From what I can gather, “bad” areas along the spine come about from a runaway feedback loop involving the localised nerves and muscles in the area. The first twinge will result in muscles tensing up to protect the delicate nerves/spine/whatever beneath. This is generally beneficial when the spine is behaving normally. But when something in the spine is wrong, such as a vertebra out of place, the tensing of the muscles can lead to further problems. The nerves trigger a signal to protect themselves, which tenses the muscles, which triggers the nerves more, which makes the muscles pull tighter…you get the picture; in the end, the brain is essentially flooding the muscles with “hold tight” signals even when it would be better to relax. Acupuncture resets this somehow, in a process that’s too complicated for me to understand. All to do with the sharp pain of the puncture overriding the slow response of the “twinge-tense” action of the muscle, as I understand it.
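
Just to make that runaway loop concrete, here’s a toy simulation in Python (entirely invented numbers and thresholds; no physiological accuracy claimed): tension above a threshold triggers the nerves, the nerve signal tenses the muscle further, and the loop stays locked until something external resets it.

    # Toy model of the "twinge-tense" loop; all constants are invented.
    def simulate(steps=15, reset_at=None):
        tension = 0.3      # an initial twinge, above the trigger threshold
        history = []
        for t in range(steps):
            if t == reset_at:
                tension = 0.0                        # the "acupuncture reset"
            signal = 1.0 if tension > 0.2 else 0.0   # nerves fire while the muscle is tight
            tension = 0.6 * tension + 0.5 * signal   # the muscle tenses in response
            history.append(round(tension, 2))
        return history

    print(simulate())             # ramps up and locks near 1.25: "hold tight" forever
    print(simulate(reset_at=8))   # identical until the reset, then relaxes to zero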

Anyway, interesting stuff. After the acupuncture, the misaligned spine still needs to be massaged into place. I wonder if resonating the spine (locally) with vibrations could make this easier to fix, either for the practitioner or the patient. If you could excite the twisting mode between individual vertebrae, it should theoretically be relatively easy to push them back into place. But maybe not — you might end up doing more damage than there was in the first place!

2007/02/11

Super (fridge) magnets

A couple of months back, Sven-S. Porst wrote about using neodymium magnets for a pinboard. Having something of a professional interest in the matter (my PhD is about a table that floats on magnets), I meant to chime in straight away about the same type of idea I'd had previously. Well, let's just say I've been distracted and/or busy over the last two months.

While buying some magnets for some experiments a while back, I also bought one hundred 1/8" by 3/8" cylindrical magnets for the fridge. These are great because their length creates a magnetic field that extends relatively far from the end of the magnet (that is, they're strong even though they don't take up much surface area on whatever they're holding up), and they're also very easy to pick up from the fridge. If you've got especially large fingers, maybe the half inch ones'd be better, though.

Now, these aren't your regular fridge magnets. I can stack thirteen of them end-to-end and they still stick to the fridge. I'm amazed how strong these rare earth magnets are, and how cheaply you can now buy them on the internet.

Furthermore, you can buy huge ones, like these one inch spheres. Just be careful of any greater than about half an inch — at large sizes they hurt if your fingers get in the way, and they're also very brittle, so their corners and edges will chip off very easily if they smash together. (Not to mention being hard to separate!) Speaking of smashing, it looks like they've got even larger magnets now than they used to. A 4" by 2" by 1/2" block is just asking for trouble, if you can afford the $60 asking price (!). You could secure furniture with magnets that large. I'm happy enough with sticking objects to the fridge for now, thanks anyway.

2007/01/15

iPhone summary of complaints

Well, there’s certainly a lot of discussion going on about the iPhone. The Apple one, that is. After the dizzying amazement of the keynote and its introduction, it’s now time for the grumblers to come out and complain about the thing.

Depending on which day you read the news, you’d think either that the iPhone will be a fantastic success or that it will be a terrible failure. I’m optimistic about liking it, of course, and the 2008 Australian release date gives me an imposed buffer against buying the first generation of the device.

In no particular order, here’s a summary of the complaints people have had, and my general response to each. I haven’t made an exhaustive search for the various complaints out there, but I think I’ve covered the majority.

Slow mobile data speeds — i.e., no 3G — this seems most likely due to a conflict of interests with Cingular, whose 3G service touts non-QuickTime video and audio purchasing or streaming or something. What are the carriers doing trying to get into the actual content distribution game? They should stick to charging for bandwidth. In any case, Steve Jobs flagged 3G plans himself in the keynote; expect 3G in an early revision of the iPhone. This is especially likely as the iPhone migrates to Europe and Australasia, where 3G is apparently much more widespread than in the US.

Software keyboard — this is a non-issue, I think. This requires some explanation.

Finally, the dubious merits of sticking with the QWERTY keyboard layout in the years since the typewriter have actually paid off. Let me explain. The iPhone has predictive text, like a regular phone keypad. But how inefficient is a numeric keypad design, when so many words overlap on the same input? (home/good, fairy/daisy, golf/hold, among others more comical…) This happens because the statistical distribution of the letters of the alphabet wasn’t taken into account when assigning their positions on the keypad. E.g., a single key shares both S and R, two of the most common consonants in English.
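
To make the ambiguity concrete, here’s a little Python sketch that groups words by the keypad digit sequence you’d type for them. The handful of words is just for illustration; run it over a real dictionary file for the full effect.

    from collections import defaultdict

    # Standard phone keypad layout.
    KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
              '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
    LETTER_TO_DIGIT = {l: d for d, ls in KEYPAD.items() for l in ls}

    def to_digits(word):
        """The digit sequence you'd press to type this word."""
        return ''.join(LETTER_TO_DIGIT[c] for c in word)

    def collisions(words):
        """Group words by digit sequence; any group of two or more is ambiguous."""
        groups = defaultdict(list)
        for w in words:
            groups[to_digits(w)].append(w)
        return {d: ws for d, ws in groups.items() if len(ws) > 1}

    print(collisions(['home', 'good', 'gone', 'fairy', 'daisy', 'golf', 'hold']))
    # {'4663': ['home', 'good', 'gone'], '32479': ['fairy', 'daisy'], '4653': ['golf', 'hold']}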

By contrast, the QWERTY keyboard was designed to have adjacently used letters (statistically) at least two keys away from each other — because typewriter mechanisms would jam if two adjacent keys were pressed near simultaneously. (Note that comparisons with the Dvorak keyboard have shown that the QWERTY keyboard is no slower than any other design; it just makes your fingers move more, thus making its users more prone to RSI-like problems.)

Now consider the iPhone keyboard. Each press you make, unless you’ve got tiny fingers, will likely cover a few letters inside some sort of blob shape. Spatial averaging will pull a single letter from this group. But not only does the iPhone have predictive text to speed up entry (I hope it’ll have “pre-emptive” text, a term I coined for when you enter “unfort” and it auto-completes the “unate”), it also auto-corrects spelling mistakes. It should be able to do this very reliably, because (a) it knows the subset of letters to consider replacing (i.e., those around the letter it recognised from the press), and (b) words are statistically unlikely to contain two letters that are adjacent on the keyboard. Words like “damn”, “through”, “poop”, and “qwerty” might be harder than most to spell correctly. Just slow down.
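
Here’s a rough sketch in Python of the kind of correction I’m imagining. This is my guess at the idea, certainly not Apple’s actual algorithm; the adjacency map is only partial and the four-word dictionary is a stand-in.

    from itertools import product

    # Partial QWERTY adjacency map (extend for the full keyboard).
    NEIGHBOURS = {'q': 'wa', 'w': 'qes', 'e': 'wrd', 'r': 'etf', 't': 'ryg',
                  'y': 'tuh', 'u': 'yij', 'i': 'uok', 'o': 'ipl', 'p': 'ol',
                  'a': 'qsz', 's': 'awedxz', 'd': 'serfcx', 'f': 'drtgvc',
                  'g': 'ftyhbv', 'h': 'gyujnb', 'j': 'huikmn', 'k': 'jiolm',
                  'l': 'kop', 'z': 'asx', 'x': 'zsdc', 'c': 'xdfv',
                  'v': 'cfgb', 'b': 'vghn', 'n': 'bhjm', 'm': 'njk'}

    DICTIONARY = {'hello', 'spell', 'would', 'could'}   # stand-in word list

    def candidates(typed):
        """Every string formed by keeping each letter or swapping in a neighbouring key."""
        options = [c + NEIGHBOURS.get(c, '') for c in typed]
        return {''.join(combo) for combo in product(*options)}

    def correct(typed):
        """Dictionary words reachable from the keys around what was pressed."""
        return candidates(typed) & DICTIONARY

    print(correct('hwllo'))   # {'hello'}: the W key sits right next to the E

Point (a) above is what keeps the candidate set small, and point (b) is why the right word is usually the only dictionary hit inside it.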

To round off the keyboard commentary, I wouldn’t be surprised to see some sort of convoluted 3rd party case that has a flap with little actual buttons to overlay the keyboard. In any case, I’m sure that the iPhone is significantly better than keypad predictive text typing; the advantages of having a software keyboard outweigh the downsides in my opinion at the moment.

That combination music player/phones are fundamentally flawed — that’s the worst argument I’ve ever heard for why the iPhone will fail (no link for the attribution). The argument is that phones and music players don’t mix; taking out earbuds to answer a call or swapping earbuds with a Bluetooth headset are unacceptable options, apparently. How do you reckon people currently answer their phone while listening to their iPod? This is actually a killer app for me — I frequently miss calls because I’m listening to music on the walk to work, and having the player fade out when a call comes in is one of the most practical things about this phone. (Not that that’s unique to the iPhone, of course.) It’s not that music phones have failed so far because there’s anything wrong with the idea; they’ve failed because (a) they haven’t been iPods (the ROKR doesn’t count, either), and/or (b) they’ve had crappy implementations.

No wireless syncing — Duh. You have to plug it in to charge, right? Until Apple starts using Splashpower or something, this is such a non-issue.

No modem capability — This feature would be obsolete in a year or two with ubiquitous EVDO and wifi anyway. The point of the iPhone is that it’s useful when you’re away from the computer. This is too much of a niche feature to worry about, considering the consumer market.

Non-replaceable battery — Geez, I wish this “iPod crappy battery” meme would die a swift death. Are people saying they want to carry around a spare phone battery with them? Or even that they would? And this just isn’t a device that will live long past the lifetime of its battery (unlike, say, a digital camera these days). If it does, Apple will be more than happy (as with the iPod) to charge you for replacing the battery in their support centres. A closed case makes for a much cleaner design, both figuratively and literally. I’m more than happy with this so-called “downside”.

Poor battery life — I think this is too early to call. The thing isn’t even out there yet, and this is a first generation product. Having said that, I’m surprised to learn that the battery life of the (hard drive) iPod has only doubled over its six year lifetime — although the modern iPods do sport a much bigger, brighter, higher resolution screen. In any case, I have absolutely no problem with docking my phone every day for syncing purposes (in fact, I can’t wait to have unified syncing/charging, as with the iPod).

Consider also what you get in exchange for that battery life: the three sensors that everyone loves, and the 160 ppi display on which text and video have never looked so good (baby steps towards resolution independence; see especially the zooming pinch, which works in email and web browsing as well as photo viewing). This is a screen the same physical size as the Zune’s, but with double the number of pixels.

And this thing is only 0.03 inches thicker than the current iPod. Motorola proved with the RAZR that pockets can fit objects that have significant width and height (like a wallet, surprisingly enough) but if you make it thin, it disappears. And the iPhone is thinner than the RAZR by more than a couple of millimetres! It could have a larger battery, but it’s not worth compromising the design.

And finally, most importantly, no 3rd party apps — Take a look at the main screen of the interface, with all the buttons. Down the bottom there are the “big four”: phone, email, Safari, iPod. (The icon for the iPod is going to need changing eventually: I predict iTunes-like music notes.) The majority of the screen above them is taken up by other apps and widgets. Or are they all just widgets? There’s the rub. If the number of widgets were fixed, there’s no way that they’d be organised as a group of 11 in almost three rows, with space for five more on the screen, without the intention of adding more later.

Apple has said no 3rd party applications, but widgets fall in a grey area. I’m predicting that when this thing’s released, or thereabouts, Dashcode will be able to create restricted widgets for it. (By “restricted” I mean no Cocoa.) Apps are a different story; Apple’s clearly tied those down to the four at the bottom of the screen. A very poor man’s Dock, if you will. But the Dashboard exists to be filled (currency conversion, anyone?), and it’s the addition of extra widgets that will make the device expandable enough to quiet the clamouring for additions.

Perhaps, though, this will only come in time, with faster processors and more memory; who’s to say Apple hasn’t already maxed out the capacity of its resources? They might like nothing more than to let you add widgets, but maybe there’s literally only enough RAM for what they’ve already got on there. In any case, time will tell. The killer apps for this device are simplicity and interface; the built-in functionality really is enough for most people. I don’t think the lack of expandability, at this stage, is going to hurt sales one bit, despite anecdotal evidence to the contrary.


In closing, I think that people are misconceiving the iPhone because it is truly the first of its kind. There have been expensive smart phones before, but they have been marketed not as consumer products but as business tools. Work while you’re not at your desk. A very, very small minority of consumers have ponied up the cash and been excited by the prospects of developing Java apps for their phone. Consumers in general don’t want to spend money for their phone, and Apple is changing that. They already spend the money on an iPod. An iPod that makes phone calls and does sms-ing better than any phone they’ve used is a novel concept, but one that is a logical upsell on an iPod by itself.

Ten million units in the first year is a big number to be aiming for, but I think they’ll do it, and more. After all, 1% of the phone market is much less than the Mac’s market share. The only stumbling block I can see is if Cingular ties the phone to some ridiculously expensive plan that simply won’t suit the consumer nature of the device.

2007/01/12

Wherefore art thou iPhone?

“Wherefore art thou …” is the most incorrectly quoted piece of Shakespeare I know. In this short piece, I wonder “why is the iPhone thus named?”.

A couple of months back I wrote some thoughts about the rumoured iPhone. To be honest, after Cisco released their own iPhone product, I thought there was a fair chance that the Apple iPhone was non-existent. Some of the rumours were sketchy enough that Chinese Whispers would account for the confusion. Obviously not.

In my original post, I wrote that “it would be ludicrous to put aside the huge mindshare behind their most successful product and supplant the iPod with a superior device.” My prediction was that an Apple phone would be branded as an iPod. I was wrong, evidently.

I understand that Apple thinks that the iPhone is the next Big Thing. I don’t blame them. Considering the differences between the iPod and the iPhone — hardware, operating system, interface, design, the people working on the thing — I can understand why the product is viewed in isolation and introduced to the world the same way. It’s totally new, and deserves a totally new name.

But it’s not just a phone. Steve Jobs touted that fact during the keynote — (paraphrasing) “we’re introducing three new products today…a touchscreen iPod…a phone…and an internet communicator” (whatever he meant by that last one — perhaps it will end up having iChat installed as well).

Here’s my argument in a nutshell: the iPod has a wonderfully ambiguous product name, and it does more than one thing (now, at least; e.g., it plays movies). The iPhone, by contrast, is already more than a phone, but its name does not reflect that. In five years’ time, when we’re all carrying around iPhones but using them more for web access and music, won’t that be a little weird?

The fact that most people carry mobile phones and many people carry iPods should make it fairly evident that one day the functionality of the two devices will merge. I’ve heard of some very new mobile phones with hard drives that give a classic iPod a good run for its money. As it stands, the iPhone will now (very slowly) cannibalise iPod sales until the iPod brand no longer exists. By calling its new product the “iPhone” and not the “iPod phone”, Apple has doomed its most popular product line ever. This echoes the demise of the Apple II after the Mac was introduced in the eighties, but it doesn’t have to be that way.

Has “iPod” become such a generic term that people view it exclusively as a music player and wouldn’t warm to the idea of also using it as a phone? Perhaps. On the other hand (to use a word that John Gruber popularised) the parlay from iPod to “iPod phone” is a piece of cake. And like I said, the iPhone is more than a phone, so why not call it an iPod?

My secret desire is that the whole Cisco lawsuit thing will end up forcing Apple to change the name of their device to something like “iPod phone”. (Reports from the expo say there isn’t actually a brand name on the demo units.) It just makes more sense that way when you look at the device in its historical context. And many people will be using the thing more as an iPod anyway. But I can live with a bad name. It’s the actual product that counts, and, well, I think it speaks for itself.

2007/01/08

Why I want to buy an iMac

A few years ago, I bought a 12 inch PowerBook and said it’d last me until I finished my PhD. That is, I wouldn’t buy a new computer until I got a job. Well … I changed my mind. Over the last year or so, I realised I don’t want a laptop. As a Mac user, my choices are nicely limited: Mac Pro, Mac mini, or iMac. It’s the latter that I am keen on. Here are some of my thoughts revolving around this decision.

Why a new computer?

Firstly: why do I want a new computer now? Since I bought this 867MHz processor + 640 MB RAM machine 3.5 years ago, Apple has switched to Intel processors, and a comparable machine now is something like dual core 2GHz processor + 2GB RAM (and around $1000 cheaper). That’s approximately following Moore’s law with a fourfold increase (two doublings) in performance. And boy, does my computer feel pokey these days.
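
Just to check that back-of-envelope claim, here’s the two-line arithmetic (illustrative only):

    # Two doublings over 3.5 years:
    years, doublings = 3.5, 2
    print(years * 12 / doublings)   # 21.0 months per doubling, within the usual
                                    # 18-24 month Moore's law range
    print(2 ** doublings)           # 4x performance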

With Leopard coming along, I want to buy an iMac with the next product line refresh, in order to get the OS update “for free”. My notebook is simply insufficient these days and I’ll use two examples of why I think that: iPhoto is too slow and the lack of USB 2 makes transferring photos painful; and iTunes is compromised by not enough disk space, and exhibits poor performance — there’s nothing worse than iTunes crashing while streaming music during a party. I’ve also got an iPod shuffle lying around that I’d love to be able to use.

Which desktop?

Regarding the choice between those three computers, the Mac Pro is completely out of my league. The things I do at home don’t require the price premium for the best performance of the day. And yet I find the Mac mini distinctly underwhelming. Upgrading the mini to match the features of an iMac yields a more expensive unit that is still inferior. I don’t want to go into the details too much, but at $2100 (iMac) vs. $2500 (mini) — rounding to the nearest $100 at education prices — the comparison just doesn’t stack up, all else being equal with 1 GB RAM and a 20 inch display:

  • 2.16 GHz vs. 1.83 GHz Intel Core Duo;
  • 250 GB, 7200 rpm vs. 160 GB, 5400 rpm hard drive;
  • A dedicated ATI graphics card vs. embedded graphics.

I can see why Apple sells the Mac mini. If people are buying them, they’re making a killing. For $400 more, I can buy a mini that’s slower, holds less, has worse graphics, and comes with no keyboard or mouse. Let’s be fair, shop around for a cheaper display, and say the prices are equal. I’m still not seeing the attraction of the mini. It’s just not for me — or for anyone else (in my opinion) who is also buying a display.

So why not a notebook?

Several factors have led me to dismiss the idea of using a notebook as my primary computer. The first is data loss: Mac OS X 10.5 will have built-in backup (“Time Machine”, probably not a permanent link unfortunately), requiring an external hard drive for mirroring purposes. I hardly back up at all at the moment, and I’m scared of Bad Things Happening. Unless they start selling notebooks with two hard drives, I’m not using one as my main machine.

Rigging up a notebook with an external hard drive ties it to a desk (and is tedious as hell for various reasons), which brings me to my next point.

Notebooks cannot be used ergonomically. Either the keyboard’s in the right spot and the screen’s too low, or the screen’s at eye level and the keyboard’s impossible to type on. This means bad backs and headaches for long stretches of work. If you don’t understand this point, you’re younger than 25.

Another point is noise. My laptop is louder than most because the fans are old and need cleaning or replacing. But the fact that they drone on hasn’t been mitigated in Apple’s current lineup — the CPUs used run too fast and too hot to go without fans (and it would be marketing suicide to downclock them). I was very impressed reading an iMac review at silentpcreview.com (linked to page four):

With a maximum power draw of 63W, the iMac certainly qualifies as a low power system. At idle, the system drew 46W, which will qualify for approval from EnergyStar if their current draft computer spec makes it to the planned 2007 release. Even better, the system falls back into a low power mode after being left alone for a few minutes, dropping the power even more to just 33W. By way of comparison, the lowest idle power consumption we’ve ever seen from a custom built system is 36W — and that doesn’t include an LCD monitor.

[…]

The energy efficiency of the iMac solves the mystery of how it is able to get away with so little cooling. At first glance, the numbers don’t look that impressive, but keep in mind that all of these numbers include the power required by the LCD screen. Stand-alone LCD monitors typically draw between 30~40W from the wall, so we were quite impressed when the entire system managed to draw this little power.

(The low noise from low power consumption is equally appealing to the side of me that is concerned about the environmental issues of running a computer 24/7.)

Their testing showed negligible noise increases even with hard drive seeks and full CPU activity. Especially when iTunes is running the music in the living room, I don’t want my computer creating white noise. That simply isn’t the case with notebooks these days; correct me if I’m wrong, as I’ve had essentially no experience with the MacBook line.

Finally, screen size. This is a big one for me. I’ve never really used a Mac with more than 1024 by 768 pixels. A 20 inch LCD just sounds like a dream.

As an aside, I really like the idea of some sort of future portable that syncs data with a home computer, is very small, and doesn’t do too much. I might manage to critique people’s desires for an “ultra-portable” Mac notebook for Macworld before the event, but this post is taking long enough already.

So, in summary: with an iMac similar in price to a MacBook, the advantages of good ergonomics, easier data protection and a big screen easily outweigh the portability advantage of a notebook for me. I’m hoping for an iMac refresh sooner rather than later (moving to quad core would be better than I can dream) so I can justify buying one as soon as possible.