Last night I had the opportunity to see the latest film by interesting Mexican director Guillermo del Toro, whose previous works include The Devil’s Backbone, which I’ve seen and loved, and Hellboy, which I wanted to see but heard wasn’t the best movie ever (my standards have been high recently; to be honest, I’ve hardly been to the cinema at all this year).
What I found interesting about this movie (I’m not so much into describing what happens — that’s for you to find out when you watch it) was the juxtaposition between fantasy and violence; think The Nightmare Before Christmas meets Saving Private Ryan (another movie I haven’t seen, sigh). This is an odd combination, and one that I would think appealing to a rather limited audience; nonetheless, it works very well for me.
On a less positive note, I did feel that the fairy tale aspects of the story were rather shallow and, worse, underdeveloped. When viewed from the perspective of the central character, however, this flaw can be partially justified. There was certainly no lack of imagination involved in putting together the violent imagery, or time spent showing it on camera; here we see where del Toro’s real forte lies (some of the violence is indeed deliciously gruesome). Greater balance between the two halves, particularly because the fantasy is where the main story lies and the violent reality is merely context for some of the characters, would have improved the impact of the film. In my opinion. If you love violence and aren’t so keen on fairy tales, you won’t have such a problem.
I was going to be critical of the acting of the fantasy character, the faun, in the film, but I read that the actor was American and couldn’t speak a word of Spanish. I really wonder at the rationale behind such a decision — if even an Australian watching the movie can be irked by his performance, surely it couldn’t have been that hard in comparison to hire a Spanish-fluent actor? (On the other hand, Ron Perlman makes a fine counter-argument.)
On a more positive note, I found the characters and acting for the rest of the movie particularly good. The main character Ofelia (played by 13-year-old Ivana Baquero) is like a Spanish Miette, perfect to a tee. Even the villain of the piece, who is so easy to stereotype in a film like this, comes across with intelligence and flaws to counterpoint his evil.
Finally, it would be omitting one of the highlights of the film not to explicitly praise its artistic vision. The reality in the film moves from dream-like beauty to horror, and the fantastical scenes extend from and mirror this. While I believe the interpretation of the fairy tale didn’t quite fit the balance given to the fantasy in the film, overall it is a wonderful vision.
Via macsurfer, I read at the Register that Motorola is shipping a 9mm thick (!) phone with an electronic paper display. I had no idea that e-paper was ready for this kind of thing — great news for the future.
I guess this is how new displays are going to sneak up on us. First it was OLED mobile displays, and now e-paper. It’s only a matter of time before they’re enlarged and LCD becomes obsolete.
I think this really shows how well Motorola have turned around in embracing radical new ideas and technology. And that they have a really good design sense at the moment — steering clear of the bulky over-featured phones offered by the other major manufacturers. This is similar to how I’d imagine an Apple phone, actually. Bravo.
Well, I had a pretty special weekend. A long while back, I discovered by chance that Yann Tiersen was going to be touring Australia, and that he was even coming to Adelaide. Then I discovered he was playing two days at the French Festival, and tickets were only $20 a day. The marketing wasn’t great — imagine if I had missed it? I was totally there, along with the man with whom I became a Yann Tiersen fan:
You could call us “Yann buddies”, I suppose. For those not in the know, Yann Tiersen is widely known outside of France for his soundtrack for the movie Amélie, which he somewhat lifted from his other albums. It’s a lovely piece of work, and the movie wouldn’t have been the same without it.
This concert was the best I’ve been to, because what I got was so unexpected. I had heard a rumour that he wouldn’t be playing live at all — it was a cruel trick and he’d be playing via satellite or some such (I mean, $20?! That’s like 10 Euro and a bit). And then others were less than keen because he was playing guitars, and his earlier albums were very much not rock; more classically inspired, I suppose.
So the weekend crept up on me over the months (having bought my ticket as soon as I could), and suddenly I was calling up Chris asking when he’d come by so we could get there. My denial of cars does make transport rather dependent on others at times.
And with no expectations at all, after a day of light drinking, heavy eating, and tremendous heat, there appeared an incredible band with some sort of progressive rock sound, but often to the tunes I knew much better as piano-accompanied. Not that the classical elements were all gone:
but damn did that band know how to rock out. Yann switched instruments between multiple guitars, violin, accordion, and mini-pianos (?) almost just because he could,
backed by a fat bass, cello (and other strings, albeit played like a cello), and second guitar, whose player’s methods ranged from violin bow to electric drill to produce sound from the thing. It meshed so perfectly with the metamorphosed songs I knew and the newer songs I didn’t; I never imagined it, and it surpassed anything I could have imagined had I tried.
His skills on the violin and accordion really shone. When playing those instruments, his whole physique would change, his face would soften, and his hands would move faster than I could keep track of — how can such music be possible?
As the first day was sublime, I made a trip of it on Sunday to repeat the whole event. And the second day in a row did not disappoint.
The only other thing I have to say is that his earlier music doesn’t make its mark on his demeanour; maybe I can’t see through the French exterior, but he looks far more haunted than I had imagined from his earlier music. The music direction I heard yesterday, frenzied and powerful, suits his face a lot more. But why should it be a beautiful man who makes beautiful music?
I regret terribly I didn’t approach and talk to him briefly, but what would I have said? In another lifetime…
I can’t keep up with the discussion on resolution independence that started a little while back and to which I added my thoughts the other day.
The Iconfactory rebutted the claims about their vector image being composed of bitmaps — how else do we expect such effects to appear in icons? And fair enough. The field of raster image processing is far more advanced than vector; an image format needs support for things like Gaussian blurs before they can be included in a vector image. None of this is impossible (see Windows Vista), but it does require work.
Many people have been saying that high resolution bitmaps are all we need. And I challenge that claim. Again, I’ll reference Ian Griffiths; this time, see this article discussing Apple’s then-new 30 inch LCD display. In particular, note the visual artifacts shown in the “Click me!” button he shows; that’s the crux of the argument.
In a nutshell: bitmaps are fine, provided they exist by themselves. Icons, for example, possibly are best represented in some ridiculously large bitmap with a million pixels or so. All it’s going to do is sit on screen somewhere and presumably change size at various points. Nothing to worry about; decimation with low-pass filters (or whatever “smart” re-sizing algorithm you wish) works fine to rescale the image nicely (see Mac OS X’s Dock).
However, interface elements that have to align with each other must not exhibit scaling artifacts that are inevitable when dealing with raster images. “Resolution independence” implies that the sizes of interface objects can be specified in real world units (points, millimetres, inches, …) and render at the correct size. A bitmap has no provision to scale its internal features accordingly. What do I mean by this?
Say you’ve got the edge of a scroll bar rendered with blurs, transparency, whatever effects you like, and it has to mesh with a part of the window that cannot be drawn as a bitmap of the same size. No matter how high their resolution, when these bitmaps are scaled down and pixels must be removed, rounding issues will affect which parts of the image stay and which go. As with anti-aliasing, a sharp hairline may end up a fuzzy mess if it doesn’t snap exactly to an integer number of display pixels. Bitmaps of different sizes will rescale and render such features in an uncontrollable way, so you end up with lines that change width when they shouldn’t, or off-by-one errors between adjacent interface elements.
And even with high-res displays, it’s surprising how much a “one pixel off” error stands out. This problem can only be solved satisfactorily (once and for all) if interface elements that must co-exist are all depicted as vectors and are sampled to the display resolution all as one.
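To make the hairline problem concrete, here is a toy sketch (hypothetical code, not any real scaling algorithm): box-filter downsampling of a one-dimensional row of pixels containing a single sharp hairline, by a non-integer factor.

```python
# A toy sketch of bitmap rescaling artifacts: box-filter downsampling
# of a 1-D "image" row containing a single 1-pixel hairline. The
# scaling factor is non-integer, so the hairline's energy gets split
# across output pixels instead of staying crisp.

def box_downsample(row, factor):
    """Average the source pixels covered by each destination pixel."""
    out_len = int(len(row) / factor)
    out = []
    for i in range(out_len):
        start, end = i * factor, (i + 1) * factor
        # Accumulate source pixels weighted by overlap with [start, end)
        total = 0.0
        for j in range(len(row)):
            overlap = max(0.0, min(j + 1, end) - max(j, start))
            total += row[j] * overlap
        out.append(total / factor)
    return out

row = [0.0] * 9
row[4] = 1.0                         # a sharp 1-pixel hairline
scaled = box_downsample(row, 1.5)    # 9 px -> 6 px
print(scaled)
```

The hairline smears into two pixels at a third intensity each; shift it one source pixel to the left and most of it lands in a single output pixel instead. Two interface elements scaled independently can thus round the same feature differently, which is exactly the alignment problem described above.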
To summarise: icons are fine as bitmaps. But there’s more to a graphical user interface than icons.
At this stage, I’d wager that the iPhone will not exist. But before I go into that I’d like to provide some link love for people that write better than I. These things only tend to happen when I happen to read similar ideas in dissimilar articles, especially since my del.icio.us client broke. It’s just a coincidence thing, but the (localised) convergence of ideas makes me want to write.
Everyone’s favourite chocolate-inspired developer, Scott Stevenson, writes about user interfaces, primarily concerned with the (perhaps) storm in a teacup debate about whether bells and whistles are really necessary. And provides the first reference I’ve seen in years for the (classic) Mac OS Oscar the Grouch trash animation. “Oh I love trash!”. Anyway, let’s see if I can extract the idea he talks about:
Up until the last few years, a Mac app with a nonstandard user interface usually came about because the programmer didn’t know much about the Mac. They didn’t see any particular problem with using a push button as toggle switch. […] The other major difference is that this new interface concepts are designed by people that specialize in it. […] This is in stark contrast to Unix developers in the past who would basically make educated guesses about user interface.
Hold that thought for later — arbitrary engineers don’t know interfaces. In an unrelated article on Apple’s possible “iPhone”, jesper from sweden writes about how phone software (that is, the user interface of mobile phones) is, give or take, atrocious:
I have a Sony Ericsson model that can nail a note to the standby menu screen, and the Nokia I used to have slapped the Sony Ericsson around the block when it came to the address book.
I just checked, and it takes more than six button presses, after the message is written, to send a message on my Sony Ericsson phone. Back when Nokia became popular, it was because they managed to design their entire interface around two buttons (the big button and cancel) and two navigation buttons, up and down — and it was the simplest, easiest phone on the market to use. The self-imposed hardware restrictions forced them to design a good user interface.
Somewhere in there, the whole idea of simplicity was totally lost, and within a couple of generations Nokia phones were no better than all the others, with up to five or six or seven buttons to do various things and then four navigation buttons. Is it any wonder that people want “just a phone” these days?
There are so many preconceived notions I have about mobile phones: how ring tones are annoying; how predictive text could be so much better (give me a damn complete dictionary and make it pre-emptive and that’s a good start); how the whole communication model just kind of happened — no-one thought through the popularity of asynchronous conversations via text messages. Could you do the same thing with voice? (Google just implemented something similar with Google Talk.)
What I’m trying to say is that there is a mold Apple could very much break out of. Apple could design a phone without a numeric keypad. Think about it. When do you actually have to type numbers into a phone these days? Is it often enough to dominate the hardware interface?
Finally, Brian Tiemann writes about how the iPhone might not be a phone, but just an iPod. And he nails it. Why would Apple release a separate product “the iPhone”? John Gruber wrote earlier this year “Apple’s only serious competition [to the iPod] to date has been itself.”
It would be ludicrous to put aside the huge mindshare behind their most successful product and supplant the iPod with a superior device. “Are you getting an iPod?” — “Nah, the iPhone is so much better”. Apple trademarks names it doesn’t use just so other people can’t. Just as they did with the “iPod photo” (!) and the “iPod video”, Apple will release the “iPod phone” or “iPod talk”, whose functionality will soon become ubiquitous enough that the suffix is dropped. (Let’s face it; the rumours are solid enough that something’s going on.)
Apple doesn’t seem to think its customers will be confused by having multiple products, over time, with the same name. There’s no need for a new name. It’s not the iPod X34, replaced by the iPod GH87, with its baby brother iPod LMP331. They’re just iPods, and people, unsurprisingly, seem to prefer the simpler title (if they even notice the dichotomy in nomenclature).
In summary: Apple’s gonna make a phone. But it’s going to be an iPod. And luckily, this time round, they didn’t paint themselves into a corner with an overly restrictive name for their product.
And that’s a selection of thoughts in my head from earlier this afternoon.
There’s been some recent discussion on Mac OS X’s upcoming resolution independence. I’ve been interested for a while in this topic, but never managed to write about it much. Eighteen months ago Ian Griffiths discussed resolution independence in relation to Mac OS X’s upcoming support and what was already possible in Windows Vista. There’s a couple of good examples in there, for general interest.
A couple of people have chimed in on how you don’t want vector art, mostly because when you shrink it down you don’t get results as high quality as a hand-tweaked bitmap. There are two things here, exactly analogous to font technology. Long story short: you don’t want to shrink down a complex image too much, because you’ll lose detail and the smaller objects in the image will just disappear. Much better to design images for their display size; for example, thickening hairlines as the size decreases.
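The font analogy can be reduced to a rule of thumb, sketched here with a hypothetical function and numbers of my own choosing: a hairline should never be scaled below one device pixel, much as font hinting keeps stems legible at small sizes.

```python
# A sketch of the hinting idea: never let a hairline shrink below one
# device pixel, even while the rest of the artwork scales down freely.
def hinted_width_px(design_width_px, scale):
    """Width to actually draw a stroke designed at design_width_px."""
    return max(1.0, design_width_px * scale)

print(hinted_width_px(1.0, 0.25))  # shrunk to 25%: still drawn 1 px wide
print(hinted_width_px(1.0, 3.0))   # enlarged: scales normally, 3 px
```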
Over at the Iconfactory, they cover such points and add another: vector art takes up more disk space when it’s complex. This is largely untrue. They use the example of the same image in “vector” PDF and in bitmap PNG, with the higher-quality PDF a whopping 30 times bigger. And opening these files in an image viewer shows that displaying the PDF is far more processor intensive.
Seemingly damning evidence. However, zoom in real close to this image and you’ll see the reason this so-called vector image is so large: you can see by the individual squares of flat colour that it doesn’t use real vector gradients!
You can imagine that if this image is actually storing individual squares of colour at the size shown above (2000% magnification), then the file certainly is going to be enormous!
So what’s the deal? Their drawing program has rasterised the gradient into the vector image, embedding an extremely high resolution texture — hence the huge file size and slow processing. So in actuality, the image is not a true vector image.
To be a true vector image, the gradients themselves would have to be represented as vectors as well. Vector gradients cannot be as complex as the generated gradients used in their image, but they are actual vectors: the display technology rendering the gradient computes only the pixels actually being displayed, in their subtly changing colours.
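A minimal sketch of why a true vector gradient stays small (hypothetical code, not any particular format’s implementation): the description stores only two stop colours and an axis, and the renderer computes each displayed pixel on demand at whatever resolution is needed.

```python
# A vector gradient in miniature: the stored description is constant
# size (two RGB stops), while the renderer can rasterise it to any
# resolution by interpolating only the pixels actually displayed.

def sample_linear_gradient(c0, c1, t):
    """Colour at position t in [0, 1] along the gradient axis."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

def render_row(c0, c1, width):
    """Rasterise one row of `width` pixels from the description."""
    return [sample_linear_gradient(c0, c1, x / (width - 1))
            for x in range(width)]

red, blue = (255, 0, 0), (0, 0, 255)   # the whole "file" is this line
print(render_row(red, blue, 5))        # a tiny rasterisation
```

The same two stops could just as well be rendered at 5 pixels or 5000; the rasterised-texture approach seen in the Iconfactory’s PDF instead bakes every one of those pixels into the file.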
I’d like to conclude by quoting ssp, who has sensible things (as usual) to say:
I do think that using vector graphics in the process of icon design may be a good idea. Working with vector graphics seems to make people think more in terms of large structures than in terms of small details. And thus, for simplicity’s and clarity’s sake, basing an icon design on vector graphics may be a good thing to do.
Recent news from Apple announces iPod integration with airlines. This would be pretty nice, although something tells me it will be a first-class feature only; sad news for poor flyers.
In a perplexing turn of events, however, it turns out that the announcement may have been a little premature. A Dutch friend informs me that two of the airlines, Holland’s KLM and Air France, haven’t signed up to any agreement and actually are rather perturbed about the whole incident. Dutch article here, or a Babelfish snippet:
KLM reacted however astonished to the communication of Apple. “it is correct that exploring conversations have been but the chance that it does not continue is now much larger than that it continues, however,” a spokesman of the Dutch airline company said.
(Ah, automated translation. I wonder if Google’s service works better; it’s not available, yet, to translate from Dutch.)
This is the kind of thing you don’t expect to come out of Apple, given their notorious reactions to companies that do exactly this kind of thing to them. Considering they seem to all be airlines outside of America, perhaps there were wires crossed somewhere over the ocean. Without forthcoming information, and you can guarantee there won’t be, all we can do is watch on in amusement and puzzlement.
Stumbled across wise man Umberto Eco’s website, where he has an article About God and Dan Brown that closes with:
I think I agree with Joyce’s lapsed Catholic hero in A Portrait of the Artist as a Young Man: “What kind of liberation would that be to forsake an absurdity which is logical and coherent and to embrace one which is illogical and incoherent?” The religious celebration of Christmas is at least a clear and coherent absurdity. The commercial celebration is not even that.
Good stuff. To add some context, earlier in the piece:
Human beings are religious animals. It is psychologically very hard to go through life without the justification, and the hope, provided by religion.
I wonder if this will always be true. Science as a religion is still new (the philosophers seem to be ahead of everyone else, as usual) and there are millennia of religious devotion to overcome. I’m keen on the idea of a religion of society (clearly, we can’t all become hermits so we need to organise ourselves around something); would it be possible to craft that well enough to fulfil the hopes of the everyman?
I like wine. You could say it’s somewhat of a genetic trait; both my father and my grandfather have a “healthy” fondness for the stuff. I can’t speak for them, but I tend to find that I also can’t stop drinking it, especially when it’s half decent. I really cannot understand how people stop after a single glass.
Digressing for a paragraph; I’ve heard people say that in blind wine tastings of red and white at equal temperature, people only have a 50% success rate in guessing the “colour”. I don’t believe it in general, although I’m sure some reds and some whites do taste similar. There’s no way a shiraz and a riesling (again, half-decent) have anywhere near the same taste. But I do believe that most of the wine tasting experience is psychological.
Before today, I thought it was but a single glass of wine per day that was supposed to be good for you. But now I learn from the dubiously impartial
The key to reaping the health benefits of red wine seems to be moderate consumption […] In the US, drinking in moderation means one glass for women, and one to two glasses for men.
Well, two glasses is better than one. But the good stuff follows…
The “sensible limits” in the UK and EU are two to three glasses of red wine per day for women and three to four glasses for men.
Huzzah! And we all know who lives better out of the Europeans and the Americans. Presumably, the Europeans drink their four glasses over the course of the day, not all at once. Also, I guess that’s not four glasses in one of these 900mL beauties, either. That’s one dem fine wine glass, yes sir.
Not that you’d drink that much from one of those anyway, but I’m guessing they do mean four small glasses.
If I drank about 2 bottles of wine per week, that’d be around 10 cases per year. Assuming I’m drinking half-decent wine, that’s about $1000–$1500 per year for scientifically tested (and delicious) health benefits. I wonder…
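For what it’s worth, the arithmetic (assuming 12-bottle cases and $10 to $15 a bottle for something half decent; those figures are my guesses, not gospel):

```python
# Rough yearly wine budget. The 12-bottle case and $10-$15 bottle
# price are assumptions, not figures from anywhere official.
bottles = 2 * 52                   # two bottles a week
cases = bottles / 12               # comes to about 8.7, call it ten
cost_low, cost_high = bottles * 10, bottles * 15

print(bottles, round(cases, 1))    # 104 8.7
print(cost_low, cost_high)         # 1040 1560
```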
Brown rice is pretty awesome. It’s, like, tastier than white rice and better for you. So here I am, waiting to eat some…but the damn stuff takes so much longer to cook!
Anyway, while I wait, here’s my rice recipe. Easiest thing ever, really.
- Half a cup of rice per person
- Put in saucepan, add a dash of oil
- Fry until it’s hot and water added sizzles
- If white rice, add twice the volume of water as rice to the pot
- If brown, add thrice
- Stir the rice so it settles evenly under the water
- Heat on high until it boils
- Reduce heat so it’s not going to burn (but still bubble) and wait
- Eventually, the rice will soak up all the water
- Eat a little to check it’s soft and not crunchy. This is very important. Add more water if the rice is not soft
- When cooked, if desired, stir through butter for extra deliciousness
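Since I’m in a programming mood while the pot bubbles, the quantities above reduce to one tiny function. (The half-cup-per-person and 2x/3x water ratios are straight from the list; the function itself is just for fun.)

```python
# The rice recipe's quantities: half a cup of rice per person, twice
# that volume of water for white rice, thrice for brown.
def rice_and_water(people, brown=False):
    """Return (cups of rice, cups of water) for a given head count."""
    rice_cups = 0.5 * people
    water_cups = rice_cups * (3 if brown else 2)
    return rice_cups, water_cups

print(rice_and_water(2))              # white rice for two: (1.0, 2.0)
print(rice_and_water(2, brown=True))  # brown for two: (1.0, 3.0)
```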
I’m pretty keen on rice, but I don’t eat it enough. I wonder if mine’s ready yet now…
Around the beginning of the year, I started reading fairly frequently; at the time a mix of fiction and non-fiction. Then I started Tess of the d’Urbervilles by Thomas Hardy (Project Gutenberg link to the free text [footnote one]), and either ran out of time or ran out of momentum to continue reading. It’s taken me until now to finish it; as always, it wallowed next to my bed until it gripped me and I read the second half in a few weeks.
Apparently, this is Lucy’s favourite book, and after absorbing the whole thing I can understand the sentiment. It’s not my favourite book, but I do think it is very beautiful, and I recommend sticking it out to the end if any of my hypothetical readers give it a try but find it slow going in the beginning. It was published in 1891, and is written in a style slightly more verbose than I prefer. On the other hand, I like Edgar Allan Poe’s style, so perhaps it’s the Englishness about the prose that doesn’t take my fancy. But powerful prose it is, and there are moments of real beauty in the novel.
The only way I can describe Hardy’s best writing is by analogy with figure/ground. Rather than a description of something directly, he writes around the subject, with hardly a reference to it, and with such detail to give the mind space (and time) to fill in the details of what was left implicit. (I wonder if this was also to tread around delicate subject matter.) The most poignant scenes, at the end of the first part (Ch. 11) and the very end of the book, impressed me more than most writers I can think of.
Without sufficient context of what life was actually like in the late 19th century and the social setting into which the book was published, it’s a little hard to pull out the same messages from the text as were perhaps intended. The emotions and motivations of the characters are very real, however, and while the story has its anachronisms, it hasn’t dated. I found it interesting that Hardy, through the character Angel Clare, puts emphasis both on the lifestyle of working class and simultaneously espouses philosophy over religion.
When I read books sometimes I write quotations of small sections that strike my fancy. This first follows from what I was saying in the previous paragraph. From Chapter XX: (emphasis as original)
Their position was perhaps the happiest of all positions in the social scale, being above the line at which neediness ends, and below the line at which the convenances begin to cramp natural feelings, and the stress of threadbare modishness makes too little of enough.
(The internet tells me that convenances means proprieties or conventions.)
From Chapter LIII:
Thus they passed the minutes, each well knowing that this was only waste of breath, the one essential being simply to wait.
Finally, nothing profound but this one tickled my fancy given my profession. From Chapter XLVII:
His fire was waiting incandescent, his steam was at high pressure, in a few seconds he could make the long strap move at an invisible velocity. Beyond its extent the environment might be corn, straw, or chaos; it was all the same to him. If any of the autochthonous idlers asked him what he called himself, he replied shortly, “an engineer.”
(“autochthonous” is apparently “native to a particular place”.)
[footnote one: It’s a pity the typesetting isn’t as good as it could be.]
I was recently recommended Scott Adams' book “God’s Debris”, which may now be freely downloaded. This will presumably drive sales of the sequel, which is currently only available in paper form.
It’s a pretty short book, and very easily digestible. Turns out that Scott Adams isn’t just a terribly funny guy (via Dilbert), but he’s also an exquisite composer of thought experiments. He proposes a possible truth to the universe, ostensibly not as an explanation for anything but more to explore the issues involved. As someone occasionally on the fence regarding spiritual vs. reductionist beliefs, this is the sort of thing to swing me rather firmly towards the latter.
His descriptions ring especially true for reductionists that believe all things may be described by the sum of their parts. In his blog, Scott refers to humans as “moist robots”, which gives some sort of feel for the way he approaches the meaning of the universe in this book. His own views, of course, are concealed as unimportant.
Scott suggests reading the book with a friend and discussing the issues raised over a tasty beverage. I wish I had that luxury, so I’ll just say that I’ve read it (thanks Justin!) and hope that someone wishes to talk to me about it some time. As a philosophy to live by, it certainly fits in better than any other supposed explanation I’ve ever read. (Especially this one.) But the point is — I feel — that there’s really nothing we can do to validate any such claims and the best thing for everyone to do is live by a “religion of society”. If some people need fictions to adhere to such a philosophy, then that’s the way it goes.
Anyway, I promise I’ll stop being wishy-washy metaphysical. I’m just spewing thoughts into the ether while they exist in my head.
I’ve recently been telling people that it’s my goal to live ’til I’m 150. Imagine the coincidence, then, that Scott Adams expressed a wish to live to a similar age. Hence, a reminder to write something.
I was mainly inspired by some of Ray Kurzweil’s writing which can essentially be summed up by saying that technology increases exponentially and it’s impossible for us to conceive what’s going to be around in 20 years.
Case in point: I don’t think the step from an 8MHz Mac Plus in 1986 with 1MB of RAM, no hard drive, and a 9 inch black and white monitor to a dual-core 2.16 GHz iMac with 1GB of RAM and a 24 inch colour monitor was exactly anticipated. Well, Moore’s law would have predicted it (doubling every 2 years is three orders of magnitude of increase over 20), but looking back 20 years gives a better perspective — we’ve come a long way.
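The doubling arithmetic, spelled out (the clock and RAM figures are the machines mentioned above):

```python
# Moore's-law arithmetic: a doubling every two years over twenty years
# is ten doublings, i.e. roughly three orders of magnitude.
doublings = 20 / 2
growth = 2 ** doublings
print(growth)                        # 1024.0

# Sanity check against the two Macs: dual-core 2.16 GHz vs 8 MHz,
# and 1 GB vs 1 MB of RAM.
clock_ratio = (2.16e9 * 2) / 8e6
ram_ratio = 1024 / 1                 # in MB
print(clock_ratio, ram_ratio)        # 540.0 1024.0
```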
So anyway, living to 150 gives me heaps of time. What on earth am I going to do with it? I guess to start I need to make sure that my lifestyle is actually conducive to living so long. And, well, I pretty much have no clue. I’ve started exercising some, which my physio friend assures me (if I keep it up) is a big win when I get to about 50 and my body starts deteriorating.
Eating well is what bothers me. I must eat relatively well compared to some, but I still wonder. Do vitamin pills do anything, or are they passed through the body as placebos? How do I know if I get the optimal combination of proteins and whatever? Is a vegan diet with supplements better than a moderated omnivorous diet? How much attention should I spend on organic food?
Et cetera. So I’m really hanging out for the day that we can have sensors living in various parts of our bodies informing us of these sorts of things.
* BeepBeep * You need more potassium. Eat a banana.
Or whatever. I’m also waiting for built-in heart monitors and GPS receivers, which have moderate uses for tracking exercise, but most importantly EEG sensors that let you know the optimal times at which to nap (and whatever else they’ll be able to tell us).
I figure with enough optimism, there’s not too much we won’t be able to achieve before I’m dead. People express surprise that I want to live so long. I don’t really understand that. What’s the rush?
This post came to mind months ago walking home one day, but I didn’t have the motivation to flesh it out at the time. The idea I’ll briefly discuss stems from when I read Dawkins’ book “The Selfish Gene” (which I should have summarised at one time but evidently didn’t — I read it after Daniel Dennett’s “Consciousness Explained”) wherein he essentially talks about how everything can be explained by how various evolutionary forces shape their creation.
Some trawling through the archives of Scott Adams’ “Dilbert Blog” re-awoke my idea, which I present below. There’s not much to it, really. I just found it interesting to muse on for a few minutes.
Religion seems like a mighty odd thing. I separate it into two broad components: the “supernatural” elements that help to “explain” things that we otherwise can’t; and (more importantly) various guidelines that help shape the behaviour of the religion’s followers. The question that I was thinking about was “why would groups of people believe all these crazy things in unison?”.
The explanation that someone just dreamed it up (probably believing it themselves) as a theory to explain something incomprehensible makes a fair amount of sense. I don’t think that covers everything though. Why would such beliefs be “evolutionarily stable”? That is, in a group in which half believed in God and in which half didn’t, what process would govern whether subsequent generations were more or less religious?
And here the social aspects of religion rear their head. It seems to me that when people embrace religious beliefs, with their associated “harm ye none” (in general) attitudes, their society or community advances due to the (generally sensible) guidelines given by their religious leaders. So the religious half of the group prospers due to its moral code, which just so happens to be independent of belief in whatever precepts the religion holds.
To sum up the argument, the claim is that a group of religious people due to their community will tend to dominate. I feel I’m ignoring a host of factors here, such as the whole “meme” issue that religious thought propagates by affirming faith over reasoning — you can never really win a fundamental argument against such people. But I think it’s not too much of an exaggeration to say that if religion of itself weren’t beneficial it wouldn’t exist.
That’s kind of the end of the story. I told you it wasn’t interesting. (Isn’t that what the internet’s all about?) I could now say that it’s a problem that religion is in decline, because the unwashed masses are no longer living unconditionally by tenets that they otherwise would (lest they be smote). And in the 21st century, what inducement can be made for them to do so?
The radio station Triple J in Australia does really great stuff. I don’t know how it compares to radio elsewhere in the world, but I like it a lot. They do a lot for the Australian music industry, such as supporting touring bands (how, I’m not actually sure…) and discovering new artists (you might have heard of Missy Higgins). And providing free music to download, which is always nice.
(They’ve recently launched JTV, which runs mostly on ABC2, a digital-only free-to-air channel, and which shows live recordings and other radio-type stuff that’s appropriate to actually watch. Check out the JTV website, which even features “vodcasts” of some of their material, rather than hear me inadequately describe it.)
This year, JJJ’s repeating their “Impossible Music Festival”, in which they take votes for a whole bunch of live recordings they’ve made, and then proceed to play only that live music over the course of an entire weekend. My highlight last year was Muse, who seem to be even better when left to their own devices — except on their official live album, Hullabaloo. I guess they’d get bored of playing the songs the same way every time, or something.
In the hope of being able to listen to that recording again, I’ve voted. Included for your (perhaps lack of) interest, here’s my shortlist. Only ten were allowed, and JJJ doesn’t have a recording of the Pixies. In hindsight, I might have shuffled some choices around, but no matter.
As much as I want to like the iTunes Music Store, it continually disappoints me. It’s really easy to buy music, yes, and it’s often quite cheap. But in the Australian version, there’s so much weird crap that leaves me with a bad taste in my mouth.
For example, I checked out the new “The Dears” album today, and look what I found:
Uh, great. Two of the same thing, one half again as expensive. What’s the difference? The first is a regular iTMS album at the normal price:
The second has an EP stuck on the end (note the track numbering):
So, I could either buy the album at $17, and buy the tracks to the EP individually at $1.70 each for a grand total of $23.80. Or I could skip the hassle and buy them all at once for $27.04?!?!
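Just to spell out that arithmetic (a trivial sketch; the figures are the ones quoted above, and the four-track EP count is inferred from the $23.80 total):

```python
# Sanity-checking the iTMS pricing above.  Figures are the ones quoted
# in the post; the EP track count (4) is inferred from the totals.
album = 17.00
ep_tracks = 4 * 1.70                 # EP tracks bought individually
piecemeal = album + ep_tracks        # buy the album, then the EP tracks
bundle = 27.04                       # the combined album-plus-EP listing

print(f"piecemeal: ${piecemeal:.2f}")                 # $23.80
print(f"bundle premium: ${bundle - piecemeal:.2f}")   # $3.24 extra for less hassle
```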
What’s worse, their first album is only available for $33.98, which includes their “Protest” EP. Why even bundle them together in the first place? The US store doesn’t, and lists the album for US$10, the EP for US$4.00.
It’s senseless things like this that really make me wonder how incompetent the Australian iTMS people are, or whether it’s some stupid music-labels thing. Don’t even get me started on how Sigur Rós is represented in the iTMS. The whole thing’s ridiculous.
This post began as just a teeny tiny mention of how, with Apple supporting OpenDocument in its next release, TeX supporting unicode and OpenType fonts, and Microsoft Office using Open XML, things are going to get interesting in the document applications arena over the next few years.
But the comments thread in that last link got me going. There’s apparently been beef between the OpenDocument people and Microsoft, whereby the former believe that Office should have just used their solution, and Microsoft didn’t want to fit its feature set into what ODF provided. Similarly, MathML wouldn’t have worked as a document format without extending it to support other Office features (track changes, for example), and that’s one of the main criticisms generally of how Microsoft “supports” standards (embrace, extend, …). David Carlisle (of LaTeX3 project fame, from my point of view) put things more eloquently in the thread under discussion.
The odd ones out in all of this are the TeX-based solutions. While xmlTeX can parse, uh, XML, and ConTeXt can handle MathML, it’s hard for me to see where or how TeX-based markup documents can co-exist with MathML apps (which will be able to copy/paste editable equations like plain text). You’re not going to want to insert some hideous MathML into your otherwise nicely marked-up TeX doc:
<math> <mo> ∑ </mo> <msubsup> <mi> x </mi> <mi> i </mi> <mn> 3 </mn> </msubsup> </math>
But it seems that isn’t so much a problem after all, with a quick search turning up a highly relevant paper on [MathML to TeX conversion]. The problem, finally, is editor support. With tagged PDF allowing MathML copying from PDF documents (correct me if I’m wrong, but I believe this is possible), and editor-based MathML-to-TeX conversion into the source, TeX would work just as well as, say, Office 2007 or Mathematica (to give but two examples of MathML applications) as a first-class copy/paste citizen — a situation that’s not true today.
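To make the idea concrete, here’s a toy converter for the kind of fragment shown above. It’s nothing like the scope of that paper — the element coverage, the operator mapping, and the function names are all just illustrative:

```python
# A minimal, illustrative MathML-to-TeX converter: handles only a tiny
# subset of presentation MathML, enough for the fragment in the post.
import xml.etree.ElementTree as ET

def mathml_to_tex(xml_string):
    """Convert a small subset of presentation MathML to TeX markup."""
    root = ET.fromstring(xml_string)
    return "".join(convert(child) for child in root)

def convert(node):
    tag = node.tag
    if tag in ("mi", "mn", "mo"):
        text = (node.text or "").strip()
        # Map a couple of common operators to TeX control sequences.
        return {"\u2211": r"\sum ", "\u222b": r"\int "}.get(text, text)
    if tag == "msubsup":  # children are: base, subscript, superscript
        base, sub, sup = (convert(c) for c in node)
        return f"{base}_{{{sub}}}^{{{sup}}}"
    if tag == "mrow":
        return "".join(convert(c) for c in node)
    raise ValueError(f"unhandled element: {tag}")

fragment = "<math><mo>\u2211</mo><msubsup><mi>x</mi><mi>i</mi><mn>3</mn></msubsup></math>"
print(mathml_to_tex(fragment))   # \sum x_{i}^{3}
```

An editor doing this on paste, in reverse of what a tagged-PDF exporter does, is all the plumbing the scenario above really needs.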
Sometimes feeds roll in articles that are surprisingly congruous. Over at Impulsive Highlights there’s a screenshot (which will surely be removed soon) of an upcoming preferences panel in a Leopard app for decompressing zip files.
Sometimes you want to keep the zip file after expanding it; sometimes you don’t. (A brief shoutout here to the old Stuffit archive format since I’ll never have the chance again. It was great back in the nineties when you’d get a progress meter saying things like “Unstuffing 34 files” — I still remember the comment about how on a PC it was a lot more boring to decompress files!)
Back to this preference panel; it offers an option called “Keep unarchiving dearchived file(s)”. Either it’s someone’s idea of a joke or filler text to be improved in the future (and I love it!), or it’s a classic example of why programmers shouldn’t write interfaces. Really, “unarchiving dearchived…”? So great.
At the same time as this nugget crosses my reading path, I also come across ‘The long road to simple: creating, debating, and iterating “Add an event”’. Here, the 37signals guys show how much work is required (or, at least, how much work they do) when iterating a single text element in an interface to achieve the right mix of conciseness and explainability. Wonderful stuff.
We’ve got heaps of lemons:
Somewhere along the line I got the idea that it’d be a pretty good way of getting rid of them by making lemon jam. Or lemon marmalade. Or lemon jelly.
Turns out you only need like six lemons to make enough jam for a couple of months. So the long-term goal of actually using those lemons is still a little way off. (I’ve started squeezing them into ice cube trays, but I’m dubious that we’ll need kilos of the stuff.) But the process of making the jam, which I now highly recommend, was a bit of fun, and the recipes I found for it were a little confusing in their variety. I just sort of interpolated between the easiest ones.
I think the fact that I chose the easier recipes makes what I made lemon jam rather than marmalade, which seems to be more complicated. Don’t even talk to me about glass jars; I just use a recycled plastic honey container.
First step in the process was to pick the lemons and wash them. And try and remember that boiling the things for an hour would kill all the germs from bird poo. Gross. So, I started off with about a kilogram of lemons:
The first time I made lemon jam, I peeled them and cut the rind into little pieces. This time, it was much easier just using a grater. I chucked this in a huge pot with about 1.4 litres of water.
Since the white stuff on the outside of these lemons (the pith) was so thick, I didn’t want to use any of it in the jam. (Last time, I added a little and the jam became a little bitter; add to taste, I suppose.) After taking off all the peel, I realised that I’d lost fully half the mass of my lemons. So I put in two more for luck.
Since the lemons are boiled for ages, it didn’t matter that I cut them into fairly coarse pieces to add to the pot.
Then it’s just a matter of waiting. This time ‘round, I boiled the lemons for an hour with the lid on, then took the pot off the heat. (I’m wary of burning the sugar in the next step after an unfortunate housemate incident.) At this stage, the whole house smells like lemon. Yum. But not too good to eat. Now we need sugar. I added about 1.25 kilograms of it, stirring it in as I went.
Finally, I boiled the mixture and left it simmering for about another hour. Apparently I was supposed to be skimming off the foam, but I didn’t. You can tell it’s done when a little jam left on a spoon to cool sets to a jammy consistency.
So, 8 lemons down, 92 to go. To summarise:
- Wash eight thick-skinned lemons
- Grate the rind
- Peel the lemons, discard most or all of the pith
- Cut the flesh into pieces
- Boil the rind and the lemon flesh for an hour in 1.4 litres of water, or thereabouts
- Slowly dissolve in 1–1.5 kg of sugar
- Boil for another hour
Okay, so there’s some slightly good news on the interface front. While Preview.app has gone down the drain with those awful blue pill buttons, Time Machine is more toned down than shown in the keynote. Here’s the version shown publicly on apple.com:
And here’s the version shown on HardMac:
I’m pretty keen on the update, although I think the bottom panel can still be improved. Fingers crossed that I’ve gauged the direction of the user interface evolution correctly; it could well be that the ugly version is the more recent!
Anyway, say what you will about the appropriateness of the interface. Those moving stars in the background get my vote any day. Teehee.
Just a quick link to Paul Thurrott, who has the audacity to bitch about the newly announced features of Leopard. He’s got a couple of good points. For me, I find it’s the last 10% that makes the update worthwhile, rather than the big ticket demos.
But people in the tech community, i.e., those who should know better, really have to stop taking marketing so seriously. WWDC is covered by, like, mainstream media all over the world, so of course it’s going to be saturated with marketing (which is by nature overblown, and possibly propaganda). And of course Mac OS X is playing catch-up to Vista. That’s the whole nature of competition.
By now, it should be obvious to anyone interested that Vista’s going to be very feature-rich, no doubt over and above Mac OS X (e.g., all the media centre and tablet PC software that Apple doesn’t, regrettably, have the hardware to support). Hell, Steve didn’t even announce anything about resolution independence. And those “top secret” features he doesn’t want to mention yet? Read: “not ready yet even for public demonstration”. Here’s hoping for a revamp of the interface (the interfaces shown are so similar to Tiger’s that this idea bears some consideration — and oh god! Mail still makes my eyes bleed).
I’d also like to see features based around resolution independence such as “online Exposé”, whereby background app windows are shrunk by a user-specified magnification factor (the inverse of Dock magnification, of sorts). Not to mention transparent support for high-res displays. I actually expected new displays today with resolution in the order of 200 dpi, but you can’t please everyone. I guess the LCDs just aren’t ready yet for any reasonable price.
Some details have been revealed for Mac OS X “Leopard”, which will be available circa the same time as Windows Vista. I couldn’t watch the keynote, either through network difficulties or plain congestion. But I’ve some comments on the new features, even if I’ve only got annoying voice-over man to explain them to me.
The most plainly useful, and innovative, feature of Leopard is its backup support through the “Time Machine” feature — see Apple’s explanatory video. John Siracusa discussed the need for this last November: (boy, where has the time gone?)
Her Mac has made creating and organizing digital content so easy that it now contains gigabytes of the stuff. I often find myself thinking ominously about the consequences of a catastrophic hard drive failure in her now almost three-year-old iMac. All those photos, all those movies, just…gone. Poof!
I’m similarly worried about my own computer at the moment, since I’ve a PowerBook with no good backup plan in place. John above proposed two hard drives in every Mac purely for the sake of data redundancy, and this is a stellar plan. “Time Machine” in Leopard is the software component that makes it even better, and is literally better than anything I imagined Apple (or anyone else) would implement. This is more than just backup.
It’s not clear to me whether the hooks here will be accessible enough to allow such a thing, but imagine text editors being able to visually diff content through revisions, in real time. That’s system-wide change tracking, and better than what everyone harps on about in Microsoft Word. I’m now expecting this exact feature in iWork ’07. More generally, will Time Machine make local version control systems redundant?
I’m with Jon Hicks though: deep space background with grotesquely distorted type (fine, “in perspective”) at the base of the screen? This could be so much more elegant while still retaining its wow factor. On the other hand, considering how infrequently — and in which contexts (oops, I overwrote a document I need; or, help! where’s my stuff?!?) — I expect the feature to be used as in the demo, I’m not even sure that such a whiz-bang interface is even appropriate! Hopefully there’ll be at least some minor visual tweaks before the final release. Don’t get me wrong, however, I think the basic premise behind the interface is fine. It’s just the useless 3D buttons and background I’m railing against here.
More comments on Leopard as time allows…
CNet (UK) has up a slightly tongue-in-cheek comparison review of a 1997 Apple Newton and a 2006 Samsung UMPC thing. Blah blah, the Newton wins. Cue a “Wasn’t Apple great back in the day?” kind of response from all reading.
Actually, although I’ve never actually used a Newton (I did see one once…it had Daleks!), it does seem like it should have been the success story of the mid-90s Apple. Considering how far ahead of its time it was, with better marketing it could have been the iPod of its day. I presume. Pity Steve had to kill it when he returned, or whatever.
Anyway, back to this “review”. It’s a waste of time. Rather than actually looking at the two devices, they only compare the technology and potential of the devices. No actual investigation of what the two things do is shown. What’s the point in that?
Sure, I think it’s unbelievable that the Newton has 30 hours of battery life, costs $50 on eBay, and is seemingly still fairly usable (read: connectable) today. But maybe its handwriting recognition is in fact 10 years behind, or maybe the Samsung UMPC really doesn’t have any software worth using to take advantage of the device. Maybe if the review had consisted of someone new to both platforms taking them around for a week and comparing those experiences, it’d be a worthwhile read.
But jeeze, comparing the glossiness of the plastics of their cases is much more interesting to me, right?
It would be exciting to see Apple take another strong interest in this market, though. Come on, 30 hours of battery life? That’s amazing, when the music players of today boast less than that. It would be a pity if all the people that did the great design of the Newton are long gone from Apple, though.
So, in summary: Newton good, wish it was newer. CNet review bad, wish they didn’t waste their time with images of people punching each other.
I was just going to link this in del.icio.us, but I wanted to make a couple of comments on it. Someone has interviewed Michael Gartenberg on Microsoft’s Zune, and I actually listened to the podcast “Analyst Doubts Zune Will Include Gaming”. (Dumb title.) I reckon this Michael guy’s pretty clue-y.
First things first: don’t you reckon the girl at the beginning of the podcast sounds exactly like the radio morning broadcast at the beginning of “The Journeyman Project”? (Okay, that’s a memory from circa 1994, so it might be a little hazy. P.S. I never got anywhere in that game. I was lame at age 13.)
Way to go off-topic. Secondly: Podcasts suck for listening to when trying to do other stuff at the same time. But who wants to listen to them tomorrow when I’m walking to work? Transcripts are so the way to go. Video blogging, on the other hand, might have something to it. Ha.
Oh, so what about that Gartenberg guy? Great points, to sum it up:
- Microsoft can only differentiate themselves by adding things like wireless and satellite radio.
- Users don’t care for such niche stuff.
- Steve Jobs is always one big step ahead.
Frankly, I’m appalled that Microsoft’s being so blatant about the whole thing (come on, competing against your partners? That’s so immoral), and Gruber, as usual, has it right on the nose.
How did I only find out about this in July? I used to be, shall we say, somewhat enthusiastic about Nikola Tesla in my undergraduate years. Such an amazing and tragic life. I just read on Wikipedia that somehow scientists have managed to confirm that he did in fact communicate with Mars in his later, more esoteric, years. (Unless I misinterpreted.) Can you imagine? Pity there was no-one listening.
Anyway, check it out: it’s Nikola’s 150th birthday this year, and UNESCO proclaimed 2006 to be the “Year of Nikola Tesla”. Yay.
I discovered this while reading about Sun’s financial situation from their CEO Jonathan Schwartz, who — funnily enough — writes on the internet. I’m really amazed by this guy, because he comes across as candid, when all the other “blogs” I’ve read by such important people tend to sound like marketing spiels for their companies. Sure, he’s biased, but his enthusiasm for his company really shows through. Based on his word alone, I wish Sun all the best.
Well, I’ll leave this with a link to Tesla’s “The Problem of Increasing Human Energy”, a highly recommended read: (but I would say that, wouldn’t I?)
The scientific man does not aim at an immediate result. He does not expect that his advanced ideas will be readily taken up. His work is like that of the planter–for the future. His duty is to lay the foundation for those who are to come, and point the way.
Having come down with a terrible cold, I thought it appropriate to write up my recipe for a hot drink that may or may not be of benefit to treating such an ailment. Beats doing the dishes, which is next on my list of things to do while staying at home in an effort to recover.
The recipe’s simple. You’ll need, in order of importance:
- 1 lemon
- Chilli (liquid or fresh)
- Ginger
- Garlic (optional)
- Honey
Omit the garlic except when you want maximum recuperative properties, as it makes the drink a bit gross. The ginger’s not especially important, but I like it. I use about one thin slice off a hunk of ginger, cut into small pieces.
I’ve got this great chilli liquid (Dragon’s Blood) that I use for the chilli (one teaspoon’s worth, or to taste), but fresh cut chilli also works (one small, hot, chilli).
Preparation: Cut the lemon in half, squeeze both halves into a mug to get the juice. Seeds can be fished out if desired. Add the chilli, ginger, and garlic, using a tea strainer (like this one) for anything solid. Fill the mug with boiling water, and add generous amounts of honey to taste. Yum. The strainer can be re-used for at least one more mug of the drink.
Okay, I don’t drink this so much when I’m healthy, but it certainly beats any sort of cough medicine you might be prescribed.
There’ve been a couple of reports that Apple wants to start selling feature-length movies on iTunes for (US) $9.99, and that Hollywood is resisting the idea of a flat rate. Let’s disregard the problem of how this whole thing is tied to iTunes, which should only be used for music.
(As a side note, it would be ideal if buying an online film through iTunes was independent of file quality, but I’m certain that won’t be the case. I’d expect 640 by 480 resolution at most due to file size problems, but I also expect they’ll want us to pay up again when the resolutions are eventually bumped up to HD. That’d be getting something for nothing, otherwise, and there’s no business sense in that, unfortunately.)
Having been a somewhat avid DVD buyer for a few years (I’ve since — mostly — given up, as the number of movies I really want to own doesn’t increase very often), I’m totally keen on the idea of a flat rate. As a consumer, it’s incredibly galling to buy a movie for, say, $15 only to find it a couple of months later for $10. The feeling is that there’s no real cost involved with selling the product and that the price is set fairly arbitrarily.
The biggest attraction to owning a DVD is that, given the huge volume of cinema, if you really want to see an obscure title, chances are fairly slim it’ll be in your local video store. You’re literally paying for the privilege of being able to find it again, sometimes. Supplementary attractions to DVD ownership are those interminable special features, which always sound so good at the time but more often than not provide little of interest.
We buy movies to watch them. But we also pay money to see movies at the cinema for a flat rate. Not even a different charge (with the small exception of gold class cinemas) for different sized screens or for comfier seats or for movies that have been playing for weeks. So sure, charge more online for extra special features or what have you. But if I just want to watch a film, it’s obnoxious or greedy or both to make the consumer pay more depending on when they want to buy. The new release blockbusters will make up the profits on volume, anyway.
This post centres around openness — what happens in 10 years when I want to access my stuff on my proprietary computer?
Let’s open it up with a quote that sums up where I stand. Concluding his followup article on Apple’s seeming non-openness of its x86 Darwin OS on which Mac OS X is built, Tom Yager said
The Mac platform is an overflowing basket of raw materials for innovators and creators of all stripes. It’s what Steve Jobs would fantasize about if he still worked out of his garage, and you can bet that he’d be livid to find that the vendor locked some portion of his chosen platform behind a gate without a word of notice or explanation.
Here’s the thing: I don’t think those would be good odds. “You can bet” SJ wants an open Apple? It’s the one question I’d love to have answered, but one I’m certain I’ll never know for sure. There’s no indication that Apple (and, by personification, Steve Jobs) cares that much about locking up my data behind a gate, even if that gate doesn’t slam shut for another 10 years.
Mark Pilgrim started it all when he decided to switch away from Apple to Linux. John Gruber then wrote exhaustingly on the topic. Both make good points. Mark feels too locked in by Mac OS X (and Apple’s associated software), with the horrible thought that at some stage his work, or his life, will be unretrievable due to some poorly thought-out software decision. The short of it: Apple only cares that software works now, not that what you produce with it will be useful in the far future.
John agrees that it’s a good point, going off for a while (justifiably, given his audience) on a big tangent about how valid a point it is to have. Data openness overrides any aesthetic considerations of Apple’s hardware or software. (And even then, we all know Apple’s software can be pretty hideous at times.) John isn’t sure that Mark’s reaction is entirely the right decision:
…if Apple’s lack of openness were a disaster in the making, it (the disaster) would have occurred already.
But I disagree with this point. Only in the last half-dozen years have digital input and storage increased to such an extent that it’s not unreasonable to consider storing hundreds of thousands of photos on a computer in easily browsable form, let alone self-edited home movies. This ease facilitates a much greater creative output than when we stored video on VHS and photos in shoeboxes.
People concerned with keeping their digitally written documents in a long-term storable form forsook Microsoft Word years ago, flocking to more stable formats such as SGML and LaTeX.
A larger (in my eyes) can of worms opens up when you consider digital music. This isn’t stuff you create yourself any more; now we’re talking about cash being transferred for media that you seemingly own. I’m scared to buy music from iTunes, for two reasons. The first is trivial: it’s so god-damn easy that it’s addictive. The second is more serious: what happens if all that lovely copy-protected music is rendered unreadable for one of a variety of reasons?
(The same argument holds for DVDs, one of the reasons I’ve stopped buying them. What the hell do I do with a whole collection of movies that will only play in Australian DVD players if I move to another country?)
And I’m talking about legal options here. If iTunes DRM is easy to crack now, then adjust my argument for something more hole-proof, such as Windows Media.
These issues literally make me scared of using computers, because it’s conceivable — to me — that I might wind up in a spiral of doom in which I end up with the choice of either abandoning my stuff or using the exact same system (same hardware, same software) in perpetuity. This latter case clearly isn’t tenable.
So, what to do? The solution for music is to buy MP3s from emusic.com, or buy unprotected CDs from which to rip. This situation might not last forever, but I hope it does. Larger problems loom for movies.
The solution for software isn’t so clear. Software companies, in the ideal case, would acknowledge that they’re not immortal and won’t be around forever; the formats they use to store their information should be totally transparent. In the short term, that requires careful choice of software: I guess, then, that it’d be advisable to ditch iLife et al. until Apple as a company starts addressing these issues. But then you lose iTunes, and that means trouble getting an iPod to work… so for now I just suck it up.
In the end, you’ve just got to have enough faith that someone, somewhere, sometime will code up an exporter for any data that’s been irrevocably locked away in some terrible future.
Quarter Life Crisis: “In the iBooks the FireWire port was the wrong way round. Not only did it feel unnatural to insert the FireWire cable with its ‘edgy’ side pointing towards rather than away from you. When using the iPod cable, you’d always see the Apple logo rather than the FireWire logo on the plug. Which means the orientation was just the opposite from what it usually is. And this has been fixed in the MacBook.”
I reported this as a bug years ago and received a “behaves correctly” reply. Good to see that someone was listening.
I’m afraid I’ve been a little busy in the last couple of weeks trying to make more time for myself. Funny, huh? My scheme was to wake up at 7am every morning and get to uni at 8am (or close to). Eight or nine hours “working” and then I’d be able to go to the gym or go running at five, leaving the late evening for cooking, cleaning, reading, writing, typesetting, or whatever.
Well, did it work? A couple of weeks went pretty well, but this week messed up somehow. Instead of leaving home at 7:30 after a quick shower and eats, I, well, didn’t. And I’m not sure exactly why, but suddenly it’s the end of the week and I’m going to have to start all over next week.
The advantage of the current arrangement is that I’ve been leaving uni at fairly consistent times, so if I don’t get in early enough, I simply don’t do much work that day. Which is bad, obviously, but it means that I’m aware of my transgression directly, which allows me to act on what caused the problem.
If my life is a dynamic, periodic system, disturbances cause error, and this error is perceived by me as irregularities in my schedule. I can compensate for these fairly easily if I’m aware of them, and so I’m hoping that a strict time for leaving work will force the feedback loop I’m living in (acting on my awareness of my schedule is the feedback) to be stable. That is, errors such as spending too much time not doing work will be cancelled out because I notice them.
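For fun, the analogy can be pushed into a toy simulation. Everything here (the gain, the disturbances) is invented, but it shows the general idea: proportional correction keeps a disturbed system bounded instead of letting it drift.

```python
# A toy proportional-feedback model of the schedule idea above.  The
# gain and the daily disturbances are made-up numbers, purely illustrative.
def simulate_schedule(gain=0.5, days=14):
    """Track hours-off-schedule when a fraction of each slip is corrected daily."""
    disturbances = [0, 0.5, 1.0, 0, -0.5, 2.0, 0] * 2   # daily "life happens"
    error = 0.0
    history = []
    for d in disturbances[:days]:
        error += d               # the day's disturbance pushes me off schedule
        error -= gain * error    # noticing the slip, I correct a fraction of it
        history.append(round(error, 2))
    return history

print(simulate_schedule())       # errors stay bounded rather than accumulating
```

With the gain at zero (never noticing the slips), the errors simply pile up; any positive gain pulls the schedule back towards nominal.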
It’s all too easy to fall into the trap of variable sleep cycles (we don’t naturally have a 24 hour sleeping cycle — monophasically — unless at least four hours are spent asleep at the same time each night; see Stampi’s book “Why We Nap”, somewhere) and I find that I will generally be happy to go with the flow, since I’ve no strict times every day at which I need to work.
I don’t know if what I’ve written makes any sense, but I’m supposed to be writing regularly to practise fleshing out thoughts at short notice coherently. I’d like to expound on this topic in more detail in the future. I’m particularly intrigued by our repetitive behaviour (or perhaps more disorganised schedules) and the analogy that can be made with control systems. I do feel like constant feedback with the brain (revisiting yesterday’s writings, yesterday’s work, …) helps me, at least, be more directed in what I’m trying to do. I’d like to say “self-aware”, but that might sound too pretentious. That’s the sort of term I mean, but on a more subconscious level.
Ah, it’s all just rambling.
I came across a friend’s business card from PricewaterhouseCoopers and was reminded how ugly their logo is. Take a look: (sorry, PwC, for the direct link)
I brought this up in comp.text.tex, since the excessive scaling and kerning (vertical & horizontal) reminded me of the LaTeX logo (and others), which can be “a bit much” in running text:
The fact that it’s easy to obtain this logo within LaTeX, by typing \LaTeX, certainly doesn’t discourage such usage. Don’t get me started on (attempted) replications of the logo in HTML. Bleah.
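For the curious, the kerning is baked right into the logo’s definition. Here’s a simplified sketch — the real definition in latex.ltx sizes and raises the ‘A’ via the current math fonts, so the \raisebox and the .25ex below are my approximation, though the kern amounts match the standard ones:

```latex
% Simplified sketch of the \LaTeX logo macro: a shrunk, raised 'A'
% wedged between a kerned-in 'L' and the \TeX logo.
% (\MyLaTeX is a made-up name so as not to clobber the real \LaTeX.)
\newcommand{\MyLaTeX}{%
  L\kern-.36em%                 tuck the 'A' in under the 'L'
  \raisebox{.25ex}{\scriptsize A}%
  \kern-.15em%                  pull the \TeX logo up against the 'A'
  \TeX}
```

The negative kerns are exactly the sort of per-glyph fiddling that doesn’t survive translation to HTML, which is why those replications look so bad.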
Suffice it to say that I find the PwC logo more visually offensive, even as a logo, than LaTeX’s. Anyway, an anonymous poster reminded me that such practice isn’t so unprecedented:
There’s a well-established tradition of doing this in modern calligraphy, where the visual effect is at least as important (and often more important) than the words themselves. It has precedents in medieval manuscripts and classical stone carving.
And this reminded me that I took a photo of exactly this on a trip to Paris a couple of years ago: (somewhere in Les Invalides, if I recall — and spell! — the name of the place correctly)
But this makes the distinction only clearer in my eyes. You could get away with something like this with a specially designed font (or a broad-nib pen!), but just linearly scaling characters of Optima and moving the letters around seemingly at random makes for a distinctly unpleasant look. The capital ‘P’ and ‘C’ both look too black because they’ve been scaled too much. Even the kerning looks wrong (between, say, ‘Pr’ and ‘eW’), and that’s intentional!
I wonder who did the design…
So, after my original exposition of my interface re-arrangement of Cocoalicious, I made some more changes to get rid of more of the borders. I prefer this version, except for the bottom-aligned search box (that’s just totally the wrong place for a filter-style input):
Imagine my surprise to find that Jon Hicks has performed a similar treatment! He’s been much more helpful in actually providing an installer so other people can use his interface, whereas I figured my changes were easy enough to make on your own.
I prefer not to use Cocoalicious’ integrated web browser, and you can see this distinction between the way I present my window and the way Jon does his. I removed the heading of the “Tags” pane because it looked metal, which didn’t work in Aqua; and, like I said originally, I found the buttons unnecessary, so they’re out. Most importantly, I made the scroll bars mini, because really, everyone’s using some sort of scroll device by now (whether a wheel, a two-finger drag, or the highly recommended SideTrack).
Scroll bars are so 1990s, and I can’t wait until they’re replaced by something better. Picasa, I suppose, is a step in the right direction. (Actually, I don’t mind scroll bars that much — but I do feel that they take up a pretty large amount of space considering I, at least, never use them any more.)
In conclusion, this is more evidence supporting the hundredth monkey effect.
There have certainly been a lot of things said about the MacBook over the last few days. A trip to macsurfer.com can give you an overly comprehensive list. Ars Technica, as usual, has pretty much the best review around (best as in covering details, but they do give it a 9/10).
I’m reassured to read in a number of places that the weird keyboard does measure up; it is weird, but it is as good as, or even better than, the more conventional keyboard it replaces. I’ll be interested to see if they can cram backlights into it for the MacBook Pro models.
Here’s something a little interesting:
Touch pad: Yes, there’s still only one mouse button. Apple seems to be overcompensating for it, too, because the touch pad is simply huge—about the size of a Treo—and considerably bigger than the 15-inch MacBook Pro’s. [CNet]
“About the size of a Treo”. Wait a gosh-darn-tootin’ second. What’s stopping Apple from including a stylus for more precise input? That would just be excellent for faux-tablet functionality. And what might be next? Why, a little screen instead of the faceless trackpad.
In reality, Apple probably just realised that larger trackpads are better, full stop, but sometimes it’s nice to dream. I do think the stylus-on-trackpad thing might work in principle, but I don’t even know whether it would be possible to get a stylus to work on a trackpad, given that fingers work but inert objects don’t seem to have any effect.
Here’s an odd point:
Interestingly, when you point the built-in iSight camera at you so that your face is centered in the capture window, you will be outside the range of the optimal viewing angle [Creative Mac].
I wonder if the iSight is just a gimmick “because we can”. I’m guessing that Apple’s foreseeing pretty big things in the future for video conferencing, and they’re just anticipating everyone using iChat or video Skype. All in good time.
Finally, I wonder what it means for Apple’s product matrix to have just, essentially, three notebook computers. I love the super simplification that they’ve done on the product line (although Toni prefers the idea of an aluminium 13 incher: “the black one looks like a regular ‘puter”), but now there’s more space for something a little smaller.
Everyone’s clamoured for so long for updated Newtons, sub-notebooks, Microsoft-style UMPCs, tablets, and what have you that there’s no point discussing the options further. Except to say that I hope now there’s a bit of space available for Apple to finally release something that will truly redefine small-scale computing. Put it this way: with the MacBook, we’ve got performance well in excess of what came before with the PowerPC iBook and PowerBooks. Don’t you think we would be able to deal with a scaled-down processor that could be shoe-horned behind a tiny 8-inch touchscreen? (For example.)
Update: also, I have no problem with the name “MacBook”. I haven’t even seen people deriding the MacBook Pro name now; both models can be called the same thing (“MacBook”), because there’s no overlap.
Over at Microsoft’s fontblog, they discuss linewidth and linespread with regard to the reading speed of a whole damn bunch of volunteers. They cite Miles A. Tinker’s book, Legibility of Print, with some actual quantitative analysis (gasp!) on the quality of various layouts.
I haven’t seen these sorts of numbers before, and I’m happy to see that they reflect what everyone (who knows about this sort of thing) believes: longer lines of text are harder to read, and the distance between lines of text is also important for reading quality. These are the results, backed by statistical data, to show people when they want to set 1cm margins on their Word document in order to save paper.
They also say:
In the comments for Typography Tip #3, Adam Twardoch asserts that the line length effects the amount of needed linespacing. Tinker’s data does not back up this assertion. This table shows that 2 points of linespacing performed the best at each line width tested.
Okay, sure, but the numbers also show that for such long line lengths, the reading speed is already shot to hell. No-one is actually suggesting that you can get away with 200mm text blocks if you have a 10pt/20pt typeface, say. (x/y refers to a font size of x and a distance between successive baselines of text of y.) No amount of leading is going to fix that mess.
Going back to that numbers business. It’s great that they’ve nicely tabulated that data for us. But numbers in a table don’t really help to get a good feel of exactly why a 43 pica line with 4pt of extra leading is actually a really bad idea. So I put together a LaTeX document illustrating the 20 different layouts examined; grab it here. The source is also available, in case you’re interested.
On the first page, I’ve put the top five layouts ranked by normalised reading speed. Subsequently, each page is dedicated to a single line width with varying linespreads. The text is chosen arbitrarily from the Edgar Allan Poe story ‘Never Bet the Devil Your Head’. Times was selected as the typeface because everyone’s used to it and it shows up bad typography more readily than something a little nicer (in my opinion). I find that this document makes it much easier to get an actual understanding of the results of the cited study.
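If you’d rather roll your own than grab my document, the gist of each page is just setting the measure and the baseline distance explicitly. A minimal sketch in the x/y notation from above (the geometry settings here are my own illustration, not lifted from my actual source; 43 picas is one of Tinker’s tested widths, and 10pt/14pt stands in for one size/leading combination):

```latex
\documentclass{article}
% Set the measure (line width) directly; 43pc is one of the widths
% from the cited tables.
\usepackage[textwidth=43pc]{geometry}
\begin{document}
% 10pt type on a 14pt baseline, i.e. 10pt/14pt, or "4pt of extra leading".
\fontsize{10pt}{14pt}\selectfont
Some sample text set at a 43 pica line width, repeated for a paragraph
or two, is the quickest way to get a feel for why such long lines read
so poorly regardless of how much leading you throw at them.
\end{document}
```

Swapping the `textwidth` and the second `\fontsize` argument is all it takes to reproduce any of the twenty layouts.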
Found a link to the new WebnoteHappy application in my feeds this morning. It looks quite nice, but I’m not going to bite.
So what does this app do? You get browser-independent bookmarking, del.icio.us integration, and a more complex organisation scheme than either Cocoalicious or Yojimbo. It’s certainly worth a look, although I feel US$25 is slightly on the high side for an app of this type. My real hope is that OmniWeb integrates a nice bookmark browser like this, but to be perfectly honest, I’m pretty happy with Cocoalicious and OmniWeb at the moment. I’m simply not organised enough to file bookmarks as carefully as this app requires.
The one thing I miss with del.icio.us is room for longer descriptions than the limited field you’re allowed. If I could be bothered, though, I’d file more rigorously in OmniWeb for this functionality.
Great name, though, and I hope the release is successful. There’s a lot of room in the software space of keeping a store of information based on web browsing, I feel.
Well, here it is. And I’ve got to say, I’m glad Apple played it like they did. This update has put life back into the “iBook” platform.
Some nice points:
- The new models are thinner (close enough to as thin as the old PowerBooks).
- They have a better screen size with 1280 by 800 pixels; I can’t stress enough how big a deal this is, as an owner of a 12 inch PowerBook.
- Built-in iSight is a nice gimmick…but not very useful to me, at least. I suspect there aren’t that many people yet doing video iChat.
- Gigabit ethernet is a nice addition, but again, not all that useful in practice.
- Optical audio in and out.
- Don’t forget the MagSafe adapter and sudden motion sensor!
- Weird keyboard, claimed to be better (see later).
- Front Row.
- Finally, no silly 14 inch size.
Some possible downsides:
- Glossy screen, as mentioned by Sven S Porst.
- Integrated video, just like the Mac mini. I guess it’s better than my PowerBook, anyway.
I’ve wondered about those glossy screens before, and I can’t see how they are an advantage. Surely the greater reflectivity, um, increases the reflections you see in the screen? Update: it looks like the glossy screens provide better contrast and colour at the cost of added glare. I suspect they are using these screens because everyone else is, and that means they’re cheaper.
So, I mentioned that weird keyboard. Apple says “MacBook features a unique new keyboard design that sits flush against the bed for a sleeker, lower profile. Plus, you’ll find a firmer touch when typing. That ought to make your fingers happy” (there’s that whole “Apple product as a proper noun thing” again; I think it was John Gruber who commented on it once before). Well, sounds good, I suppose. Take a look at the shape of the keys:
Not square! Wait, look again. They look square to me. Ohhhhh… don’t let Apple fool you. That image has been scaled horizontally on their site, but as shown above the keyboard actually has proper proportions. In that case, damn, that keyboard looks hot. Pity about that damned Enter key, though.
All in all, I’m especially positive about this release. This Apple laptop is compact, better looking than ever, and so much faster than my 867MHz ’book that I’m quite jealous I don’t have one. All good things come to those who wait, though.
When I first came across Cocoalicious, I didn’t really get it. I didn’t really get del.icio.us, either. These things can take time, I suppose. If you don’t know what I’m talking about, check out that link on the right, “del.icio.us list links”, which takes you to everything that I’ve been reading that I find interesting but didn’t see fit to comment on here in more detail.
Cocoalicious is a Mac OS X client for del.icio.us, which allows nice things like hitting a keyboard shortcut while browsing to upload a link in one easy step, as well as providing a nice browser for all the links with their tags. Did I mention tags? I like metadata.
Anyway, I was bored and decided to play around with the interface a little, since, as it’s a Cocoa app with an easily editable interface, it was easy to do just that. Here’s the original: (click for full size)
I guess my biggest beef was the brushed metal. Too much wasted space around the edges. And truth be told, given the small amount of time spent doing the revamp, there’s more stuff that could be removed. But overall I’m pretty pleased with the result:
After renouncing the use of my PowerBook at work for efficiency reasons, I’m now in the tricky situation of having to synchronise stuff back and forth between my Mac at home and my Windows box at uni.
Generally, this isn’t a problem. I really liked being able to rsync my files back and forth when I was using Linux, which meant that I could do my writing on my PowerBook and my work on the desktop. But really, I can probably deal with using TeX on Windows; the only thing I’ll miss is BibDesk, I think.
(By the way, my switch back to Windows from Linux was due to the fact that engineering software — Matlab, specifically — just works so much better under Windows, and I was sick of having to deal with Linux’s idiosyncrasies. At least I know what I’ve got under Windows; even if that is an inelegant mess, it at least works for the things I need it for.)
Perhaps the biggest concern I had was finding a calendar platform I could use both at uni and at home, one that would also allow iSync to sync everything up with my phone. And I’ve decided it’s too hard to be worth doing. Which is a little sad, but now that I think of it, it’s not the worst thing in the world to use my phone when I’m away from my PowerBook at home.
But while trying to fix this problem, I very briefly came into contact with a whole bunch of online Web 2.0 calendar apps. A totally unrepresentative list of possible choices:
Most of these were obtained from the comprehensive TechCrunch.
I used none of them long enough to be able to actually review them in any decent capacity. But that won’t stop me from passing judgement. Each does more-or-less the same thing. It’s possible to export your data in various ways, in iCal or RSS formats, amongst others — including being able to embed mini calendars within your own web pages. It’s interesting to see the different spins each company takes, with some more successful than others at creating an experience that’s pleasant to use.
Of those mentioned above, I’d rank them in the order given. The first two have great, and simple, interfaces. The experience is all quite nice, and if I were the kind of person who liked web apps, I’d have no problem recommending them. (Uploading my phone calendar, however, is still a problem.) The most innovative element of both of them is free-form event entry. No more clumsy clicking. Just nice typing, e.g., “party 10pm friday at justin’s” in a text field.
The big guns, the Yahoo and Google calendars, offer exactly what you’d expect: the sort of “no design” that people like to discuss at the moment, and integration with their other services that lowers the barrier to entry.
From there, Planzo and CalendarHub were respectively clumsy and broken. They just don’t have the “sparkle” to make them attractive, from more complex sign-up schemes in the former (including HTML email) to simply broken functionality in the latter (beta, I don’t care — if they’re public, they should work). The interfaces to both were cluttered and not pleasant to nest in.
But the real lesson I learned was that web apps are really at the cutting edge of interface design. I suspect this is due to the startup culture allowing designers to work really closely with the final product, in combination with the fact that the line between web app developer and designer is blurred so much that the developers actually have a clue about decent design.
It’s just a pity that the calendar in my phone has no way of getting inside one of these nice online ones. Or if there is, it’s just too hard at the moment.
Postscript: another option I looked at was using iCalX to share WebDAV calendars between Sunbird and iCal, which has the potential for working nicely but which never quite clicked. Actually, I’m not sure why this didn’t solve all my problems, so I might have to revisit it soon.