Category Archives: Windows

Windows 7 netbooks in NSW schools?

I was asked by a journalist to comment on the NSW government’s decision to distribute Windows 7 “mini notebooks” across schools. Here’s my reply:

I used to work with satellite networks, providing Internet access to
most of NSW before wired broadband was widely available (and it still
isn’t in a lot of places). We had many rural schools and local
councils as customers. The difficulties of getting computing and
Internet resources to remote areas (with associated infrastructure,
training, etc.) cannot be overstated.

First, looking at this from a business perspective: how is it to be
funded, given that NSW is in a poor financial state and the government
has been axing projects left, right and centre? What alternatives were
considered? How were they evaluated? Was there an open tendering
process?

What matters most is what we can achieve with this programme. Simply
throwing a computer at every student won’t cut it. There needs to be a
clear plan and a defined set of outcomes, as you would have with any
reasonable business arrangement. This press release doesn’t touch upon
any of that.

What is the opportunity cost of funding this scheme? Could the
resources have been spent on better facilities for the children or
better teachers’ salaries?

The phrase ‘new era’ implies some sort of major change. Has this been
adequately planned for?

Teachers have a hard enough time keeping up with technology. Will they
be given training and continued assistance?

How will these devices be integrated into curricula? How can they
become effective teaching aids and not just expensive appendages?

Will the focus be on teaching or training? I am a firm believer that
schools should teach children to be clever and think for themselves,
creating the basis for a flexible workforce. They should not simply be
trained to memorise the functions of a particular version of a piece
of software. Rote-learning like that will be worthless when they
graduate and enter the workforce.

Will there be any additional costs required to properly use the
equipment? Are classrooms adequately equipped with appropriate
electrical wiring and capacity to charge all of these? What about
network connectivity? What will it take to maintain the infrastructure
required for these, including hardware and software for servers,
routers and so on?
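
To make the electrical question concrete, here is a rough back-of-the-envelope sketch in Python. The class size and charger wattage are assumptions for illustration only, not figures from the announcement:

    # Rough estimate of the peak charging load in a single classroom.
    # Both inputs are assumed values for illustration, not official figures.
    netbooks_per_class = 30   # assumed class size
    charger_watts = 40        # assumed draw of one netbook charger, in watts

    peak_load_watts = netbooks_per_class * charger_watts
    print(f"Peak charging load: ~{peak_load_watts} W")
    # ~1200 W per classroom, roughly half of a standard Australian 10 A / 230 V
    # circuit, before adding projectors, desktop machines and everything else.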

In fact, there is no mention of supporting infrastructure at all. What
are the costs of the entire life cycle of these devices, the software,
maintenance, infrastructure and so on?

Who will own the notebooks? Will students be free to explore and learn
about their computers, or will they be locked down? Can they install
whatever software they want? Will they be tied to particular
applications and file formats?

There is no mention at all of what software will be installed on these
computers. An operating system without applications is useless. Will
the included software be enough to empower and teach our children?
Have deals been struck with other software suppliers? Will there be
additional costs to acquire the software for particular subjects? Who
bears this cost – the school system or parents?

Has open source software been considered at all? There’s plenty of
open source software that works happily on top of Windows. Microsoft
may have discounted Windows, but did they include an office suite?
OpenOffice would do the job just fine.

Even if you believe the tired old argument that the state MUST
purchase Microsoft Office for each and every student (which works out
to tens of millions of dollars), wouldn’t it be better to choose
OpenOffice for free, and spend those millions on new library books or
hospital beds?
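
To see where a figure in the tens of millions might come from, here is a back-of-the-envelope sketch in Python. The enrolment count and per-seat price below are purely hypothetical placeholders, not numbers from any tender:

    # Hypothetical state-wide licensing bill; both inputs are placeholder guesses.
    students = 750000          # assumed number of students covered by the rollout
    price_per_seat_aud = 50.0  # assumed discounted academic licence, per student

    total_aud = students * price_per_seat_aud
    print(f"Estimated licensing cost: ${total_aud:,.0f} AUD")   # ~$37,500,000

Even at a heavily discounted per-seat price, the total lands squarely in the tens of millions.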

I’ll admit that OpenOffice isn’t exactly the same thing (it’s better
in some ways, not as good in others), but it’s so similar that it
doesn’t really make a difference. Is it worth tens of millions of
dollars just to get the Real Thing? Does learning MS Office 2003 in
school really prepare you for using Office 2007 (with its completely
new interface) once you hit the workforce? Refer to my earlier
comments about teaching versus training.

Are they including graphics software for the art and design classes?
Are taxpayers going to have to pay for a copy of Adobe Creative Suite
for everyone? How about we save the hundreds of dollars per student
and use the GIMP and Inkscape instead? Examples such as these abound,
and there are plenty of other open source applications that simply
have no good parallel in the proprietary world.

I find it strange that the country’s most populous state would tie the
education of its children to a totally unproven operating system. A
smart purchaser – especially one purchasing at such a grand scale –
would wait until the software had been out for a while and had been
thoroughly tested by consumers around the world. Internal testing is
one thing, but you cannot beat real-world experience.

A point-zero release is sure to have rough edges, and it would have
been far wiser to wait for at least the first service pack, as most
organisations do. Can you imagine the fury that would have been
unleashed if the NSW Government had decided to kit out the state with
Windows Vista before its release? Sure, it sounded good before it came
out (“The wow starts now!”), but it lost its lustre very soon after
unveiling. Many people today still cling to Windows XP, and others
have switched to Linux and Mac OS X, in response to Vista’s abysmal
state.

The OLPC Project has already identified and addressed many of the
issues that may be faced. They have done this through developing a
combination of hardware, software, infrastructure, training,
procedures and learning material. It would be wise to learn from their
experiences.

The whole mini notebook revolution started with Linux. Beginning with
the OLPC XO laptop, Linux has proven to be a flexible and capable
operating system suitable for small devices. Its resistance to viruses
and other network nasties is legendary. The last thing I’d want is for
my child’s computer to get infected and start showing kiddie porn.
Anti-virus and anti-malware software are band-aid solutions. I’m not
going to build a castle on a swamp.

Commercially, devices like the Asus Eee PC could not have existed if
it were not for Linux. It forced Microsoft to actually compete for
once, by resurrecting Windows XP and slashing its price to a more
reasonable level.

The press release claims that this scheme is ‘unparalleled in
education globally’. There is considerable risk in being first off the
block. I’ve already explained the risks of using an unproven operating
system. It would be more prudent to learn from other large scale
rollouts in education.

Take the Republic of Macedonia, for example. Despite being one of the
poorest nations in Europe, they are the only nation to have one
computer per student. They achieved this through the use of Edubuntu,
a variant of the popular Ubuntu GNU/Linux operating system that is
specially tailored for education and learning. With that, they got a
vast library of open source educational software, which was all
translated into their native language.

Similar stories abound in places like Brazil, Russia, India and China.
Collectively known as the BRIC countries, they are considered to be
the up-and-coming nations to watch over the next few decades. Their
economies have been growing at breakneck rates, partly because they
have been clever in their investments. These nation states recognise
that education is the key to long-term economic success.

You might say that these countries are poor and that is why they are
choosing to use open source software. It is true that they don’t have
much money to throw around, but does New South Wales? Does
Australia? Where would you want your tax dollars spent?

Annoying by design

Microsoft claim that their UAC security prompts in Vista are designed to annoy you. I’m trying hard to take them seriously and to not laugh them off… but did they really think it’d work? OEMs and users have been disabling it in droves. Other users have probably taught their muscle memory to automatically click the Continue/Allow button without the slightest acknowledgement or thought. I think Microsoft need to get their act together when it comes to UIs. Some of their recent efforts have been frustratingly inconsistent.

A major reason given by Microsoft in their UAC scandal was to encourage developers to avoid privilege elevations as much as possible. A noble cause, especially in the security-inexperienced world of Windows development, albeit poorly executed. It reminds me of Apple’s perpetual opposition to the multi-button mouse. One stated reason is to enforce more ‘sane’, ‘usable’ and consistent UI design, and overall I think they’ve done well. They don’t ban multi-button mice (‘XY-PIDSes‘?), but given the simple one-button default there’s less need for them. I might prefer using a conventional 3-button scroll mouse, or even Apple’s own Mighty Mouse (a cleverly-disguised multi-button mouse), but I don’t lose any functionality by not using them.

It goes to show how much the graphical interface can be influenced by its physical input, something a lot of us don’t acknowledge in today’s world of >100-key QWERTY keyboards, multi-button mice and multi-finger touchpads. The real innovation in that space seems to be happening in the mobile and embedded sector, the iPhone being a good example. Players of games on both desktop computers and games consoles might notice the difference in ‘look and feel’ between games designed for keyboard/mouse versus control pad. Particularly for action and strategy games, ports from desktop to console (or vice versa) often aren’t successful. The software was designed with the assumption of particular input devices, and anything that deviates from this will also alter the feel of the game.

LotD: Your Windows licence fees paid to make this

What if… Windows went open source?

Sam Varghese over at iTWire asked me a couple of days ago for input on whether FOSS would be affected if the Windows source code was released. I started drafting a response, expecting to be finished quickly, but the ideas just kept flowing. The end result was a touch over a thousand words! I was expecting Sam to maybe quote a token sentence or two in his article. To my surprise, he basically reproduced (with a little paraphrasing) the whole thing! 🙂

The article is here. Skip to page 4 to start reading my contribution.

Here is my complete response to Sam. As you can see, very little was left out of the article.

The impact on FOSS would depend on the circumstances under which the code was released. Windows code is already available under Microsoft’s ‘shared source’ programme. Under that programme, you must sign a restrictive NDA to see the code, and after that your mind is forever tainted with Microsoft’s intellectual property. Write anything even remotely similar to the code you were permitted to see, and you leave yourself open to litigation. In other words, taking part in shared source is a sure-fire way to torpedo your career in software.

Microsoft have for years been experimenting to find a licence that they can convince people is ‘free enough’. Fortunately they haven’t succeeded. The danger, if they did, would be a shift in the balance of the open source world away from free software and towards a model that is more restrictive but still accepted. They have enough code to seriously upset that balance, ignoring for the moment its complexity (which also includes legacy cruft, bloat and so on) and hence the difficulty anyone would have in actually comprehending the code and participating in development.

Quality (or rather, lack of quality) aside, Microsoft’s code could be useful to see how formats and protocols are implemented. Linus Torvalds once wrote, “A ‘spec’ is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate. And I have seen _lots_ of total crap work that was based on specs. It’s _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality.” It’s one thing to have documentation (as the Samba team have recently managed to acquire), but there’s nothing to guarantee that there are no mistakes or deviations (intentional or otherwise) in the actual implementation. The WINE project is a classic example – consigned to faithfully reimplement all of Microsoft’s bugs, even if they run counter to documents you might find on MSDN.

There are many ‘open source’ licences. Too many, in fact. Many of them are incompatible with each other, and a ludicrous number of them are just the MPL with ‘Mozilla’ replaced by $company. What keeps open source strong are the licences that either have clout in their own right or can share code with those that do. The GPL is right at the centre of this, and we should be proud that the core of open source’s superiority is Free Software. Microsoft could try to release code that meets the Free Software Definition but is intentionally incompatible with the GPL, as Sun did with OpenSolaris and the CDDL. It remains to be seen whether OpenSolaris will be a success, and I think GPL incompatibility is certainly a factor there (for example, it can’t take drivers from Linux, so its hardware support remains poor). OpenOffice.org, on the other hand, is a prime example of a large proprietary project that was released under a GPL-compatible licence (the LGPL) and has gone on to be successful as a consequence. That success would not have happened if code could not be shared with other FOSS projects, if integration (direct linking, etc.) were impossible, and if mindshare had not been won (FOSS advocates writing code, reporting bugs, evangelising, etc.).

The big stinger here is patents. Sun have addressed this in the past with a strong patent covenant, and more recently they’ve been trying to do it properly by, for instance, relicensing OpenOffice.org under the LGPLv3 (hence granting its users that licence’s inherent patent protections). Would a mere ‘Covenant Not to Sue’ suffice for Microsoft? In the case of Microsoft’s recent release of documentation for the binary Office formats, their covenant only covers non-commercial derivations. Similarly, their Singularity Research Development Kit was released a few weeks ago under a ‘Non-Commercial Academic Use Only’ licence.

It is vital that companies have the same full rights to use the code as non-commercial groups. Otherwise, the code would be deemed non-Free (Free Software doesn’t permit such discrimination). The contributions made by commercial entities to the FOSS realm are immense and cannot be ignored. To deny them access would be a death sentence for your code. Microsoft would be stuck improving it on their own, and in that case what was the point of releasing it in the first place? Don’t malware writers have enough of an advantage?

Don’t trust what a single company says on its own. Novell was for a short while the darling of the FOSS world… then they made a deal with Microsoft. I’m glad that many of us were sceptical of Mono back before the Novell-MS deal, because I sure as hell ain’t touching it now. .NET might be an ECMA ‘standard’, but like OOXML it is a ‘standard’ controlled wholly by Microsoft. Will such a standard remain competitive and open? We’ve seen this in other standards debates, a good example being the development of WiFi. Companies jostled to get their own technologies into the official standard. The end result might indeed be open, but if it’s your technology in there, you already have the initiative over everyone else. If Windows is accepted as being open source, Microsoft will continue to dominate by virtue of controlling, and having unparalleled expertise in, the underlying platform.

To raise the most basic (and in this case, flawed) argument, free software is fantastic for all users no matter what. Free (not just ‘open’) Windows means that Free Software has finally achieved global domination – a Free World, if you will. By this argument, we should simply rejoice in our liberation from proprietary software and restrictive formats/protocols.

Of course, I have already demonstrated that this cornucopia likely will not eventuate even if Microsoft released the Windows source code as open source (even GPL). The software on top will remain proprietary (the GPL’s ‘viral’ nature aside). We’ll still have proprietary protocols and formats – and even digital restrictions management (DRM) – at the application level. In the grand scheme of things, the end consequence on FOSS of Windows source code being released might possibly be zilch.

LotD: Happy Pi Day everyone!

Dancing with the Devil in the pale moonlight

Last night, SLUG’s monthly meeting played host to four representatives from Microsoft:

  • Sarah Bond, Platform Strategy Manager. Sarah was present to talk about Microsoft’s current position with OOXML, especially with regards to the interoperability with Linux.

  • Amit Pawer, National Technology Specialist. He specialises in Windows Server technologies.

  • Alistair Speirs, Technology Specialist – Office. His background is in Java and .NET development.

  • Rosemary Stark, Product Manager, Windows Server and Infrastructure Products.

This unsurprisingly caused much consternation and controversy within the Australian FOSS community in the weeks leading up to the event, and I (being its organiser, and hence the target of much vitriol) ended up spending much time gauging and responding to the opinions and ideas raised.

We wanted this to be an open community-led Q&A session, and to their credit Microsoft were obliging. Admittedly, I would have saved much sanity and hours of work if people had posted to the wiki as asked, but having to transcribe from the mailing lists to the wiki allowed me to think more about the questions and how they should be worded and ordered. I need no reminder of Microsoft’s transgressions, but I made sure to keep IBM in mind (a company that was once considered anathema to software freedom but has now largely reformed) and take an optimistic approach.

Pia was of great help here (as always!). With so many questions and only an hour and a half in which to ask them, we decided to cull the non-constructive, accusative and just plain trolling questions. By the end, Pia had compiled a list that was fairly encompassing of the major issues concerning supporters of competition, technology and freedom.

As I arrived at the venue, I found that our guests had beaten me there and were actively helping to get the furniture into place. This allowed us to get better acquainted before the meeting. It was clear (and they openly admitted) that they had been following our open discussion process on mailing lists and the SLUG wiki. Really, they would have been daft not to do so 🙂

I handled the introduction, then turned the microphone over to our guests to introduce themselves. Sarah Bond launched into a presentation on OOXML, in the process answering several of the questions we had on the wiki. I left Pia to officiate most of the meeting, but I chimed in on occasion with both pointed and irreverent questions and comments that were not on the list.

We will be releasing the video of the meeting as soon as we are able, so I shan’t explain its contents too much. Some interesting points though:

  • In the list of rules for the meeting, I put ‘Asking “Why do you eat babies?” doesn’t help anyone.’ I initially felt bad when I met Sarah and realised that she is pregnant! She was a good sport about it though, and we all had a good laugh 🙂
  • In her presentation, Sarah mentioned that Microsoft will be releasing the specs to their binary Office file formats in mid-February (UPDATE: it’s confirmed!). I’m still not sure if I heard this one right (it’s a lot to swallow!), so if someone can confirm this I’d appreciate it. They made no bones about this being part of their drive to promote OOXML acceptance.
  • Not new, but news to us, is the fact that Windows 2003 has a DRM infrastructure which they call RMS, short for Rights Management Services. I did cheekily ask them if the name was deliberate, and their attempts to seriously and politely address the question were priceless 🙂

As with any other SLUG meeting, we went out for Chinese food afterwards. Three of our guests joined us (it’s a shame that Sarah couldn’t come, but being pregnant isn’t easy). Did we have dinner with the Devil? It certainly didn’t feel that way. Once we put our differences aside, we realised that we have an awful lot in common. We are all geeks at heart, and some of the MS people have dabbled, and continue to dabble, in Unix and FOSS technologies such as Python.

Were we successful? It depends on how you look at it. From my perspective of trying to build trust and understanding, without dwelling too much on (but certainly not ignoring) the past, I think so. Asking loaded questions and making our guests feel uncomfortable might have brought some short-term satisfaction to some of us, but would it have achieved anything? There were some inappropriate comments from the audience going in both directions (one of the loudest people actually seemed to be pro-Microsoft), but those people were easily outnumbered by the more sensible majority. My original fears of the crowd devolving into a senseless rabble dissipated rapidly, and I am very pleased and proud of our community for that.

I was initially disappointed by our turnout, but that feeling changed as the meeting progressed. Between it being January, linux.conf.au being just around the corner (which siphoned off a lot of our best and brightest) and the sensitive nature of the subject matter, the crowd was smaller than expected, but the meeting felt more conversational and manageable.

If you were at the meeting, please let me know what you thought of it by posting a comment.

Sarah will be speaking again at LUV on February 5. If you’re in Melbourne for linux.conf.au, it might be worth extending your trip by a few days to see it. I would also suggest that you take inspiration from the list of questions that we have compiled. If our video is out by then, watch it to avoid repeating the questions that we’ve already asked (or pose follow-up questions).

My warmest thanks go to:

  • the rest of the SLUG Committee (Lindsay Holmwood, Silvia Pfeiffer, Matt Moor, Ken Wilson, John Ferlito and James Dumay), for their support throughout
  • Pia Waugh
  • Anna, Matt and everyone who helped with setting up, packing up, recording and so on
  • our guests from Microsoft, for being such good sports
  • and of course, our community

P.S. Happy Invasion Day to Australians, and happy Anti-Invasion Day to Indians 🙂

Megahertz marketing

Stuart Corner at iTWire succumbs to our old nemesis, corporate marketing.

Intel have for years pushed the line that megahertz (MHz) equals speed. Apple used to call this the ‘Megahertz Myth’. Intel competitors AMD and Cyrix were for many years forced to resort to using a ‘Performance Rating’ system in order to compete. The fact is that computing performance is far more complicated than raw clock speed.

As the marketing droids at Intel gained political superiority within the company in the late 1990s, its architectures devolved into marketectures. The Pentium 4’s NetBurst is a classic example. Unleashed in 2000, in the wake of Intel’s loss to AMD in the race to release the first 1GHz chip, it was widely panned for being slower than similarly clocked Pentium 3s in some tests. While less efficient clock-for-clock, it was designed to ramp up in MHz and beat AMD through sheer marketing power.

In recent years, Intel have been hitting the limits of their own fallacy. Higher clock frequencies generate more heat and consume more power, and start pushing the physical limits of the silicon. You may have noticed the shift in Intel marketing from megahertz to composite metrics like ‘performance per watt’. What they are trying to indicate is that they are innovating in all parts of the CPU — not just the clock speed — to deliver greater overall performance. Through greater efficiencies, they are able to improve performance per clock cycle, whilst also addressing heat and power usage (which is especially important in portable devices and datacentres).

You should also notice Intel’s sudden emphasis in recent years on model numbers (e.g. ‘Core 2 Duo T7200’) rather than just MHz (e.g. ‘Pentium 4 3.0 GHz’). They are trying to shift the market away from the myth that they so effectively perpetuated over a series of decades. My laptop’s Core 2 Duo T7200 (2.0 GHz) is clearly faster than my Pentium 4 desktop running at the same clock speed. Reasons for this include (but are not limited to) the presence of two cores (each running at 2GHz), faster RAM and a much larger cache.
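
A toy model makes the point. Treat throughput as roughly cores × clock × instructions per cycle (IPC); the IPC figures below are illustrative guesses on my part, not measured values:

    # Toy comparison: same clock speed, very different real-world throughput.
    # The IPC (instructions per cycle) values are rough illustrative guesses.
    def throughput_gips(cores, clock_ghz, ipc):
        """Very rough throughput in billions of instructions per second."""
        return cores * clock_ghz * ipc

    pentium4 = throughput_gips(cores=1, clock_ghz=2.0, ipc=1.0)   # deep pipeline, low IPC
    core2duo = throughput_gips(cores=2, clock_ghz=2.0, ipc=2.0)   # two cores, higher IPC

    print(f"Pentium 4 (2.0 GHz):  ~{pentium4:.0f} GIPS")
    print(f"Core 2 Duo (2.0 GHz): ~{core2duo:.0f} GIPS")
    # Same megahertz, several times the headroom, before you even count the
    # faster RAM and the much larger cache.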

It is interesting to note that the design of the current Core line of CPUs (and its Pentium M predecessor) owes far more to the Pentium 3 than to the marketing-driven Pentium 4.

Now, Stuart makes the mistake of presuming that Intel’s CPUs are not getting any faster since they have not increased in megahertz. Instead of berating Intel for finally being honest, why can’t we praise them? Addressing real performance (not some ‘MHz’ deception), including the previously-ignored factors of power consumption and heat generation, is of benefit to us all.

If there is anyone to criticise, it is the hardware vendors. They have successfully countered Intel’s message by continuing to market their systems using MHz as a key selling point. The general public (and evidently most of the press) are left to believe that computers aren’t getting any faster. Given the convenience of a single number as an indicator of performance, who can blame them?

When end-user experience is taken into account, software developers fall under the microscope. Windows Vista is the obvious poster child — I’ve seen dual-core 2GB systems that once flew with GNU/Linux and (even) Windows XP, now crippled to the speed of continental drift after being subjected to the Vista torture.

Update: The article’s content seems to have been edited to remove any criticism of Intel, but the sceptical title (‘Intel’s new chips extend Moore’s Law, or do they?’) remains.

Update 2: Now that I have explained that megahertz on its own is only of minor consequence to CPU performance (let alone overall system performance), we can see that it is often not even a conclusive way to compare different CPUs. A Pentium 4 can be slower than a similarly clocked Pentium 3. This inability to compare becomes even more stark when scrutinising completely different processor families. Apple had a point when they trumpeted the ‘Megahertz Myth’ back when they were using PPC CPUs. Clock-for-clock, a PPC CPU of that era was faster than the corresponding (by MHz) Intel chip, often by a considerable margin. Apple countered Intel with benchmarks demonstrating the speed of their CPUs versus Intel’s. Benchmark quality aside, their intent was to show that a seemingly ‘slower’ PPC chip could outperform its Intel competition. It is a shame that the promotion didn’t convince more of the general populace.

LotD: Real Amber vs Photoshopped Amber

Will Linux succeed on the desktop?

iTnews rehashes the old refrain of ‘Why Linux won’t succeed on the desktop’ articles.

These sorts of articles come out all the time, and they are always written by people who have not used Linux much and therefore don’t understand how it works and how it is developed. The article is not without merit, but it does display many misunderstandings. Most telling are the omissions — the real strengths of Linux are ignored and the deficiencies of Windows overlooked. It gives undue weight to proprietary software development and totally forgets about the free alternatives that are available for Linux. And by ‘free’, I mean the proper ‘free as in freedom’ definition, not the tired old ‘freeware’ misconception the author falls into. As for the antique ‘too many distros’ argument, people only need to use one, and some quick reading would easily narrow the choices down to a small handful, if not one. I personally find the different ‘distros’ of Windows (including WINCE and so on) to be more confusing.

Most Linux people are very well versed in Windows, so they generally know whereof they speak. My experience is that many Windows people expect everything to work exactly like Windows, and they complain whenever something is even slightly different, even if it is better. For some reason, they accept crashing, viruses and poor security as a fact of life, and so aren’t attracted to Linux. In fact, it goes further than that: to most people, Windows is computing. Anything else is just heresy.

These critical articles about Linux aren’t new, but they should not be ignored. Linux has many rough edges to smooth out, but then again so does Windows. At the end of the day, it often comes down to people being set in their ways and being afraid of the unfamiliar.

I’ve seen this happen even with Microsoft products: Windows Live Messenger, Internet Explorer 7, Office 2007 (Word, Excel, PowerPoint, but mysteriously not consistently in Outlook) and Windows Vista have been widely criticised for adopting odd and inconsistent interfaces. The first three lack a basic menu bar (each using its own weird alternative), and Vista doesn’t have a Start button (it’s a circular orb with a Windows logo). It’s a tech support nightmare. Yet despite the resistance, people force themselves to adapt until they eventually accept the changes. Some even grow to defend them. What possessed people to behave in this way? Is it the marketing, or even the cult of personality that Bill Gates has managed to build, as the article proclaims? We are now in a position where it is easier for an MS Office 2003 user to move to OpenOffice.org than to Office 2007. Why aren’t we seeing this happen more often?

Never underestimate the power of inertia and marketing.

The fact that Linux can prove to be such a great system despite its minuscule desktop market share and lack of resources compared to the proprietary world (which is much bigger than just Microsoft) shows the strength of the free and open source software (FOSS) model. One needs only to look at Mac OS X to see a desktop that is almost unquestionably superior to Windows in every way, thanks in part to its extensive use of FOSS.

Another thing to remember is that the desktop computing market is but a tiny fraction of the overall information and communications technology sector. Linux is quite prevalent, and even dominant, almost everywhere else [PDF]. In most of these markets, Microsoft isn’t represented at all.

By the way, the ‘year of the Linux desktop’ thing is not taken seriously by more established Linux users. The phrase is used mainly by journalists looking for attention, or by more recent Linux users. For everyone else, it’s become more of a running joke, much like Linus Torvalds’ faux ambition of ‘world domination’.

 

Update: Yet more reasons why Linux is supposedly unsuitable for the desktop.

Update 2:  Here’s another rebuttal to these articles. 

 

LotD:  I failed basic chemistry 

Microsoft’s “Commitment” to Apple

In January, during Steve Jobs’ Macworld keynote speech announcing the new Intel Macintoshes, Microsoft made a "commitment" to continue to develop for and support the Apple Macintosh platform. In true Microsoft style, they conveniently didn’t explain how deeply that commitment went. Now we know.

Microsoft have decided to kill their Virtual PC product, removing a virtualisation option from Mac OS at the same time that similar functionality is being folded into Windows Vista. They are aware that virtualisation is quickly becoming a killer feature, and they’ll be damned if they allow anyone else to have it. Fortunately (and probably preferably), VMware will be made available for the Apple Intel platform. There’s also Parallels, but they still do not have any server-oriented products.

In addition, Microsoft will be removing Visual Basic support from Office for Mac. If you can’t stop supporting something, why not cripple it instead? Many businesses are dependent upon VB scripting, or exchange files with people/organisations that make use of it, so this is a major blow indeed for Apple. Such a deficiency will be subtle: people will purchase Office for Mac expecting it to work with files created in its Windows counterpart (or vice versa), and will be sorely disappointed. This has already been occurring for a number of years, but the problem is becoming increasingly acute.

Ultimately, the best solution is to remove our dependency on proprietary formats and languages, for they are the root cause of this mess. OpenOffice.org already does what most people require, and in some cases it does it better. It even has growing support for Visual Basic for Applications. OpenOffice is truly looking like a better Office than MS Office. The OpenOffice file-filter developers work hard to support every MS Office format they can find (people have counted over 20 different versions of the Word .DOC format alone), which is more than I can say for the MS Office programmers, who are notorious for breaking compatibility with older versions. Using the OpenDocument formats (which are now an ISO standard) assures that your data will be accessible on many different platforms for many years to come.

The main stumbling block to OpenOffice adoption on Mac OS is the Windows-like interface and its reliance on X11 for display. There is work being done on a native Aqua version, but in the meanwhile there is NeoOffice.

Selling ice to an Eskimo

Steve "Reality Distortion Field" Jobs has delivered his keynote address to Apple’s World Wide Developer Conference (WWDC). It’s amazing what he would have us believe. Apple has apparently invented virtual desktops. What does Microsoft have to say about it, given they applied for a patent on the technology in 2004 (complete with images ripped out of GNOME and KDE!)? Let’s just forget that they have existed since at least 1985, shall we?

That aside, I am heartened to see that OpenDarwin did not close their doors a couple of weeks ago in vain. Apple themselves are sponsoring Mac OS Forge, and in the process they have made readily available the source code for Bonjour, Collaboration (Darwin Calendar Server), WebKit (which is really just KHTML on steroids anyway), Launchd and even their XNU kernel (minus some essential proprietary parts). They have even licensed some of these projects under the Apache Licence 2.0. I pray that this signifies the start of a new era of collaboration between Apple and the FLOSS community, and not just a cheap attempt to contribute the minimum amount required to keep the bulk of the community on-side.

So with Tiger being favourably compared to the forever-delayed Windows Vista, what does that make Leopard? Mac OS just gets better and better, while the Windows debacle is far from over. With screw-ups such as this, it’s no wonder that Microsoft feels the need to prevent/destroy all competition.

 

Update (2006-08-13): Here is a much more sober evaluation of the so-called ‘copying’ going on between Mac OS and Windows. It puts everything into better perspective, showing that some of their killer features in fact originated elsewhere. It reminds me of a funny quotation: "Mac OS, Windows, BeOS: they’re all just Xerox copies."

As much as Paul Thurrott likes to claim that Spotlight is a copy of Windows Search, Apple had the same functionality in the mid-1990s with its Copland Project.

Mockups & KDE4

KDE4 development is underway, and users and developers are having their say on how it should look. One thing that irks me is when someone posts a mockup of some ‘new’ idea when, in fact, that idea has simply been lifted from somewhere else. I have no problem with derivation or inspiration from elsewhere (that’s how software evolves, after all), but for ghod’s sake please don’t pass off someone else’s idea as your own.

Take for example this mockup. Look at the file browser. Can you say Windows Vista? Some person, who I pray is not a Konqueror developer, was so enamoured with it that he created an interactive version.

I’m not saying that it is unattractive, but I don’t understand why this sort of blind copying takes place. I’ll admit that graphic design isn’t one of FLOSS’s strong points, but with that said we do have some truly innovative and beautiful designs. Amarok comes to mind.