Category Archives: Hardware

A Complete Literacy Experience For Young Children

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children from as young an age as possible, but using a computer proficiently to input information requires a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icon-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, one fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. Neither represents how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and a completely different-looking letter pops up on the screen! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market of its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard came from a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the arrows opposite to the ones they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

“Linux” support

Carla Schroder from Linux Today repeats a question that I’ve heard asked many times:

“Here we go with another round of Linux Today reader comments. Let’s start off with an issue that has been on my mind: Vendors who boast of their Linux-based devices, but they only support Windows and Mac clients. It’s a step in the right direction, but would supporting Linux clients be so difficult?”

There are two major mistakes that are often made in considering this question:

  • that all “Linux” systems are the same
  • that if you use Linux in one place, it naturally follows that you should support other “Linux” systems

We need to remember that the only thing most of these devices share with a desktop “Linux” system (or even with each other) is the kernel (which is, strictly speaking, all that “Linux” refers to). The userland is different, and often includes a lot of the vendor’s own proprietary software. Even the hardware (such as the CPU architecture) is often wildly different. I think people have grown to assume it’s all the same because we call it all “Linux”, but it’s not.
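The kernel/userland distinction is easy to demonstrate from Python’s standard library. A small sketch; the exact values printed are entirely system-dependent:

```python
import platform

# The kernel name -- this is the part that is literally "Linux".
print(platform.system())     # e.g. 'Linux'

# The CPU architecture -- an embedded device and a desktop rarely match here.
print(platform.machine())    # e.g. 'x86_64' on a PC, 'aarch64' on many devices

# The userland C library -- glibc on a typical desktop distribution, but
# embedded devices often ship musl, uClibc or Bionic instead, in which case
# this may report an empty version.
print(platform.libc_ver())   # e.g. ('glibc', '2.31')
```

Two systems can agree on the first line and differ on everything else, which is exactly why “Linux support” alone says so little.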

Because of this practical conundrum (as totally distinct from any philosophical or other arguments), I have some sympathy for those who prefer to call the system we use on our desktop and server systems “GNU/Linux”.

Argue all you want about its accuracy, but the fact is that it is far more accurate than merely using the kernel name as nomenclature for the entire OS. It specifies a userland that, together with the kernel, comprises a workable operating system. Come up with a better name if that makes you feel more comfortable.

This opens up a whole can of worms. If I’m an applications or device developer and I announce “Linux support”, what do I mean? Will it work on my mobile phone? On my television? Probably not. Chances are it refers to particular versions of particular distributions for a particular architecture.

If I produce a device that is based on “Linux”, what relation does that have to other “Linux” systems? None. It’s not just devices: another major culprit is Web services. Linux runs most of the Internet, but many online services are not compatible with desktop Linux systems.

The reasons for this are simple:

  • correlation does not imply causation
  • the small market size of desktop Linux users

The first point relates to what I said earlier, that there’s no connection between the use of Linux on servers and devices versus its use on desktop computers. The usefulness of Linux on servers and devices is firmly recognised in many sectors.

The same cannot be said for desktop systems, despite what we may wish. If it costs a developer more to support a tiny market, they are probably not going to do it. That’s just business. Companies that choose to support desktop Linux often do so for other reasons, such as to foster a developer/fan base or tap into a very specific set of users.

So everyone, I share your frustrations that many so-called “Linux”-based devices/services don’t interface with my computers, but I keep in mind the points made above.

LotD: NSW Police: Don’t use Windows for internet banking (iTnews)

Huawei e169 3G modem on Ubuntu 8.04

I recently bought myself a Huawei e169 3G modem as part of a service with Exetel (based on the Optus network). There are a few guides online on how to get it to work with GNU/Linux, but either they didn’t work as advertised or I wasn’t happy with the approach they took. Ubuntu 8.10 is due in three weeks, but since I usually wait at least a month for a new release to settle, I was after a solution that would tide me over for at least the next couple of months. It had to be simple and not too messy.

Here’s the approach I took:

  1. Install NetworkManager 0.7 from the PPA. You might need to reboot afterwards.
  2. Install usb_modeswitch. I got lazy and installed a DEB from here. Can someone confirm that this is included by default in Ubuntu 8.10?
  3. Right-click the NetworkManager panel applet and select Edit Connections.
  4. Select the Mobile Broadband tab and click Add.
  5. Follow the wizard/druid: select your country and upstream provider (I chose Optus 3G).
  6. Once the druid is complete, return to the Mobile Broadband tab, select your newly-created connection, and click Edit.
  7. The only setting I had to enter was my APN (exetel1). You may also wish to change the Type to Prefer 3G (3 customers can save $$$ by selecting 3G — thanks Telstra! :p ).
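For the command-line inclined, the same connection can be expressed as a wvdial configuration instead. A rough sketch only: the APN (exetel1) comes from step 7 above, but the device node and init strings are assumptions that may differ on your system:

```ini
[Dialer Defaults]
; Serial device created once usb_modeswitch flips the stick into modem mode
Modem = /dev/ttyUSB0
Baud = 460800
Init1 = ATZ
; Define the PDP context with the provider's APN
Init2 = AT+CGDCONT=1,"IP","exetel1"
; Standard GPRS/3G dial string
Phone = *99#
Username = dummy
Password = dummy
Stupid Mode = 1
```

wvdial reads this from /etc/wvdial.conf; running wvdial then establishes the PPP session without NetworkManager.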

Now when you plug in your 3G modem, two things will happen (after a few seconds). Firstly, the ISO9660 filesystem on the USB stick will be automatically mounted and displayed by Nautilus (you might want to turn this off in the Nautilus preferences if it gets too annoying). Secondly, you should see an option to use your modem when you click on the NetworkManager panel applet. Once connected, you can disconnect in the same way.

There we go! Now all I need to do is plug in my modem and connect/disconnect from the NetworkManager panel applet. My Eee PC 901 is truly mobile now 🙂

LotD: A Sysadmin’s Unixersal Translator (ROSETTA STONE)

Annoying by design

Microsoft claim that their UAC security prompts in Vista are designed to annoy you. I’m trying hard to take them seriously and to not laugh them off… but did they really think it’d work? OEMs and users have been disabling it in droves. Other users have probably taught their muscle memory to automatically click the Continue/Allow button without the slightest acknowledgement or thought. I think Microsoft need to get their act together when it comes to UIs. Some of their recent efforts have been frustratingly inconsistent.

A major reason given by Microsoft in their UAC scandal was to encourage developers to avoid privilege elevations as much as possible. A noble cause, especially in the security-inexperienced world of Windows development, albeit poorly executed. It reminds me of Apple’s perpetual opposition to the multi-button mouse. One stated reason is to enforce more ‘sane’, ‘usable’ and consistent UI design, and overall I think they’ve done well. They don’t ban multi-button mice (‘XY-PIDSes’?), but given the simple one-button default there’s less need for them. I might prefer using a conventional 3-button scroll mouse, or even Apple’s own Mighty Mouse (a cleverly-disguised multi-button mouse), but I don’t lose any functionality by not using them.

It goes to show how much the graphical interface can be influenced by its physical input, something a lot of us don’t acknowledge in today’s world of >100-key QWERTY keyboards, multi-button mice and multi-finger touchpads. The real innovation in that space seems to be happening in the mobile and embedded sector, the iPhone being a good example. Players of games on both desktop computers and games consoles might notice the difference in ‘look and feel’ between games designed for keyboard/mouse versus control pad. Particularly for action and strategy games, ports from desktop to console (or vice versa) often aren’t successful. The software was designed with the assumption of particular input devices, and anything that deviates from this will also alter the feel of the game.

LotD: Your Windows licence fees paid to make this

Megahertz marketing

Stuart Corner at iTWire succumbs to our old nemesis, corporate marketing.

Intel have for years pushed the line that megahertz (MHz) equals speed. Apple used to call this the ‘Megahertz Myth’. Intel competitors AMD and Cyrix were for many years forced to resort to using a ‘Performance Rating’ system in order to compete. The fact is that computing performance is far more complicated than raw clock speed.

As the marketing droids at Intel gained political superiority within the company in the late 1990s, its architectures devolved into marketectures. The Pentium 4’s NetBurst is a classic example. Unleashed in 2000, in the wake of Intel’s loss to AMD in the race to release the first 1GHz chip, it was widely panned for being slower than similarly-clocked Pentium 3s in some tests. While less efficient clock-for-clock, it was designed to ramp up in MHz so that Intel could beat AMD on sheer marketing power.

In recent years, Intel have been hitting the limits of their own fallacy. Higher clock frequencies generate more heat and consume more power, and start pushing the physical limits of the silicon. You may have noticed the shift in Intel marketing from megahertz to composite metrics like ‘performance per watt’. What they are trying to indicate is that they are innovating in all parts of the CPU — not just the clock speed — to deliver greater overall performance. Through greater efficiencies, they are able to improve performance per clock cycle, whilst also addressing heat and power usage (which is especially important in portable devices and datacentres).

You should also notice Intel’s sudden emphasis in recent years on model numbers (e.g. ‘Core 2 Duo T7200’) rather than just MHz (e.g. ‘Pentium 4 3.0 GHz’). They are trying to shift the market away from the myth that they so effectively perpetuated for years. My laptop’s Core 2 Duo T7200 (2.0 GHz) is clearly faster than my Pentium 4 desktop running at the same clock speed. Reasons for this include (but are not limited to) the presence of two cores (each running at 2GHz), faster RAM and a much larger cache.
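A back-of-the-envelope model makes the point: instructions retired per second depend on core count and instructions per clock (IPC), not clock frequency alone. The IPC figures below are purely illustrative assumptions, not measurements of either chip:

```python
def throughput(cores, clock_ghz, ipc):
    """First-order estimate of billions of instructions retired per second.
    Ignores memory latency, cache sizes and how well a workload parallelises."""
    return cores * clock_ghz * ipc

# Illustrative (made-up) IPC values: NetBurst traded IPC for clock headroom.
pentium4 = throughput(cores=1, clock_ghz=2.0, ipc=1.0)   # -> 2.0
core2duo = throughput(cores=2, clock_ghz=2.0, ipc=1.5)   # -> 6.0
print(core2duo / pentium4)                               # -> 3.0
```

Same clock speed, roughly triple the throughput under these toy numbers, which is why a single MHz figure tells you so little.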

It is interesting to note that the design of the current Core line of CPUs (and its Pentium M predecessor) owes far more to the Pentium 3 than to the marketing-driven Pentium 4.

Now, Stuart makes the mistake of presuming that Intel’s CPUs are not getting any faster since they have not increased in megahertz. Instead of berating Intel for finally being honest, why can’t we praise them? Addressing real performance (not some ‘MHz’ deception), including the previously-ignored factors of power consumption and heat generation, is of benefit to us all.

If there is anyone to criticise, it is the hardware vendors. They have successfully countered Intel’s message by continuing to market their systems using MHz as a key selling point. The general public (and evidently most of the press) are left to believe that computers aren’t getting any faster. Given the convenience of a single number as an indicator of performance, who can blame them?

When end-user experience is taken into account, software developers fall under the microscope. Windows Vista is the obvious poster child — I’ve seen dual-core 2GB systems that once flew with GNU/Linux and (even) Windows XP, now crippled to the speed of continental drift after being subjected to the Vista torture.

Update: The article’s content seems to have been edited to remove any criticism of Intel, but the sceptical title (‘Intel’s new chips extend Moore’s Law, or do they?’) remains.

Update 2: Now that I have explained that megahertz on its own is only of minor consequence to CPU performance (let alone overall system performance), we can see that it is often not even a conclusive way to compare different CPUs. A Pentium 4 can be slower than a similarly clocked Pentium 3. This inability to compare becomes even more stark when scrutinising completely different processor families. Apple had a point when they trumpeted the ‘Megahertz Myth’ back when they were using PPC CPUs. Clock-for-clock, a PPC CPU of that era was faster than the corresponding (by MHz) Intel chip, often by a considerable margin. Apple countered Intel with benchmarks demonstrating the speed of their CPU versus Intel’s. Benchmark quality aside, their intent was to show that a seemingly ‘slower’ PPC chip could outperform its Intel competition. It is a shame that the promotion didn’t convince more of the general populace.

LotD: Real Amber vs Photoshopped Amber

One Laptop Per Child (AKA: The January Chronicles, Part II)

There was enough to be excited about at LCA to give you heart palpitations. If I was forced to single out one thing, it would have to be the One Laptop Per Child Project (OLPC).

One of my primary interests has been the interactions between people and technology, and I have long felt that there has been scant attention paid to how this operates in developing countries. Sustainable development is a vital goal, and an important part of this ongoing process is the use of appropriate technology. This can range from bare hands and rudimentary tools to complex computational and engineering infrastructure. The key is to select what is most applicable in a given situation.

So-called ‘developed’ regions of the world might be able to accommodate expensive, disposable and inefficient technologies and methodologies. This has guided policy, R&D, production, distribution and use within this part of the world. The playing field is entirely different in developing regions, and so solutions need to be crafted with their needs in mind.

You can’t expect to successfully shoehorn a solution designed for Sydney onto Mogadishu, or even onto Maningrida. To date, however, most approaches try to do just that. This only works to an extent, if at all. In many cases it would be better to rethink things from the ground-up to come up with something more appropriate. This doesn’t mean that you’re throwing out the baby with the bathwater. Successful designs often base themselves upon existing policies, technologies and ideas, and then proceed to modify or redesign parts to fit their goals. The OLPC is a prime example of such an endeavour.

Whether it is successful or not is another matter. That remains up to the governments which purchase and distribute them, and the communities which accept them. The greatest challenge of the OLPC isn’t technical, it’s socio-political.

My arms hurt

This is one of those fables with the moral "don’t be greedy".

A couple of weeks ago at college I spied a trolley loaded with books with the label "Take me!" The library was giving away old books to make space for new ones. There were plenty of interesting titles, ranging from basic PC repair to *NIX to programming. I collected a massive pile of books (if I stacked them all on top of each other, I think they would reach my waist).

I travel by public transport (I don’t own a car), so there was no way in hell I could take them all home at once. What’s more, I was working that evening and I had to take a bus to get there. I decided to take about half of them and I made arrangements with the instructors to leave the rest so I could take them the next day. It was a pain carting those books to work and back (especially since I normally return home around 10:30pm), but I managed it. The next day I took the rest directly home (thankfully I wasn’t working that day). No dramas.

Then on Monday I saw something else at college: free computers! They weren’t very good (AMD K6200 with 32MB RAM), but hey, they were free! There were only five of them and I didn’t want to miss out, so I decided to take two home at once. These were chunky: old-style AT desktop cases made from thick steel. Carrying them home was a nightmare. I had to take frequent breaks so that my arms could recover. I also had a heavy backpack.

I managed to get home with myself and the computers in one (or rather three) piece(s). My arms were almost numb. If I tried to raise my left hand to my face it would involuntarily shake. I could not straighten my left arm until two days ago. I can still feel a bit of muscle stretching when I do.

I still don’t know what I’m going to do with those computers. I don’t have any keyboards with AT connectors (I only have PS/2). I’ll have to give it some thought.

It’s funny what some people chuck out. A few months ago my mum found a perfectly working 63cm television set. Yesterday I was at my cousin’s house and I saw a computer monitor sitting on the side of the road. It was an old HP Pavilion 15in screen, and it was slightly damp since it had rained earlier in the day. I didn’t expect it to work, but I decided to pick it up anyway. Not knowing its frequencies, I decided to hook it up, boot with the PCLinuxOS Preview 8 liveCD and hope that it would be configured automatically. Lo and behold, it was! KDE looked great running at 800×600 on it. I’ve been wanting to set my mum (who is essentially computer-illiterate) up with a computer, but I didn’t have a monitor. This one will do fine.

‘X-Men 2’, ‘The Matrix Reloaded’ and assorted sci-fi

I saw X-Men 2 a few weeks ago. I’ve always been a fan of the comics, so I am rather sensitive to any ‘changes’ that are made just for the movie. However, I do realise that it is near-impossible to squeeze the entire X-Men universe into a 2-hour movie. I must conclude that they did an excellent job here. As in the first movie, the ‘changes’ were done very well.

There were a few little easter eggs hidden in there as well. In the first movie, you get a quick glimpse of Jubilee (the comic book character whom Rogue replaced in the movie), and just like in Spider-Man (another fantastic movie) there is a short cameo by Stan Lee (This man is a GOD! If you don’t know who he is, stop reading right now for you have offended me.). In the second movie you hear Jubilee being called by name (by Storm), and on a television set you see a man with the caption "Dr Henry McCoy" beneath his face. The man appears as a normal (non-mutant) human being, but this man later becomes Beast. I think there were a few other easter eggs, but I don’t remember them.

Speaking of The X-Men, I found a great fan-comic, The Uncanny X-Sprites. Quite funny. I also stumbled across Wolverine’s real name. It’s not Logan, it’s James Howlett. It’s all explained in Marvel’s Origin series, which was released last year. There was also a Paradise X series which contradicts some of the fundamental aspects of Origin, but I wouldn’t take it seriously. Both of these (among others) are explained in vivid detail (beautifully illustrated, too!) at the Lost Soul Wolverine site. I spent hours reading all the stuff there; I was so riveted.

Last Sunday I saw The Matrix Reloaded. I am not going to compare it to X-Men 2, but I will say that this is another excellent film. The CGI was amazing. There were a few little flaws, but with all the action going on they were easy to overlook. I love Hong Kong martial arts movies (Jackie Chan and Jet Li are DEITIES!), and this movie satisfied my desire for some well-choreographed fight scenes. On the negative side, there is less continuity between the plot and the fights when compared to the original movie. Also, some parts were slow and unnecessary. I don’t want to see a bunch of Zionists (I assume that’s what the inhabitants of Zion call themselves?) dancing, and I don’t want to see Neo making love to Trinity. There’s enough pr0n on the Internet, thank-you-very-much.

Like the first movie (and the third, which arrives in November), The Matrix Reloaded was mostly filmed in my home town of Sydney. It’s weird to watch scenes from a movie and think, "hey, I was at that place only yesterday!" It also makes me wonder if I really am in the Matrix. Kooky.

The absolute coolest thing, however, was Trinity’s cracking of the electricity grid. She uses Nmap to scan for open ports and finds that port 22 is open. Port 22 is typically used by SSH, and sure enough Trinity uses a known SSH v1 exploit to gain access to the server! As her root password, she uses Z1ON0101. Not only does this make her 1337, it is also another easter egg – 0101 is the number 5 in binary, and if you’ve seen the movie (spoiler alert) you know that Zion in the movie is in its fifth incarnation. More on this at The Register and Slashdot, and there’s a nice screenshot at the home of Nmap.
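The binary easter egg is easy to verify: the password used in the film is Z1ON0101, and its digit suffix 0101 read as base 2 is 5, matching Zion’s fifth incarnation:

```python
# 0101 in base 2: 0*8 + 1*4 + 0*2 + 1*1 = 5
print(int('0101', 2))   # -> 5
```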

Of course, what’s a movie these days without merchandising? Samsung has a ‘limited edition’ version of one of the phones used in the movie. To me it looks like a forgotten prop from Star Trek: The Original Series. It looks hideous, the ergonomics are all wrong, and the screen is too small to do anything useful. That won’t stop Samsung from charging a premium for it, or people from buying it. I feel sorry for those people. They obviously have some sort of psychological problem that has them convinced that they will only have friends if they have the latest mobile telephone. If it’s movie-themed and a ‘limited edition’, even better. They may even purchase a black trenchcoat to go with it. That will alleviate the symptoms of their inferiority complex for a little while, after which they will feel compelled to jump onto the next fad. Over-consumerism should be treated as a mental illness.

“Summer lovin’, had me a blast…”

I love Grease, don’t you? There’s some logic in the title. It is summer here in Australia, and as many may know Australian summers are typically very hot and dry. A lot has happened over the past few weeks and I’ve been too lazy to type it out here. I’ll split things into several entries for the sake of readability.

Back in July, I bought myself a nice new Athlon 2100+ system. This machine is lightyears ahead of my old Pentium II 350, and now I can do many things that weren’t practical on the old system. When I got the machine, I put it through a rigorous barrage of tests, including memtest86, heavy compiling and cpuburn. It passed with flying colours.

However, in the past couple of months, I’ve been having problems with heat. When I ran the tests, it was the middle of winter. Now it is summer, and room temperatures can easily hit 35 degrees or more. Using lm_sensors, I found that my CPU was about 70 degrees or more on a hot day – and that’s just at idle. If I tried compiling something or playing a game like Quake 3 or Unreal Tournament, it would easily go past 85 degrees. This triggers the overheat protection on my ASUS A7V333 motherboard, which shuts the computer down (an Athlon can only take 90 degrees before frying itself). I’ve been saved many times by that – had my motherboard not had that feature (most boards don’t) I would’ve lost my CPU.

I had to use my system very carefully to prevent shutdown. This is obviously unacceptable, but I had to wait until mid-December before I could do anything about it (I was busy with other things). The heatsink on my CPU was standard AMD-issue – nothing special. I decided to purchase something better, finally settling on the Thermaltake Volcano 9. I made an order on an online shopping site and much to my surprise it was delivered only three hours later! The owner of the store lives only a block or two away from me, and he decided to deliver it himself on his way home. Now that’s what I call service!

I don’t trust myself with expensive equipment (I’ll mess around with older/cheaper stuff, though), so I decided to get the heatsink installed by the guy I bought my computer from. He’s a nice guy, and I’ve been dealing with him for a number of years, so I know he’s good. I opened the heatsink box for the first time. This thing is a monster! It was so big that we couldn’t install it without taking the motherboard out. It sounds like a helicopter, but over time I’ve gotten used to the noise. What’s important is that I can use my system at full throttle without fear of burning it out.

glibc blues

I haven’t posted any articles on PCLinuxOnline over the past three weeks because I b0rked my Gentoo system. I upgraded from glibc 2.2.5 to 2.3.1 and since then I haven’t been able to run certain apps without wrecking everything else. I’ve detailed my problem here and here. If anyone can help I’d much appreciate it.

At the moment I can run most apps, but things screw up when I load any part of KDE (including Konqueror) or Evolution. GTK+ (1 and 2) apps (apart from Evolution) work fine.

Update [2003-03-07]: The problem is with my Nvidia drivers:

Hi! I’m the guy who started this thread. I finally managed to fix things by turning off Grsecurity in my kernel. However, a very similar (but different) problem emerged a few months later. It occurred around the time I upgraded glibc to 2.3.1, so I initially thought glibc was to blame. After lots of experimenting with kernel configs, I discovered that I could have a stable system using Nvidia drivers if I turned highmem off, sacrificing just over 100MB of RAM (I have 1GB total).

I then came across cigaraficionado’s bug report and updated nvidia-kernel ebuild. I compiled a new kernel, this time turning highmem back on, and installed the new ebuild. The updated ebuild had no effect — using the Nvidia driver made my system unstable like before.

My hardware seems fine. Memtest86 detects no errors in my RAM (2x Corsair XMS 512MB DDR333 SDRAM). My GeForce 3 Ti200 card works perfectly in Windows and it worked perfectly in Gentoo until December, around the time I upgraded to glibc 2.3.1. I can’t figure out where the true problem is, but I strongly suspect it lies with nvidia-kernel.

That’s what you get for relying on binary-only kernel modules 🙁