A Venn diagram can be a great way to explain a business concept. This is generally not difficult to create in modern presentation software. I often use Google Slides for its collaboration abilities.
Where it becomes difficult is when you want to add a unique colour/pattern to an intersection, where the circles overlap. Generally you will either get one circle overlapping another, or if you set some transparency then the intersection will become a blend of the colours of the circles.
I could not work out how to do this in Google Slides, so on this occasion I cheated and did it in Microsoft PowerPoint instead. I then imported the resulting slide into Slides.
This worked for me in PowerPoint for Mac 2016. The process is probably the same on Windows.
Firstly, create a SmartArt Venn Diagram
Insert > SmartArt > Relationship > Basic Venn
Separate the Venn circles
SmartArt Design > Convert > Convert to Shapes
Shape Format > Group Objects > Ungroup
Split out the intersections
Shape Format > Merge Shapes > Fragment
From there, you can select the intersection as an independent shape. You can treat each piece separately. Try giving them different colours or even moving them apart.
This can be a simple but impactful way to get your point across.
Over the past few years it has seemed like LinkedIn were positioning themselves to take over your professional address book. Through CRM-like features, users could see a summary of their recent communications with each connection, add their own notes, and categorise their connections with tags. It appeared to be a reasonable strategy for the company, and many users took the opportunity to store valuable business information directly against their connections.
Then at the start of 2017 LinkedIn decided to progressively foist a new user experience upon its users, and features like these disappeared overnight in favour of a more ‘modern’ interface. People who had grown to depend on this integration were in for a rude shock — all of a sudden it was missing. Did LinkedIn delete the information? No prior warning was given, and I still haven’t seen any acknowledgement or explanation (let alone an apology) from LinkedIn/Microsoft for the inconvenience and damage caused.
If anything, this reveals the risks in entrusting your career/business to a proprietary cloud service. Particularly with free/freemium (as in cost) services, the vendor is more likely to change things on a whim or move that functionality to a paid tier.
Fortunately there’s a way to export all of your data from LinkedIn. This is what we’ll use to get back your tags and notes. These instructions are relevant for the new interface. Go to your account settings and in the first section (“Basics”) you should see an option called “Getting an archive of your data”.
Click on Request Archive and you’ll receive an e-mail when it’s available for download. Extract the resulting zip file and look for a file called Contacts.csv. You can open it in a text editor, or better yet a spreadsheet like LibreOffice Calc or Excel.
In my copy, my notes and tags were in columns D and E respectively. If you have many, it may be a lot of work to manually integrate them back into your address book. I’d love suggestions on how to automate this. Since I use Gmail, I’m currently looking into Google’s address book import/export format, which is CSV-based.
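As a starting point for automating this, here is a rough Python sketch of the extraction half. The column positions (D and E) match what I found in my own export, but check yours before running anything; the output headers (“Name”, “Notes”) are my assumption about a minimal CSV that Google Contacts will accept, so verify with a test import first.

```python
import csv

# A minimal sketch, not an official tool. Column positions below match my
# LinkedIn export; the output format is an assumed Google-importable CSV.
NOTES_COL = 3   # column D in the spreadsheet
TAGS_COL = 4    # column E

def extract_notes_tags(in_path, out_path):
    """Pull names, notes and tags out of LinkedIn's Contacts.csv."""
    with open(in_path, newline='', encoding='utf-8') as src, \
         open(out_path, 'w', newline='', encoding='utf-8') as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        next(reader)                      # skip LinkedIn's header row
        writer.writerow(['Name', 'Notes'])
        for row in reader:
            if len(row) <= TAGS_COL:
                continue                  # short or blank row, nothing to keep
            # Assumption: first and last names are in columns A and B.
            name = f"{row[0]} {row[1]}".strip()
            notes, tags = row[NOTES_COL], row[TAGS_COL]
            if notes or tags:             # only keep contacts that had data
                combined = '; '.join(p for p in (notes, tags) if p)
                writer.writerow([name, combined])
```

Folding the tags into the notes field is a pragmatic choice here, since a plain CSV import has no obvious equivalent of LinkedIn’s tags; Google’s groups would be the more faithful destination if you are willing to map them.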
As long as Microsoft/LinkedIn provide a full export feature, this is a good way to maintain ownership of your data. It’s good practice to take an export every now and then to give yourself some peace of mind and avoid vendor lock-in.
From the “I should have posted this months ago” vault…
When I led technology development at One Laptop per Child Australia, I maintained two golden rules:
everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
no special technical expertise should ever be required to set up, use or maintain the technology.
In large part, I believe that we were successful.
Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.
Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icon-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.
Revisiting Our Assumptions
Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. Neither represents how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?
Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different-looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.
Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.
Better alternatives exist on other platforms, but I still was not satisfied.
The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.
This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:
a new typeface, optimised for literacy
a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
an emphasis on lower-case
upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
better use of symbols to aid instruction
One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows to the ones they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.
We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.
After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case I, and the lower-case p from the lower-case q.
Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.
On the Screen
abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.
It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).
Likewise, the touch-screen keyboard is clear and simple to use.
The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.
Australia poses some of its own challenges. As a country that is 90% urbanised, the remaining 10% are scattered across vast distances. The circumstances of these communities often combine developed- and developing-world characteristics. We developed the One Education programme to accommodate this.
These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.
Adobe is dropping Linux support for their Adobe AIR development platform. To be honest, I don’t really care. Why? Because I’ve been careful enough to not tie my efforts to a proprietary platform.
I’ve had several groups offer to write applications/activities for OLPC Australia using proprietary tools like AIR. I’ve discouraged them every time. Had we gone with the ‘convenient’ route and acquiesced, we would have been in quite a spot of bother right now. My precious resources would have to be spent on porting or rewriting all of that work, or just leaving it to bit-rot.
A beauty of Sugar and Linux is that they are not dependent on a single entity. We can develop with the confidence of knowing that our code will continue to work, or at least can be made to continue to work in the face of underlying platform changes. This embodies our Core Principle #5, Free and Open.
Free and Open means that children can be content creators. The television age relegated children (and everyone, for that matter) to just being consumers of content. I have very fond childhood memories of attempts to counter that, but those efforts pale in comparison to the possibilities afforded to us today by modern digital technologies. We now have the opportunity to properly enable children to be in charge of their learning. Education becomes active, not passive. There’s a reason why we refer to Sugar applications as activities.
Growing up in the 80s, my recollections are of a dynamic computing market. Machines like the ZX Spectrum and the early Commodore models inspired a generation of kids to learn about how computers work. By extension, that sparked interest in the sciences: mathematics, physics, engineering, etc.. Those machines were affordable and quite open to the tinkerer. My first computer (which from vague recollection was a Dick Smith VZ200) had only a BASIC interpreter and 4k of memory. We didn’t purchase the optional tape drive, so I had to type my programs in manually from the supplied book. Along the way, I taught myself how to make my own customisations to the code. I didn’t need to learn that skill, but I chose to take the opportunity presented to me.
Likewise, I remember (and still have in my possession, sadly without the machine) the detailed technical binders supplied with my IBM PC. I think I recognised early on that I was more interested in software, because I didn’t spend as much time on the supplied hardware schematics and documentation. However, the option was there, and I could have made the choice to get more into hardware.
Those experiences were very defining parts of my life, helping to shape me into the Free Software, open standards loving person I am. Being able to get involved in technical development, at whatever level of my choosing, is something I was able to experience from a very early age. I was able to be active, not just consume. As I have written about before, even the king of proprietary software and vendor lock-in himself, Bill Gates, has acknowledged a similar experience as a tipping point in his life.
With this in mind, I worry about the superficial solutions being promoted in the education space. A recent article on the BBC’s Click laments that children are becoming “digitally illiterate”. Most of the solutions proposed in the article (and attached video) are highly proprietary, being based on platforms such as Microsoft’s Windows and Xbox. The lone standout appears to be the wonderful-looking Raspberry Pi device, which is based on Linux and Free Software.
It is disappointing that the same organisation that had the foresight to give us the BBC Computer Literacy Project (with the BBC Micro as its centrepiece) now appears to have disregarded a key benefit of that programme. By providing the most advanced BASIC interpreter of the time, the BBC Micro was well suited to education. Sophisticated applications could be written in an interpreted language that could be inspected and modified by anyone.
Code is like any other form of work, whether it be a document, artwork, music or something else. From a personal perspective, I want to be able to access (read and modify) my work at any time. From an ethical perspective, we owe it to our children to ensure that they continue to have this right. From a societal perspective, we need to ensure that our culture can persevere through the ages. I have previously demonstrated how digital storage can dramatically reduce the longevity of information, comparing a still-legible thousand-year-old book against its ‘modern’ laserdisc counterpart that became virtually undecipherable after only sixteen years. I have also explained how this problem presents a real and present danger to the freedoms (at least in democratic countries) that we take for granted.
We’re working to make sure every school has a 21st-century curriculum like you do. And in the same way that we invested in the science and research that led to the breakthroughs like the Internet, I’m calling for investments in educational technology that will help create digital tutors that are as effective as personal tutors, and educational software that’s as compelling as the best video game. I want you guys to be stuck on a video game that’s teaching you something other than just blowing something up.
“Here we go with another round of Linux Today reader comments. Let’s start off with an issue that has been on my mind: Vendors who boast of their Linux-based devices, but only support Windows and Mac clients. It’s a step in the right direction, but would supporting Linux clients be so difficult?”
There are two major mistakes that are often made in considering this question:
that all “Linux” systems are the same
that if you use Linux in one place, it naturally follows that you should support other “Linux” systems
We need to remember that the only thing most of these devices share with a desktop “Linux” system (or even with each other) is the kernel (i.e. the precise definition of “Linux”). The userland is different, and these devices often carry a lot of their own proprietary software too. Even the hardware (such as the CPU architecture) is often wildly different. I think people have grown to assume it’s all the same because we call it all “Linux”, but it’s not.
Because of this practical conundrum (as totally distinct from any philosophical or other arguments), I have some sympathy for those who prefer to call the system we use on our desktop and server systems “GNU/Linux”.
Argue all you want about its accuracy, but the fact is that it is far more accurate than merely using the kernel name as nomenclature for the entire OS. It specifies a userland that, together with the kernel, comprises a workable operating system. Come up with a better name if that makes you feel more comfortable.
This opens up a whole can of worms. If I’m an applications or device developer and I announce “Linux support”, what do I mean? Will it work on my mobile phone? On my television? Probably not. Chances are it refers to particular versions of particular distributions for a particular architecture.
If I produce a device that is based on “Linux”, what relation does that have to other “Linux” systems? None. It’s not just devices: another major culprit is Web services. Linux runs most of the Internet, but many online services are not compatible with desktop Linux systems.
The reasons for this are simple:
correlation does not imply causation
the small market size of desktop Linux users
The first point relates to what I said earlier, that there’s no connection between the use of Linux on servers and devices versus its use on desktop computers. The usefulness of Linux on servers and devices is firmly recognised in many sectors.
The same cannot be said for desktop systems, despite what we may wish. If it costs a developer more to support a tiny market, they are probably not going to do it. That’s just business. Companies that choose to support desktop Linux often do so for other reasons, such as to foster a developer/fan base or tap into a very specific set of users.
So everyone, I share your frustrations that many so-called “Linux”-based devices/services don’t interface with my computers, but I keep in mind the points made above.
The OpenJDK plug-in that comes with modern distros is usually very good at handling Java in Web pages, but some applets are just stubborn. Thankfully, Sun have finally (after over six years!) released a plug-in for x86_64 Web browsers.
I managed to get the JDK version working on Fedora 11 and CentOS 5.3. Here’s the process.
Firstly, download the JRE or JDK from Sun. You’ll need to get version 1.6 Update 12 or above. I got the RPM version.
Run the install script to extract the bundle. On the RPM version, this automatically installs it to your system if you run the script as root.
Last month I proposed that the FOSS community create an integrated software installer for Windows and Mac OS that only included FOSS applications. If Google can make Google Pack, I opined, why can’t we make a FOSS Pack?
As I had expected, my idea was already realised, at least in part. WinLibre and MacLibre provide a menu of free/libre software packages for the user to choose from, and can automatically install them for you.
That’s a big step in the right direction, albeit not the beauty we have on GNU/Linux through tools like Add/Remove Applications and apt-url. I haven’t tried them (I rarely use Windows and I don’t have a Mac), but here’s what I think they need to truly shine (based on my last post on the subject):
an updates management service, that automatically checks for available updates and installs them for you
an ability to cleanly remove the software just as easily as it was installed
a file system scanner that recommends FOSS software to install, based on the software and file types it finds on the hard drive
Just for a second, put yourself in the shoes of an average PC user. You use the software that came with your computer, plus perhaps some others that you downloaded, bought in a box or ‘borrowed’ from a friend. You’ve heard some good things about something called “open source”, but you haven’t the foggiest clue of where to get it or what applications to try. You aren’t a technical person, have limited time and even less patience. Ultimately, you’re looking for something that ‘just works’ and is either free (of cost) or clearly better than what you’re using now. Why make the effort otherwise? Honestly, you’d rather be down at the pub watching the cricket with your mates.
How would free software advocates best woo such a person into their camp? They aren’t going to immediately repartition their hard drive and use GNU/Linux exclusively. They would more likely be willing to try some free software on their existing OS, provided that the barrier was sufficiently low. If you’re lucky, that toe-dip will lead to deeper immersion in the world of FOSS, and hopefully also into some appreciation of the philosophy beyond the practical.
If this person has a knowledgeable friend or pays attention to certain information sources, they might get some ideas on what software to use. Applications like Firefox and OpenOffice.org are fairly popular choices these days, but what about less publicised treasures like the GIMP or ClamWin? Sure, there are Web sites that let you search for FOSS equivalents to proprietary applications, but these still require some effort:
Search for the application you want.
Go to the Web site for that application.
Find the download page and pull it down.
Run the installer.
To uninstall, use Windows’ Add/Remove Programs.
These steps need to be performed for each application you wish to install, so can become tiresome very quickly.
How could we simplify this process? What I propose is a software management application. Let’s for the sake of brevity call it FOSS Pack, named after the closest analogue I can think of, Google Pack. The process is intended to be as simple as possible for the end user:
The user downloads a single application (FOSS Pack) and installs it.
When they launch FOSS Pack, they can select from a menu of categorised FOSS applications to install, similar to how a GUI package manager front-end works on (GNU/)Linux.
The user selects the applications they want, and then they are downloaded and installed in batch.
Uninstallation should be as simple as installation, all within FOSS Pack.
Here’s the killer feature: FOSS Pack should be able to scan the user’s system for proprietary applications. These are identified based on an internal list, which also contains information on FOSS alternatives to those applications. Those alternatives are presented for easy download and install.
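To make the killer feature concrete, here is a hypothetical Python sketch of the matching logic only. FOSS Pack does not exist, and the application names, categories and scan approach here are illustrative assumptions, not a real detection database; a real implementation would need to inspect the Windows registry or installer metadata rather than just directory names.

```python
import os

# Hypothetical internal list: proprietary application name (lower-cased)
# mapped to a suggested FOSS alternative and a short category description.
# These entries are illustrative examples, not a maintained database.
ALTERNATIVES = {
    'photoshop': ('GIMP', 'raster graphics editor'),
    'internet explorer': ('Firefox', 'web browser'),
    'ms office': ('OpenOffice.org', 'office suite'),
}

def suggest_alternatives(install_dirs):
    """Scan likely install directories and suggest FOSS replacements.

    Returns a list of (found_app, foss_alternative, category) tuples.
    """
    suggestions = []
    for base in install_dirs:
        if not os.path.isdir(base):
            continue                    # skip paths that don't exist
        for entry in os.listdir(base):
            match = ALTERNATIVES.get(entry.lower())
            if match:
                name, kind = match
                suggestions.append((entry, name, kind))
    return suggestions
```

In practice the lookup would need to be fuzzier than an exact name match (version suffixes, localised names), but the shape of the feature is just this: a curated mapping plus a scan, with the results presented for one-click install.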
FOSS Pack contains descriptions of each application, so the user doesn’t have to visit another Web site to understand what they do (although a hyperlink should be provided as well). The option should exist to be able to select only from applications that have Linux versions, as a means of facilitating an OS transition. FOSS pack should also be able to automatically check for updates at regular intervals, and offer to install them when available.
I’m not expecting any of this to be as clean as a real package management system. FOSS Pack will likely have to execute the external installers. Perhaps in the future the application authors could co-operate with FOSS Pack maintainers to deliver a more seamless experience.
It looks to me that a lot of the pieces to create FOSS Pack are already there, and as is often the case in the FOSS world all that’s required is to tie them together in an appropriate way.
Here’s my report, as co-ordinator of the Linux Australia stand:
Sat 14 to Sun 15 June
Rosehill Racecourse, Sydney
The Education Expo is an annual trades show targeted towards the K-12 educational space. Visitors consist of families and educators. Linux Australia once again had a stand, with volunteers spreading the word about free and open source software.
As always, we were very successful. With each passing year, the level of awareness of FOSS noticeably improves. Whereas at previous shows we would spend much energy expounding the basic concepts of FOSS/Linux, this year most people had either heard of it or were already using FOSS products such as Firefox and OpenOffice.org.
One thing we did differently this year was place more focus on FOSS running on Windows. Our past efforts have been met with some resistance, as installing a different operating system posed a barrier to entry that many would not surmount. We had plenty of copies of the OpenEducationDisc to distribute, in addition to Fedora, Ubuntu, Edubuntu and Mandriva.
The fact that the NSW Dept of Education is migrating over 40,000 PCs across the state to OpenOffice.org was a useful selling point as well.
Our marketing efforts have been improving with each event. Our message is becoming more refined, and our leaflets are becoming more relevant. On the technical side, FOSS is becoming easier and more accessible, with projects such as the aforementioned OpenEducationDisc and Wubi leading the way.
Our Web presence is improving, too. It’s far easier to point a newbie to just one easy-to-remember URL instead of confusing them with a list. In addition, I built an education portal for Linux Australia just in time for the expo.
There were at least two other stands that were FOSS-friendly. In fact, one of the largest stands was demonstrating their Web-based software product on about ten computers, all of which were running Ubuntu. Other stands expressed real interest when approached.