One of the best things about Linux is that it is free, and as the saying goes: "Don't look a gift horse in the mouth." However, there are times when one must break with tradition. If you are evaluating Linux for home or business use, you must ask yourself whether Linux matches the features and usability of commercial operating systems. Unfortunately, there is no simple answer to this question. Although Linux is very successful in the server market, it has not yet made a comparable splash in the desktop market. Here are some reasons why this might be so.

Disclaimer: Let me mention a few things to prevent possible misunderstandings: I have no affiliation with Microsoft; Linux has been installed at my office since 1996; I was one of the founding members of the Thai Linux User Group (TLUG); and I have written a series of Linux articles for an English-language newspaper.

Antiquated software concepts

One of the first criticisms of the Linux OS was brought forward by the man who inspired it, Andrew Tanenbaum. He is the author of the MINIX OS, on which early versions of Linux were based. Tanenbaum described the Linux kernel design as backward-oriented, because it is a monolithic kernel (like the Unices of the 1970s) rather than a microkernel. A monolithic kernel lacks modularisation, which makes it more vulnerable to internal faults and complicates updates and extensions. This was an early design weakness. Early Linux distributions had additional problems, such as filesystem limitations, rudimentary support for Plug and Play, and a lack of scalability, among other shortcomings. Many of these limitations have since been resolved, but the design of the Linux kernel still has a distinctive 1970s flair. Most of the concepts that underlie Unix (and thus Linux) are more than 30 years old.
Lack Of User Friendliness

In the 1970s, the term "user-friendliness" described the attitude of computer administrators rather than the behaviour of operating systems. Unix certainly kept with this user-hostile tradition, and unfortunately, so did Linux. Things began to change with the advent of desktop managers, which provide graphical user interfaces that let the user interact with the system through graphical elements and a mouse. The two most popular Linux desktop managers are KDE and Gnome. Both are based on the X Window System, the lower-level graphics engine, which is fairly complex in itself. KDE and Gnome are actually more than mere desktop managers: they are collections of applications, including browsers, calculators, organisers, mail clients and other programs which together provide standard PC functionality.

[Chart: Market Share Of Desktop Managers (Source: www.desktoplinux.com)]

While KDE and Gnome have improved the user-friendliness of Linux a great deal, there are still serious problems. The first problem is that there are two desktop managers, which means that users have to learn two different GUIs instead of one. The second problem is that not all applications make use of them: some applications use other GUIs, and others have no GUI at all. There are a number of things you can do only with the command line interface, and whenever you have to use the command line interface, user-friendliness goes out of the window.

Usability

User-friendliness is actually just one aspect of usability. Usability is the more comprehensive concept, as it is concerned with the overall efficiency with which users employ a tool (such as an operating system) to accomplish particular tasks. A number of studies have been conducted on Linux usability; their metrics as well as their results vary greatly.
It should be noted, however, that even those studies which are generally favourable towards the Linux OS bring forward some criticisms. A case in point is the study by Relevantive AG, which considers the usability of Linux "to be nearly equal to Windows XP", but also includes the following paragraph in its executive summary: "The study also reveals significant problems that are connected with Linux as a desktop system. This mainly consists of the poor wording of programs and interfaces, the lack, at times, of clarity and structure of the desktop interface as well as the menus, and poor system feedback."

Inconsistency and poor user interface design have long been problems in the Linux world, and the reason is obvious: Linux applications are developed by a large, disparate group of volunteer programmers, and the work of different groups is rarely coordinated in a structured manner. The lack of usability engineering and testing is thus a direct result of the absence of centralised management. Fortunately, the KDE and Gnome teams have recognised this problem and are beginning to address it.

Inflexible and complicated file access control

If you use Linux on your home PC, file access control probably doesn't matter to you. However, it definitely matters on multi-user systems in business environments. Unfortunately, Linux has inherited the ghastly Unix file permission system, which forces you to learn octal numbers and has brought despair to many system administrators. For example, you must concern yourself with esoteric questions, such as what happens when a file has read/execute permission but its containing directory has only read permission. A great number of security problems on Linux systems are caused by incorrect or inappropriate permission settings that escaped the eye of the administrator. What is more, Unix file permissions don't allow you to assign multiple users and groups to a file system resource.
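As a minimal sketch of what the octal notation involves (the filename below is hypothetical), each digit encodes the read/write/execute bits for the owner, the group, and everyone else:

```shell
# Create a scratch file (hypothetical name) and set its permissions
# in octal: 6 = rw- for the owner, 4 = r-- for the group,
# 0 = --- for everyone else.
touch payroll.txt
chmod 640 payroll.txt

# GNU stat can print the mode in both octal and symbolic form.
stat -c '%a %A' payroll.txt    # prints: 640 -rw-r-----
```

An administrator is expected to read "640" at a glance; that, in a nutshell, is the learning curve the octal scheme imposes.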
Because each file has exactly one owner and one group, you must treat groups as roles and assign individual users to these groups. If two groups should have access to a resource, you must define a third group containing the union of both. In short: it is an administration nightmare, especially on systems with a large number of users. To make things worse, the "root" user can access all file resources regardless of permissions. Since system administration requires "root" privileges, your system administrator will have access to all resources, including payrolls, balance sheets, and whatever other confidential information is stored on the Linux computer.

Higher system maintenance and administration costs

One staple argument that Microsoft keeps throwing at Linux is that its total cost of ownership (TCO) is higher than that of Windows. Microsoft has tried, quite unconvincingly, to prove its case by citing Microsoft-sponsored studies. Is this just FUD, or is there a grain of truth in it? Well, it depends. If you install a Linux distribution, such as SuSE, and are happy with the out-of-the-box functionality, then there are virtually no maintenance costs. In real life, however, this is hardly ever the case. Normally you want to customise and configure the computer for a particular purpose, and you might also want to install additional software packages and update older programs. Customisation and configuration are easy as long as you can use a GUI, such as SuSE's YaST program, to do the job. Likewise, program maintenance is easy as long as you can find the right RPM package. But this is not always the case. Frequently, administrators must resort to editing configuration files or installing software packages manually. For example, if you can't find a vendor-specific binary RPM for your software update, you might have to install the software from a source distribution.
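The build-from-source routine usually boils down to the classic unpack/build/install steps. The sketch below fakes a tiny source archive (the package name, Makefile and install path are all hypothetical) so that the commands are self-contained; a real package would typically add a ./configure step before make:

```shell
# Simulate a minimal source distribution so the example is self-contained.
mkdir -p foo-1.2
printf 'install:\n\tmkdir -p /tmp/fooprefix/bin\n\tcp foo.sh /tmp/fooprefix/bin/foo\n' > foo-1.2/Makefile
printf '#!/bin/sh\necho hello\n' > foo-1.2/foo.sh
tar czf foo-1.2.tar.gz foo-1.2
rm -r foo-1.2

# The classic routine: unpack, then build and install.
tar xzf foo-1.2.tar.gz
cd foo-1.2
make install        # real packages: ./configure && make && make install

# Note: nothing was recorded in the RPM database -- removing or
# updating this "package" later is entirely the administrator's job.
```

The last point is the crux: files copied by make install are invisible to the package manager, which is exactly why source installs add to the maintenance burden.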
This requires a C compiler, some basic system administration knowledge, and considerably more time, especially if things go wrong. On average, system configuration and maintenance tasks therefore take longer to accomplish on Linux. This may indeed result in
higher TCO if the amount of maintenance is high. Let's assume that an in-house Linux administrator costs $35 per hour (which is conservative) and that a Windows volume license costs $105 a piece. At these rates ($105 ÷ $35 = 3 hours), if the administrator spends more than three hours extra on updates and maintenance per machine within the lifespan of a Linux installation (say, two years), the license cost advantage is forfeited. Companies that want to deploy Linux should therefore take a close look at the planned usage and maintenance requirements. The impact of administration and maintenance costs on TCO is quite high for a small installation, but decreases as the number of machines grows.

Little standardisation among Linux distributions

Linux has inherited a problem that has plagued the Unix world for decades: a lack of standardisation. This is a natural outcome of Linux history, since the system is designed for maximum configurability and is used and developed by various groups with disparate goals. The problem lies with the multitude of distributions rather than with the kernel. Linux is packaged by different vendors, who cater to different target user groups ranging from beginners and commercial users to developers and expert users. This results in subtle (or less subtle) differences in system configuration, file system structure and administration tools, all of which can be pretty frustrating for an IT administrator who has to manage different Linux distributions. Of course, it is also confusing for users who switch between distributions.

[Chart: Market share of Linux distributions (Source: www.desktoplinux.com)]

An average Linux distribution comes with several thousand software packages. This gives users a tremendous amount of choice. In principle, choice is a good thing: users can select the packages that best match their requirements. But sometimes choice is a bad thing. First, the time spent on evaluating software products is unproductive time.
Second, having multiple packages that do the same thing is unnecessary and, worse, may cause compatibility problems. Third, it is better to use one package that fulfils all requirements than two or three packages that each fulfil part of them. Fourth, the time spent on maintenance and updates is proportional to the number of deployed software packages.

Applications are difficult to install and maintain

"Difficult to install" was a complaint uttered by many early Linux users in the 1990s. Fortunately, things have improved in the meantime, and most Linux distributions are now as easy to install as the Windows OS. Generally this is also true for applications, albeit not all of them. It is frequently necessary to install applications from a *.tar.gz source distribution, for example when you install very new versions or exotic software for which no installer or RPM is available. Software installed in this manner is difficult to maintain, because it often cannot be removed or updated automatically. Whenever you install from source, you must keep track of all the application files and dependencies yourself, which again adds to maintenance overhead and TCO.

Low backward compatibility

Have you ever tried to install new software on a two- or three-year-old Linux installation? Chances are that it won't work. The most common problem is that the new software requires up-to-date libraries, or an updated version of another RPM package on which it depends. Under these circumstances, you must update parts of the system before you can install the new software. The older your Linux installation, the more likely this scenario becomes. It is due partly to the "release early, release often" practice of the open source world and partly to the idiosyncrasies of Linux distributions.
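Much of this fragility stems from shared-library dependencies, which you can inspect with ldd (using /bin/ls here simply because it exists on virtually every Linux system):

```shell
# Print the shared libraries a dynamically linked binary requires.
ldd /bin/ls
# The output lists sonames such as:
#   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
# If an update removes or renames a required soname, the program fails
# at startup with "error while loading shared libraries".
```

A new application built against newer sonames than your installation provides simply will not load, which is the typical failure mode on an ageing system.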
Ideally, you would update a Linux installation at least every six months, but this would of course increase the TCO to an intolerable level. In a commercial environment, people often run the same OS installation for years, simply because it is too disruptive and too expensive to keep upgrading the OS. With Linux, this often means you can use only those applications that the initial installation supports.

Not all applications run on Linux

Finally, when switching to Linux it is important to make sure that all the applications you or your company need are available for Linux. Unfortunately, this is not always the case. Although one can hardly reproach Linux for the fact that not all application developers support the platform, it is a make-or-break criterion. If you depend on a certain piece of software and it doesn't run on Linux, you can still run it under a compatibility layer such as Wine or CrossOver, but this carries a high price in complexity and performance. The likelihood that an application doesn't run under Linux is somewhat higher in the graphics, engineering and games sectors. However, more and more software makers are adopting a cross-platform strategy and making their products available for Linux.