Between suits and nerds

Everybody knows the balloonist joke that epitomises the eternally rocky relationship between I.T. (also known as geeks, nerds, techies, code wrestlers, bit whippers, keyboard pounders) and management (also known as “the suits”). For those who don’t know the joke, I have attached it at the end of this article. In Dilbert’s world, the nerds are typically bigheaded, odd, socially inept, and devoid of a sense of humour (or at least nobody understands their humour), whereas the “suits” are typically pushy, mean, overbearing, and of course completely clueless. I am sure that we have all seen one or another Dilbert stereotype incarnated in the real world. Perhaps we are also aware of the corresponding differences in the Myers-Briggs typology and such. But this article is not about nerds versus suits. It is about a curious profession called project management. Project managers are a sort of hybrid: “geek suits”. Technically they are engineers, but they are in the same category as administrators. Organisations that develop computer systems professionally, or organisations large enough to maintain an internal R&D department, often need individuals with such qualifications.

What exactly does a project manager do? From the perspective of the nerd department, the project manager (PM) is a “suit” with knowledge. Unlike top management, the project manager cannot be duped easily with buzzwords and technical acronyms. The PM keeps an eye on the work requirements and duties of the engineering staff, so the PM is often viewed with suspicion. The terms “galley whip” and “nerd nanny” come to mind. From the perspective of the “suits”, the PM is simply a sort of Über-nerd who is put in charge of a bunch of regular nerds, so that they don’t play computer games all day and instead deliver meaningful work results that resemble the specifications. Additionally, a project manager comes in handy as a scapegoat when the project flops. This means that the project manager’s primary role is performing a tightrope walk between management and engineering. Since the PM is liked by neither side, and since the PM is the first to be blamed for any shortcomings in the project, the project manager needs a high tolerance for suffering. On the positive side, the PM usually commands a salary well above regular-nerd level.

Of course, things are different in a small company. Small companies don’t have the hierarchies and corporate politics one finds in large organisations. I have worked in the role of CTO and project manager in my own company for ten years. When I started, there were only four people, and we built up a team of sixteen, of whom twelve worked in technical positions. It wasn’t much of a tightrope walk for me, because there wasn’t any superordinate management. Convenient, you might think, and you are right. I tended to see project management as orchestration and therefore (to stay with the music metaphor) the project manager as a conductor. Neither conducting nor project management is a hard science. Sure, there are techniques, best practices, and (to use one of the PM’s favourite terms) “methodologies”, but there is no recipe or “silver bullet” (another favourite) to make an orchestra perform brilliantly or to produce excellent computer systems on time and on budget. So what, really, is project management?

Wikipedia offers a reasonable definition of project management, but unfortunately it just scratches the surface. The function of a project manager cannot be summarised easily. It is indeed a bit perplexing. The PM rarely participates directly in the production of a system, but is expected to understand every part of it. The PM also needs a deep understanding of execution, but does not execute. Just as a conductor must understand the instruments, the scores, and the orchestra, the project manager must understand the technology, the specifications, and the capabilities of his team. He might be exempted from having to wear a tie, but he still needs management skills, in particular communication and motivation skills. Although the field of project management is fairly well defined, the actual techniques and methods differ widely depending on industry, culture, and deployed technologies. No particular skill set works in every situation. One of the best-recognised organisations that certify project managers, the Project Management Institute (PMI), therefore covers only basic management skills in its programs. Does this make the PM a “suit” after all? Well yes, but a nerdy one.


And here is the joke:

A hot-air balloonist had drifted off course. When he saw a man on the ground he yelled, “Excuse me, can you tell me where I am?”

“Sure”, said the man. “You are in a balloon.”

“Ah, you must work in I.T.,” the balloonist said.

“How did you know?”

“What you told me is technically correct, but of no use at all.”

“And you must work in management,” the man on the ground retorted.

“That’s right.”

“Figures. You don’t know where you are or where you’re going, but you expect me to help. And you’re in the same position you were in before we met, only now it’s my fault”.

Discussion board moderation

Discussion board moderation is a new “profession” and as such it requires a new set of skills. These are not, as many believe, technical skills. Discussion board moderation is primarily a management task and therefore it requires management skills. Since management is not an exact science, the dos and don’ts of discussion board moderation are not chiselled in granite. Yet, there are some important principles which practising and prospective moderators should consider.

Discussion boards (or “forums”) are a newfangled social phenomenon that came about with the Internet. They are meeting places for people who share a common interest about which they like to talk. An online discussion is essentially a written asynchronous conversation between two or more parties who send and receive questions, answers, and comments with a relative delay. These written conversations are much slower than natural conversations, but still faster than a traditional exchange of letters.

The necessity for moderation exists for several reasons. Usually the board operator desires some level of control over the content posted by other participants in order to ensure that it does not violate laws and regulations. In addition, the operator might want to define specific rules for the discussion board that fit the culture of its community. Such rules usually target netiquette and ethical codes. Finally, the board administrator must uphold the technical functioning of the discussion board system and prevent abuse. The attainment of these goals is usually delegated to the moderator(s), who may or may not be the same person as the board operator.

Common Challenges

Discussion boards provide entertainment, support, and fun for many people, but they are not without challenges. A virtual meeting place is a bit like a masked ball where participants enjoy complete anonymity. This can lead to problems. Anonymity, as well as the lack of physical contact, has a tendency to lower the inhibition threshold for socially unacceptable behaviour in some individuals. Common challenges are angry, hateful, obscene, or otherwise inappropriate posts, cross-posting, spamming, trolling, DoS attacks, identity theft, and other more technical problems.

Flaming And Flame Wars

Flames are intentionally hostile or insulting messages that usually result from a heated exchange between people holding different opinions. Flames are the most common problem of discussion boards. A message is identified as a flame by its design to attack the opponent rather than the argument. Hence, flames are ad hominems with a strong emotional impact. Flame wars are prolonged exchanges of flame posts, in which, depending on the group dynamics of the community, many individuals may become involved. A community’s susceptibility to flame wars depends on many factors, such as community behaviour, the nature of the topics discussed, and moderation practices. Obviously, controversial topics are especially susceptible to flames. Flaming generally deters and discourages users.

Flames are a rather difficult challenge for the moderator. The most suitable strategy to control flames is to employ non-punitive measures, for example posting placatory comments, appeals to fairness, and conciliation proposals to calm the situation. Diplomacy and humour often work well. Prevention of flames, for example by creating a relaxed and intimate atmosphere, is even better. If this doesn’t work, it may be necessary to remind the opponents of the rules regarding discussion style or to close the thread. If the posted flames are inappropriate, it may also be necessary to delete offensive passages or entire posts. Finally, if nothing else works, warning and barring the offending member(s) is the last recourse.


Trolling

A troll is someone who habitually posts disturbing, inflammatory, or nonsensical messages that disrupt the discussion and upset the community. Trolls are basically agitators who provoke and create perturbation by some means, usually by flames, in order to draw attention to themselves or to sabotage the discussion. The motives for trolling are varied. The troll may be a disgruntled user, someone who feels that the board community “has turned against him”, someone with an underlying psychological problem, or merely someone venting temporary frustration. Trolls can be quite problematic. Trolling is best moderated by confronting the offender directly via the personal message system and by putting the troll on the pre-moderation list if the discussion board software allows it. Persistent trolls should be pre-moderated, or banned if pre-moderation is not an option.


Spamming

Outright spamming has become somewhat rare on discussion boards, since most board software prevents robots from signing up and submitting spam. Yet, there is still the problem of spam posted by human subscribers. Spam content ranges from the fairly subtle, such as text links to a commercial website, to the blatant, such as advertising banners in user signatures and posts. Spammers frequently seek out communities that fit the target group for their products or services. For example, a shop that sells exercise machines might seek out sports communities. Evidently, most spammers have an agenda apart from the community and the discussion. Nothing is lost by immediately deleting the spam posts and blocking the offending user and IP address. The situation is somewhat different if a regular member submits an advertising post. In most cases, deletion and a warning issued via PM or the warning system will be sufficient to deal with a one-time transgression.


Cross-Posting

Cross-posting is the practice of submitting the same message to more than one forum. The intention of the sender is to reach the greatest possible number of readers. The attendant problem is fragmentation of the ensuing discussion. If the cross-post is targeted at the same community, people also get the impression of being spammed. Cross-posting within the same discussion board is annoying in most cases. The moderator needs to decide whether cross-posting is appropriate or whether to delete duplicate posts. In order to avoid thread fragmentation, duplicate threads may be closed, ideally with an annotation containing a link to the one thread singled out to continue the discussion. Alternatively, the administrator may disallow cross-posting within the same discussion board altogether.

Off-Topic Posts

This is a very common problem and at the same time a difficult one to control. Off-topic (OT) posts arise from the associative nature of subject matters, a characteristic that goes to the root of human language. Getting off on a tangent is all too easy. For example, a discussion about nuclear energy may divert into a discussion about alternative energies, nuclear weapons, or state regulations. In the natural flow of a discussion, minor diversions are common and probably unobjectionable. However, a thread often develops in a contingent way and spawns discussions about multiple topics, often in parallel, which is confusing in the same way that a group of people talking at the same time is confusing. Unfortunately, there are no universally valid guidelines for off-topic moderation. It always depends on context and community. In an informal discussion about philosophy, OT posts may be of no concern, while in a more formal setting, such as a technical support forum, off-topic contributions may not be allowed at all.

A topic is usually outlined by its thread title and the tagline (short description). If a thread develops an OT sideline, the OT posts may be swapped out into a new thread by the administrator. Many software packages provide a “split thread” operation for this purpose. To what extent OT posts are moderated and how strongly OT contributions are discouraged depends very much on the nature of the discussion board.


Noise

“Noise” is text or other content that either does not belong to a discussion or interrupts the flow of a discussion. For example, long quotations or distracting signatures can be considered noise. If the noise ratio exceeds a certain value, following the discussion becomes visually tiresome. The best strategy to avoid this is to limit signatures to a certain length (and perhaps also to disallow images in signatures) and to discourage full quotes. Quotations are often useful, even necessary, to remind the reader of something previously mentioned and to establish the context for a reply. However, a full quote in which the answerer refers only to a tiny fragment within the quote is confusing and counterproductive. To avoid this, the discussion board software may be configured to discourage full quotes, for example by ergonomic means. Alternatively, the moderator may remind people not to overuse full quotes and edit out noise manually if necessary.
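One "ergonomic means" of discouraging full quotes is to have the board software truncate overlong quotations automatically. The following sketch illustrates the idea in Python; the function name and limits are invented for illustration and not taken from any real board package.

```python
def trim_quote(quoted_text, max_lines=4):
    """Truncate an overlong quotation to at most `max_lines` lines,
    appending a marker so readers know text was cut. Illustrative only."""
    lines = quoted_text.splitlines()
    if len(lines) <= max_lines:
        return quoted_text
    return "\n".join(lines[:max_lines]) + "\n[...]"

# A five-line quote is cut down to four lines plus a marker.
print(trim_quote("line 1\nline 2\nline 3\nline 4\nline 5"))
```

A board might apply such a filter at posting time, so the noise never reaches the thread, rather than relying on manual editing afterwards.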

Multiple Identities and Impersonation

Multiple identities result from the same user subscribing several times to the same discussion board. This might happen with technically inexperienced users, users who have lost their password, or users who intentionally create multiple identities. Although most software packages can be configured to prohibit multiple subscriptions with the same email address and/or from the same IP address, subscribers may bypass this mechanism by using different email addresses and IPs. Furthermore, blocking IP addresses is problematic with dynamically assigned IPs. In most cases, multiple subscriptions result in a number of dead accounts which can be deleted after a certain period of inactivity. Other cases are more troublesome, especially those which involve the continued use of multiple identities or impersonation (identity theft). These are deceptive tactics which are not always easy to detect. They are popular with trolls. An analysis of the IP addresses and timestamps of a sequence of posts is often necessary to uncover this form of abuse. Since this is a serious form of abuse, it usually results in account termination and banning.
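A first pass of such an analysis can be as simple as grouping posts by IP address and flagging addresses used by more than one account. The sketch below assumes posts are available as (username, IP address, timestamp) tuples; the data layout and function name are illustrative, not taken from any real board software.

```python
from collections import defaultdict

def find_shared_ips(posts):
    """Return the IP addresses that were used by more than one
    account -- a common first indicator of multiple identities."""
    accounts_per_ip = defaultdict(set)
    for username, ip, _timestamp in posts:
        accounts_per_ip[ip].add(username)
    # Only IPs shared by two or more accounts are suspicious.
    return {ip: users for ip, users in accounts_per_ip.items()
            if len(users) > 1}

posts = [
    ("alice", "10.0.0.5", "2007-03-01T10:00"),
    ("bob",   "10.0.0.9", "2007-03-01T10:05"),
    ("al1ce", "10.0.0.5", "2007-03-01T10:07"),  # same IP as "alice"
]
print(find_shared_ips(posts))
```

A shared IP is of course not proof by itself (think of office networks or dynamically assigned addresses), which is why the timestamps of the posts usually need to be examined as well.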

Denial of Service Attacks, Hacker Attacks

Denial of Service (DoS) attacks are technical sabotage manoeuvres aimed at disrupting the discussion board service. The most common method is flooding. A flooding robot (a program) sends huge quantities of messages to the board, which then becomes unusable for other users. Most discussion board software packages have basic features to avert such attacks, for example by limiting the number of messages a user can post within a certain period. However, resourceful attackers may find ways to bypass these protection mechanisms. Luckily, DoS attacks are somewhat rare, since they require some technical sophistication and quite a bit of dedication to the purpose of sabotage. Hacker attacks, on the other hand, are more common. The most common hacker attack is password sniffing on unencrypted connections, and subsequently using the captured passwords to gain entry to the discussion board system, preferably as a user with administrator privileges. DoS and hacker attacks are serious forms of abuse and should be reported to the service provider and possibly to law enforcement authorities. Board operators do not always have the technical means to counter such attacks on their own.
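The rate-limiting protection mentioned above can be sketched as a sliding window per user: a post is rejected once the user has already submitted the maximum number of messages within the window. The class and parameter names below are invented for illustration.

```python
import time
from collections import defaultdict, deque

class FloodGuard:
    """Reject posts from a user who exceeds `max_posts` messages
    within `window` seconds. Illustrative sketch only."""

    def __init__(self, max_posts=5, window=60.0):
        self.max_posts = max_posts
        self.window = window
        self.history = defaultdict(deque)  # user -> recent post times

    def allow_post(self, user, now=None):
        now = time.time() if now is None else now
        q = self.history[user]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_posts:
            return False
        q.append(now)
        return True

guard = FloodGuard(max_posts=3, window=60.0)
results = [guard.allow_post("bot", now=t) for t in (0, 1, 2, 3)]
print(results)  # the fourth post within the window is rejected
```

Real board packages typically combine such a limiter with CAPTCHAs and sign-up restrictions, since a determined attacker can spread the flood over many accounts.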

Types of Moderation

The Usenet community generally distinguishes between four types of moderation, which are likewise applicable to web-based discussion board systems. These types of moderation differ in the way posts are moderated. They feature different decision and communication flow models.


Post-Moderation

The most common form of moderation is post-moderation, which means that either a single moderator or a group of moderators reviews contributions after they have been posted. In such a setting, messages ought to be reviewed on a regular basis (perhaps daily) and moderators ought to perform editorial tasks as required. Post-moderation is time-consuming if done correctly, because moderators need to review all content and respond to inappropriate content in time. Moderators have full censoring power.


Pre-Moderation

The most restrictive form of moderation is pre-moderation. Again, moderators have full censoring power and need to review every message, but content is reviewed before it goes online, not after. This means that posted messages first go into a waiting queue before they are approved and released by the moderator. The delay that results from this procedure is quite detrimental to discussions, because replies are not available to the community in real time. Since this normally drains the lifeblood from a discussion, pre-moderation is applied only in special situations where the sensitivity of the topic requires more restrictive action. Pre-moderated book reviews on some websites are one example.
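The waiting-queue mechanism described above can be sketched in a few lines of Python; the class and method names are illustrative, not taken from any particular board software.

```python
from collections import deque

class PreModerationQueue:
    """Submitted messages wait in a queue and only become visible
    after a moderator approves them. Illustrative sketch only."""

    def __init__(self):
        self.pending = deque()
        self.published = []

    def submit(self, message):
        self.pending.append(message)   # not yet visible to the board

    def approve_next(self):
        """Release the oldest waiting message to the board."""
        if self.pending:
            self.published.append(self.pending.popleft())

q = PreModerationQueue()
q.submit("First!")
q.submit("Me too")
q.approve_next()
print(q.published, list(q.pending))
```

The structure makes the drawback obvious: a message is invisible for exactly as long as it sits in `pending`, which is why pre-moderation suits slow-paced, sensitive content better than lively discussion.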

Reactive Moderation

Reactive moderation relies on alerts from members of the discussion board. It moves the task of supervision from the moderator to the audience by offering easily accessible means of reporting problems to the moderator. The moderator only needs to review those areas with reported problems. This form of moderation is quite effective in conjunction with automatic supervision, such as word filters. Its greatest advantage is that it reduces the moderation workload associated with the pre- and post-moderation methods. What is more, the legal responsibilities of the operator shift primarily toward removing questionable content rather than preventing it from being posted. The principal disadvantage of reactive moderation is that not all breaches of house rules and legal provisions might get reported.

Distributed Moderation

The distributed moderation model is even more radical. It dispenses with the concept of a designated moderator altogether. Instead, it relies on the assumption that a community can collectively decide what is appropriate for itself and what is not. Moderation tasks are thus carried out by the community by means of a voting system. Current implementations of voting systems are often similar to content rating systems. For example, if someone suggests a post for deletion, it takes a number of consenting votes to actually carry out the deletion. There are two problems with this approach. First, the community might have different views about “appropriate content” than the board operator. Second, online voting systems are still prone to abuse. Thus distributed moderation is not yet widespread, although some communities have used it with great success.
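A minimal sketch of such a deletion vote, assuming a simple threshold rule (the class, names, and threshold logic are invented for illustration and not taken from any real implementation):

```python
class VotingModeration:
    """A post is flagged for removal only after `threshold` distinct
    members have voted for its deletion. Illustrative sketch only."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.votes = {}  # post_id -> set of voters

    def vote_delete(self, post_id, voter):
        # Using a set means duplicate votes from one member are ignored.
        self.votes.setdefault(post_id, set()).add(voter)
        return len(self.votes[post_id]) >= self.threshold

mod = VotingModeration(threshold=2)
print(mod.vote_delete(42, "alice"))  # False: one vote so far
print(mod.vote_delete(42, "alice"))  # False: duplicate vote ignored
print(mod.vote_delete(42, "bob"))    # True: threshold reached
```

Even this toy version shows why abuse is a concern: the scheme is only as trustworthy as the board's ability to ensure that "alice" and "bob" are really distinct people.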

Choosing a content management system

If you are playing with the idea of using a content management system (CMS), or if your organisation has already decided to deploy a CMS, then you are facing an important but difficult decision. On the one hand, you know that a CMS is the best way to handle your ever-growing content. On the other hand, you are confronted with a bewildering variety of products that leaves you at a complete loss. To make things worse, you know that the choice of a CMS has far-reaching implications for business processes. Choosing a CMS is not an easy task. It is imperative to select your CMS solution wisely. Deploying an inappropriate product may thwart your project, and it may even be worse than deploying no CMS at all.

In the pioneer days of the Web, there was only one way of publishing information: coding it in HTML and uploading it. The extreme simplicity of this approach was offset by its laboriousness. Developing, updating, and maintaining a medium-scale website, say a hundred pages or more, required an insane amount of developer hours, and to make things worse, these were insanely expensive. The software industry soon responded to the dilemma by offering WYSIWYG editors and HTML code generators. With these tools it was possible to design and author websites graphically without having to care about nitty-gritty coding details.

The more advanced editors offered design templates, code snippets, plug-ins, and readymade sequences. They could generate the required set of HTML, JavaScript, and graphic files at a mouse click. These files then had to be uploaded one by one. Although this method is more efficient than manual coding, it still has several drawbacks. Whenever something is changed, pages must be generated and uploaded again, which is time consuming. Sometimes a small change in the design template can mean that hundreds of files need to be replaced. Moreover, the uploaded content is static. This means that it cannot change according to defined parameters, such as user preferences, sort order, date, and so on. Hence, static pages offer limited possibilities for interactive features. This drawback is overcome by the concept of dynamic web pages.

Dynamic pages are generated at request time. A dynamic web page is not a stored sequence of HTML tags, but an interpreted computer program (a script) that generates an HTML sequence according to predefined rules. This script is typically executed by a script language interpreter, which passes the resulting HTML sequence on to the web server. Dynamic web page scripting unfolds its full potential in combination with an information repository, such as a relational database system, which holds the actual text and media contents. HTML code and information are merged when a user requests a page, and the result changes depending on defined conditions. Today, almost all large websites are based on this principle.
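The principle can be demonstrated with a toy script that merges database rows into an HTML template at request time; the table, templates, and function name below are invented for illustration.

```python
import sqlite3

# The content lives in a database, not in a static HTML file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES ('Hello', 'First post.')")

PAGE = "<html><body>{items}</body></html>"
ITEM = "<h1>{title}</h1><p>{body}</p>"

def render_page():
    """Generate the HTML afresh on every request by merging the
    template with the current database contents."""
    rows = conn.execute("SELECT title, body FROM articles").fetchall()
    items = "".join(ITEM.format(title=t, body=b) for t, b in rows)
    return PAGE.format(items=items)

print(render_page())
```

Because the page is rebuilt on each call, adding a row to the `articles` table changes the output immediately, with no file to regenerate or upload. This is exactly the property static pages lack.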

The CMS principle

A content management system (CMS) is a computer program that facilitates the collaborative creation, storage, delivery, distribution, and maintenance of “content”, that is, documents, images, and other information. Typically the CMS is a web application and its content is distributed via the Internet or via a private intranet. A CMS exploits the principle of dynamic page generation and adds a further abstraction layer. It streamlines the process of website creation by automating page generation and by applying templates and predefined features to an entire website. This allows the webmaster to focus on actual content creation and management. A CMS either comes with special client software that allows webmasters to edit content and construct web pages, or it provides a web-based administrator interface for this purpose. The tasks of creating page layout, navigation, and scripts and of adding modules are left to the CMS. At the heart of every CMS is a database, usually a relational DBMS, which holds the information that constitutes the online content.

Types of CMS

Besides general-purpose CMS that facilitate general website creation, there are a number of specialised CMS. For example, Wikis or Wikiwebs are CMS for the collaborative creation of knowledge bases, such as encyclopaedias, travel guides, directories, etc. These systems typically make it easy for anyone to change or add information. Publication CMS (PCMS) allow publishers to deliver massive amounts of content online. They are frequently used by media organisations and publishing houses to create web versions of their print media or broadcasts. Transactional CMS couple e-commerce functions with rich content. They are used for applications that go beyond standard shopping cart functionality. Integrated CMS (ICMS) are systems that combine document management with content management. Frequently, the CMS part is an extension of a conventional document management application. Enterprise CMS (ECMS) are large applications that add a variety of specialised functions to the CMS core, such as document management, team collaboration, issue tracking, business process management, workflow management, customer relationship management, and so on.

It is also possible to define market segments by licensing cost. In this case, we can distinguish the following types:

  1. Free open-source CMS (no licensing cost). These products are typically quite simple and focus on general-purpose and publishing functionality. Portals and Wikis also belong to this category.
  2. Boxed solutions (up to USD 3,000). These products typically allow non-technical users to create and manage websites collaboratively.
  3. Midrange solutions (USD 3,001 to 30,000) commonly have a greatly extended set of functions in comparison to boxed solutions, although scope and philosophy may vary significantly. For example, there are web development platforms as well as powerful ICMS in this category.
  4. High-end solutions (USD 30,001 and up) are usually targeted at the enterprise market. Solutions in this class are often designed to handle massive amounts and types of documents and to automate business processes.
  5. Hosted solutions (for a monthly subscription fee) can be found in any of the previous categories. Instead of a one-time license cost, there is a monthly fee.

The market is highly fragmented and there is a great variety of products in every segment. The largest segment is general-purpose CMS, with a multitude of proprietary and open-source, commercial and non-commercial solutions. The sheer number of products makes a comprehensive review practically impossible. It is vital to narrow down the selection of CMS by compiling a list of requirements beforehand. In particular, the requirements should specify what sort of content you wish to manage, which category of CMS you are likely to prefer, and what its key features and capabilities should be. For example, if you wish to maintain documents and web pages in multiple languages, it is important to look for software that supports this from the outset. Although many CMS can be adapted to handle multilingual content, they do this in different ways. Some may be unsatisfactory to you.

CMS Selection Checklist

Sometimes it is useful to use checklists to determine product features. These can help to narrow down the number of products you might want to review more closely.

Commercial checklist

  • Availability
  • Price
  • Licensing model
  • Total cost of ownership

Technical checklist

  • Supported operating systems
  • Supported web servers
  • Supported browsers
  • Supported database systems
  • Required hardware
  • Programming language
  • System architecture

Functionality checklist

  • Content organisation model (hierarchic/segmented, centralised/decentralised, etc.)
  • Content generation features (editors, spell checkers, etc.)
  • Content attributes (author, publication date, expiry date, etc.)
  • Content delivery (presentation, layout, visualisation, etc.)
  • Content management (moving, deleting, archiving, etc.)
  • Content versioning (multilingual, multiple versions)
  • Media management (images, animations, audio, etc.)
  • Link management (automatic navigation, link consistency checks, etc.)
  • User management (authentication, security, granularity of access privileges, etc.)
  • Template management (design, installation, maintenance)
  • Features for searching and browsing content
  • Special features (email forms, feedback lists, discussion boards, etc.)
  • Extensibility (plug-ins, add-ons, third party modules, etc.)

Integration checklist

  • Integration with external text editors
  • Integration with external image and media editors
  • Integration with external data
  • Integration with static website content
  • Integration with legacy systems

Helpful websites

There are a number of websites that offer CMS comparisons, descriptions, tests, and reviews. These may be helpful in the second phase of selection. After requirements have been gathered and desired key features have been defined, these websites assist prospects in determining concrete products for closer review.


Product evaluation

The final step in CMS selection is to review and evaluate concrete products. This step may be fairly labour-intensive. Vendors must be invited. A trial version of the product must be obtained. It must be installed and configured properly. Its basic functions and features must be learned. Test data must be entered. Meetings and group reviews must be scheduled and held. The whole process may have to be repeated with a number of different products. This may sound off-putting, but the do-it-yourself approach is really the only way to ensure that you get the right product.

Management involvement

As always, management involvement is crucial. The decision-making process cannot be completely delegated to IT, because in the end, the job of the CMS is to automate a business function, not an IT function. Depending on the nature of your content, it may be a marketing function, an R&D function, a human relations function, or even a production function, as in the case of publishing houses. Depending on how you use the CMS, it may also have a large impact on organisational communication. Therefore, management should be involved in phases one and three of the selection process. At the very least, management should review and approve the requirements specification and join the final review meetings. Often it is important to get an idea of the “look and feel” of a product beforehand.

After the acquisition

Once the chosen CMS is acquired and properly installed, people may create and publish content as they wish and live happily ever after. Well, not quite. If users are happy with the system, there may be a quick and uncontrolled growth of content. If they aren’t, the system may gather dust and the electronic catalogues may remain empty. The usual approach to regulate this is to put a content manager in charge of the system. The role of the content manager is different from that of a traditional webmaster. While a webmaster needs to be very tech-savvy, a content manager merely needs to be computer literate. The main responsibility is content editing and organisation. Hence, the role of a typical content manager is that of an editor and librarian.

Long term perspectives

Proprietary content management systems are currently expensive, especially in the enterprise (ECM) segment. The overall market will remain fragmented in the medium term. In the long term, however, the CMS market is likely to be commoditised. This means free open-source systems are likely to dominate the market. Currently, open-source products are encroaching on the “boxed solution” and “midrange” markets. There are even a number of powerful open-source CMS with a web delivery focus, such as TYPO3, which are comparable to proprietary high-performance products. As open-source solutions become more powerful, this trend is likely to continue. Extensibility, a large user base, and commercial support will be crucial for a system to assume a market leader position. At the moment, however, there are no candidates in sight.

Ten sure-fire ways to crash your IT project

Although I am sure that you don’t need to learn how to crash IT projects, especially not your own, I would like to suggest this topic for three reasons. First, it’s fun. Gloating over the misfortunes of others may not be noble, but it is certainly edifying. Second, ever since Charles Babbage invented the computer it has been crashing. From blue screens of death to lost space probes, crashes seem to be an intrinsic part of the IT field. Third, we can actually learn from mistakes, even if they are not our own.

(1) Ambiguous specifications
(2) Lack of vision and communication
(3) Planning for disaster
(4) Lack of management commitment
(5) Lack of staff involvement
(6) Arrogance and ignorance
(7) Overambition
(8) Do-it-yourself solutions
(9) Silver bullets
(10) Scope creep

The nature of IT projects is intricate, complex, and sometimes unpredictable. The immense number of failures in the IT industry includes projects that get stuck, projects that never end, projects that overshoot their budget, projects that do not deliver, and projects that do all of the aforementioned. The latter is by far the most common type of failure. Often such occurrences are discreetly swept under the carpet by both the customer and the contractor, since neither party’s reputation is likely to gain from disclosure. This secrecy is somewhat unfortunate, because the post-mortem analysis of a crashed IT project offers considerable learning potential.

The author of this article has worked in the IT field for almost two decades and has seen a fair number of IT projects come down in a less than graceful manner; it would be presumptuous to claim otherwise. Having had the opportunity to observe and analyse the circumstances of ill-fated projects, he was able to identify the conditions and patterns that spell failure. Unsurprisingly, all of these are management issues rather than technological or financial ones. This insight often stands in contradiction to what the responsible managers and contractors claim. Of course, it is more convenient to blame things on the “wrong” technology, the “wrong” product, an “insufficient” budget, and so on.

(1) Ambiguous specifications

The number one killer of IT projects is arguably poor or ambiguous functional specifications. This is so because specifications stand at the beginning of a project, at a time when important course-setting decisions are made. IT projects, like living beings, are most vulnerable in their infancy, when wrong decisions have their greatest impact. Sloppy specifications inevitably lead to misunderstandings, oversights, false assessments, and eventually full-grown disputes.

There is no better way to screw up a project than to have no clear idea of it and put it out to tender. In order to accomplish this, it is best to assign an incompetent employee to write up the functional specifications. Ideally this would be someone with limited IT knowledge and an incomplete understanding of business requirements and workflow. This person should be asked to scribble up a few pages containing computer buzzwords, obscure management talk, and puzzling diagrams.

An invitation to tender of this make-up will doubtless attract numerous bids anyway. After all, contractors cannot be too picky about clients. Some of the tenders may give the impression that they are not completely based on guesswork; these are the ones to present to upper management. Upper management will then select the supplier whose logo most closely resembles that of IBM. Should the supplier have the audacity to suggest a requirements analysis at the client’s expense, this should be rejected with the utmost steadfastness and with the hint that further specifications will be worked out along the way.

(2) Lack of vision and communication

The deadly effect of poor specifications is closely rivalled by a lack of vision and communication. The dominant theme is: a problem that is not seen, not heard, and not talked about is not a problem. Naturally, this applies to both the client and the contractor. The client who doesn’t communicate his vision is just as harmful to project success as the contractor who conceals problems. It’s one thing not to have a clear vision, and it’s another not to be able to communicate it clearly.

To achieve the most disastrous results, it is recommended to replace clear vision with vague and unspecific goals. These are best communicated in an opaque language that makes references to features and milestones without actually defining what they consist of. Client participation in the various phases of project implementation should be avoided at all costs. After all, the contractor was hired to solve the problem, so he cannot expect the client to be bothered with answering questions. If client involvement is indispensable, then all work should be delegated to a lower-ranking executive who may be sacked if things go wrong.

(3) Planning for disaster

Planning for disaster results from the opposite attitude. Instead of paying too little attention to project management, it pays too much attention to it, or rather to its administrative details. Hence, the disaster-planning attitude has a tendency to generate a lot of paperwork. The principal assumption is that things will probably go wrong. The strategy is then to define a plethora of procedures to prevent things from going wrong, or at least to document how things went wrong in anticipation of ensuing legal proceedings. Since this strategy is extravagant and costly, it is preferred by large corporations and government organisations.

The trick is to raise the administrative overhead to an insane level. Project members should spend at least two thirds of their time in meetings, filling in forms, and generating documentation. Contracts should run to no fewer than 100 pages and should stipulate all kinds of provisions for the event of premature termination, breach, default, bankruptcy, death, and natural disaster. For this purpose, a lawyer should be hired from day one. Programmers, system analysts, and technicians must seek approval from their superiors, who must seek approval from top management, who must seek approval from their lawyers.

(4) Lack of management commitment

Every manager knows that “commitment is everything”. Because everybody knows it, managers must make it a point never to admit a lack of commitment. A manager typically says, “I am fully committed to the task, but unfortunately I don’t have time to answer your question right now.” That is an excellent excuse, because everybody understands that managers are very busy people. In addition, it is a diplomatic way of saying, “I would rather have dinner with my golf mates than ponder mind-numbing techie questions with the geeks from the IT department.” After all, managers have better things to do than rack their brains over bits and bytes.

To develop this technique to its fullest, one must adopt a feudal view of the workplace, where the manager is the sovereign and the IT department is one of the subordinate departments whose primary function is to serve and follow orders. Since every department needs to understand its proper place in the organisation, it is best to let the nerds know that management cannot be bothered with the trivialities of information technology.

Managers may simply claim to be “non-technical” and confer responsibility on one of the lackeys from IT, preferably a yes-man, who is elevated to midlevel management for this purpose. The new midlevel manager, now above the common crowd, does well to cover his back and hire an external consultant. The primary function of this consultant is to serve as a scapegoat in case things turn sour. This configuration allows for maximum passing of the buck and leaves the contractor clueless as to who is in charge of the project.

(5) Lack of staff involvement

Lack of staff involvement is a more academic term for the ivory-tower syndrome. It is common for IT systems to be implemented by people who have never worked in the position of those for whom the system is designed. Although this does not in itself constitute a problem, the situation may be creatively exploited in order to steer a project downhill. It is best to consider the user an abstract entity, an unimportant appendage to the system, and to desist completely from involving him in the design of the system. After all, users are replaceable entities. They have low intellects, and they should not be allowed to interfere with the magnificent conceptions of management and engineering.

Interviews, field tests, usability tests, acceptance tests, and pilot schemes are a complete waste of time. A mere user cannot fathom the big picture. He cannot possibly appreciate the complexities of a multi-tiered architecture or a distributed database system. Such work is better left to the technologists, who know perfectly well what the system should look like. Technologists don’t need to know about lowly business issues. The system can be perfected on the drawing board. If the system looks good on the drawing board, it follows that it will work in practice. Once the system is out of the lab and in the real world, trainers may be dispatched to educate the users about the benefits of the system.

(6) Arrogance and ignorance

We have already moved into the wondrous realm of arrogance. No doubt we can further capitalise on this trait to bring virtually any IT project to a screeching halt. The know-it-all IT manager is just one variation on the theme. A know-nothing CEO may have an even more destructive effect, because there is nothing quite like arrogance combined with ignorance. This person admits to being ignorant of IT, but he considers himself a top-notch business leader. He has seen company X implement system Y and double its profits since. Moreover, system Y is a market leader, and it costs a sum with many zeros. The vendors of system Y wear suits and talk business, unlike the geeks from the IT department. System Y must surely be good.

This leads us to the topic of gullibility. A fair number of company directors are flabbergasted by IT talk. When listening to expressions like “adaptive supply chain network”, “business intelligence platform”, or “data warehouse”, these great leaders just nod in quiet admiration. Yes, these are wonderful things to have. In the course of time, an organisation with gullible leadership may contract consultitis, an affliction that results either from hiring too many consultants or from hiring a consultant who continuously dazzles the audience with buzzwords and charts instead of solving actual problems.

One of the best ways to bring down an IT project early is to hire multiple consultants to solve the same problem. You can bet your bottom dollar that the consultants will fight a turf war over the client’s patronage. Instead of working on the best solution for a given problem, they will work out solutions that demonstrate how inadequate the other consultant’s approach is, for which they will charge $$$ per hour plus expenses.

(7) Overambition

Overambition is one of the most potent poisons for IT projects. It is usually concocted by planning committees who don’t have a realistic idea of complexity and time frames. The leitmotiv is: “Let’s solve all of our problems at once.” The recipe is fairly simple: Draw up a list of all the issues that the organisation wants to automate, from stock optimisation to HR management. Demand that the system should spit out a complete tax return at the push of a button. Throw in the latest hardware and try to use new and unproven application software. Shake. Do not stir.

Alternatively, you may try the following approach: Set artificially tight deadlines for each milestone and include a contractual clause stipulating that the contractor may be burnt at the stake for missing any of them. During project implementation, insist on incorporating many extras into the system. Urge the IT team to respond to each of the manifold itches of the user community. When a day turns into a week, and a week turns into a month, call for an emergency meeting and define new artificially tight deadlines.

(8) Do-it-yourself solutions

Overambition occasionally takes the form of “I did it my way”. The principal motive for the do-it-yourself approach is a distinctive self-image. First you have to assert that your organisation is unique and special. This leads to the deeper heroic insight that none of the standard packages fits the needs of your organisation. At this point it is important to maintain self-confidence. Tell yourself how special you are. Don’t listen to advisers who recommend adopting standard software and changing your workflow. The rules and procedures of your organisation are sacred. They have existed for decades; they are proven and true. The IT system should adapt itself to your workflow, not vice versa.

The only solution, then, is to courageously pioneer the field and tailor your own IT system. At this point you can look forward to an exciting time of requirement analyses, feasibility studies, and implementation and test cycles. The IT adventure has begun. On your way to success you are likely to wear out a number of IT managers and consultants. Don’t let this distract you. The rewards are great: you will obtain an absolutely unique system that costs a hundred times as much as a standard package and takes ages to complete. If this sounds too daring, you should acquire a standard package and customise it beyond recognition. That way you make sure you will have to go through the entire customisation process again at every update cycle.

(9) Silver bullets

Silver bullets are simultaneously popular and infamous. They are infamous because everybody knows they don’t work. They are popular because they hold a huge promise, and because there is a fine line between methodology and crankiness. “Methodology” simply means the application of a set of defined procedures and work practices. Methodology turns into crankiness at the point where it becomes a mantra. In contrast to a methodology, a mantra is a mere verbal formula, often devoid of meaning. But we are getting ahead of ourselves. How exactly does a methodology become a mantra?

Quite simply by chanting it instead of practicing it. Example: Company X has identified a problem with quality control. Top management has thought long and hard about it and decided that the Six Sigma program is the way to solve it. The promise that Six Sigma holds (fewer than four defects in one million parts) has enticed the company to splash out on a new enterprise-spanning Define-Measure-Analyse-Improve-Control software system. Few people actually understand what it does and what it means, but everybody understands that it’s called Six Sigma. So everyone joins the chorus and sings: “Define-Measure-Analyse-Improve-Control”. Problems will surely disappear if the phrase is repeated often enough.

The psychology of the silver bullet is based on faith. A problem analysis is usually suggestive of certain approaches and solutions. Some of these solutions may be championed by leaders or highly respected individuals in the organisation, which gives them more credibility. When a solution is finally approved by the CEO, it gains even more credibility. People start to believe in the method. If the experts say it’s right, and the bosses say it’s right, then it must be right. People within the organisation stop questioning the solution. At this point, the solution becomes a silver bullet.

(10) Scope creep

The phenomenon of “scope creep” actually deserves to stand higher in this list, because it is quite an effective IT project crasher. It is also deceptively simple. Scope creep means uncontrolled change in the definition and scope of a project. The best way to achieve it is to have no clear idea of the project from the outset. Just let your nebulous thoughts guide you, and resist any attempt to concretise the scope and define clear boundaries. Practice the philosophy that the world is in constant flux. You need to be flexible. Then, during project implementation, demand the same from the people who implement it, and throw in manifold additions and alterations. Your motto is: I need more features.

Then sit back and watch the spectacle unfold. The project plan gets redrawn after every meeting. Deadlines are constantly missed, and teams get reshuffled. A few months into the project, the initial requirement analysis will look like a cryptic document from ancient times that bears little resemblance to the current state of affairs. Contractors will jump ship, consultants will come and go, and the project will start to develop a life of its own.

Thailand’s human resource slump

ComWorld Expo Thailand

This year’s ComWorld show, which was staged at the Siam Paragon shopping centre in Bangkok earlier this month, received a fairly reserved welcome from Thailand’s ICT Minister Sithichai Pokaiudom. The minister, who had been newly appointed by Thailand’s “transitional” government, was one of the keynote speakers at the opening of the exhibition on February 8th. In his speech he said that it was wrong for Thai people to admire modern technology that was not developed by Thais. He stated: “It is a fake development because the country is now getting worse as almost everything at the exhibition here is imported and nothing is made by Thais.”

The minister said that the term “Thai computers” should mean Thai-made computer components and Thai design, not just imported components assembled in Thailand. “It is sad that today we cannot find any Thai products. The technology show here provides foreigners an opportunity to take money from Thai people. To be truly proud, it should show technology that has been developed by Thais,” the ICT Minister said.

As a foreigner who has lived in Thailand and worked in Thailand’s IT industry for more than 10 years, I find the minister’s statements curious. First of all, the opening of an IT sales fair seems an awkward occasion for such criticism. Second, the implicit demand that all parts of a complex technological product such as a computer should be domestically made strikes me as fairly unrealistic; I wonder if any country in the world produces a 100% domestically made computer. Finally, the protectionist, anti-foreigner undertones of this speech, which we have recently heard more often from this government, are somewhat worrying.

The truth is that Thailand does produce quite a few components, from hard disks to chips, which are used in computer products all over the world; Seagate, Western Digital, and Microchip all maintain manufacturing facilities in Thailand. However, the engineering that goes into making these components is almost exclusively imported. The research and design necessary to develop competitive high-tech products is something that Thailand cannot currently provide. I certainly agree with the minister that it would be very desirable for Thailand to have its own engineering force to compete with the likes of Intel and Seagate in the future. Alas, this is presently not the case, and one should perhaps examine the reasons for this.

The reason is that Thailand lacks several important prerequisites for becoming a major player in information technology, a situation which is unfortunately not new and which has never been remedied by the Thai government. One of the major causes is the poor standard of IT education in Thailand. An information technology degree is still a rarity, and IT students do not receive the same standard of education as in other countries. From my own experience, I can say that the majority of computer science graduates from Thai universities cannot be employed productively in a commercial environment. It takes one or two years of on-the-job training before they stop being a cost factor and start to perform as one would expect of fresh graduates.

Perhaps the most important factor is culture. Thailand does not seem to provide a culture that fosters research and development in the high-tech industry. If you speak to local engineers, it appears that the majority are perfectly happy to apply existing technologies instead of inventing new ones. This may be the result of an education system that rewards rote learning and reproduction instead of creativity and “thinking outside the box”. As a consequence, the innovation rate of Thai companies is low, and the Thai engineering industries have stayed comparatively small and ineffectual.

Finally, there is a consistent lack of government support. IT has probably never been a priority topic on the agenda of the Thai government, which is understandable, since the country has many other important issues to solve. However, the multitude of enthusiastic announcements by previous governments in support of the national IT industry stands in stark contrast to what has actually been done. In spite of the proclamations of Dr. Sithichai’s predecessors to make the Thai IT industry internationally competitive, surprisingly little has happened. So, instead of reproaching the industry for building computers from imported components, it would be much more interesting to hear from the ICT minister how he thinks the situation can be changed for the better and how he plans to implement actions to that end. Alas, there was silence.