Ten sure-fire ways to crash your IT project

Although I am sure that you don’t need to learn how to crash IT projects, especially not your own, I would like to suggest this topic for three reasons. First, it’s fun. Gloating over the misfortunes of others may not be noble, but it is certainly edifying. Second, ever since Charles Babbage invented the computer it has been crashing. From blue screens of death to lost space probes, crashes seem to be an intrinsic part of the IT field. Third, we can actually learn from mistakes, even if they are not our own.

(1) Ambiguous specifications
(2) Lack of vision and communication
(3) Planning for disaster
(4) Lack of management commitment
(5) Lack of staff involvement
(6) Arrogance and ignorance
(7) Overambition
(8) Do-it-yourself solutions
(9) Silver bullets
(10) Scope creep

The nature of IT projects is intricate, complex, and sometimes unpredictable. The immense number of failures in the IT industry includes projects that get stuck, projects that never end, projects that overshoot budget, projects that do not deliver, and projects that do all of the aforementioned. The last is by far the most common type of failure. Often such occurrences are discreetly swept under the carpet, by both the customer and the contractor, since neither party’s reputation is likely to gain from disclosure. This condition of secrecy is somewhat unfortunate, since the post-mortem analysis of a crashed IT project offers some learning potential.

The author of this article has worked in the IT field for almost two decades and has seen a fair number of IT projects come down in a less than graceful manner; it would be presumptuous to claim otherwise. Having had the opportunity to observe and analyse the circumstances of ill-fated projects, he was able to identify the conditions and patterns that spell failure. Unsurprisingly, all of these are management issues rather than technological or financial issues. This insight often stands in contradiction to what the responsible managers and contractors claim. Of course, it is more convenient to blame things on the “wrong” technology, the “wrong” product, an “insufficient” budget, and so on.

(1) Ambiguous specifications

The number one killer of IT projects has to be poor or ambiguous functional specifications. This is so because specifications stand at the very beginning of a project, at a time when important course-setting decisions are made. IT projects, like living beings, are most vulnerable in their infancy, when wrong decisions have their greatest impact. Sloppy specifications inevitably lead to misunderstandings, oversights, false assessments, and eventually full-grown disputes.

There is no better way to screw up a project than to have no clear idea of it and to put it out to tender anyway. In order to accomplish this, it is best to assign an incompetent employee to write up the functional specifications. Ideally this would be someone with limited IT knowledge and an incomplete understanding of business requirements and work flow. This person should be asked to scribble together a few pages containing computer buzzwords, obscure management talk, and puzzling diagrams.

An invitation to tender of this make-up will doubtless attract numerous bids anyway. After all, contractors cannot be too picky about clients. Some of the tenders may give the impression that they are not completely based on guesswork. These are the ones to present to upper management. Upper management will then select the supplier whose logo most closely resembles that of IBM. Should the supplier have the audacity to suggest a requirements analysis at the client’s expense, this should be rejected with the utmost steadfastness and with the hint that further specifications will be worked out along the way.

(2) Lack of vision and communication

The deadly effect of poor specifications is closely rivalled by a lack of vision and communication. The dominant theme is: a problem that is not seen, not heard, and not talked about is not a problem. Naturally, this applies to both the client and the contractor. The client who doesn’t communicate his vision is just as harmful to project success as the contractor who conceals problems. It’s one thing not to have a clear vision, and it’s another not to be able to communicate it clearly.

To achieve the most disastrous results, it is recommended to replace clear vision with vague and unspecific goals. These are best communicated in an opaque language that makes references to features and milestones without actually defining what they consist of. Client participation in the various phases of project implementation should be avoided at all costs. After all, the contractor was hired to solve the problem, so he cannot expect the client to be bothered with answering questions. If client involvement is indispensable, then all work should be delegated to a lower-ranking executive who may be sacked if things go wrong.

(3) Planning for disaster

Planning for disaster results from the opposite attitude. Instead of paying too little attention to project management, it pays too much attention to it, or rather to its administrative details. Hence, the disaster-planning attitude has a tendency to generate a lot of paperwork. The principal assumption is that things will probably go wrong. The strategy is then to define a plethora of procedures to prevent things from going wrong, or at least to document how things went wrong in view of ensuing legal proceedings. Since this strategy is extravagant and costly, it is preferred by large corporations and government organisations.

The trick is to raise the administrative overhead to an insane level. Project members should spend at least two thirds of their time in meetings, filling in forms, and generating documentation. Contracts should be no less than 100 pages long. They should stipulate all kinds of provisions for the event of premature termination, breach, default, bankruptcy, death, and natural disaster. For this purpose, a lawyer should be hired from day one. Programmers, system analysts, and technicians must seek approval from their superiors, who must seek approval from top management, who must seek approval from their lawyers.

(4) Lack of management commitment

Every manager knows that “commitment is everything”. Because everybody knows it, managers must make it a point never to admit a lack of commitment. A manager typically says, “I am fully committed to the task, but unfortunately I don’t have time to answer your question right now.” That is an excellent excuse, because everybody understands that managers are very busy people. In addition, it is a diplomatic way of saying, “I would rather have dinner with my golf mate than ponder mind-numbing techie questions with the geeks from the IT department.” After all, managers have better things to do than rack their brains over bits and bytes.

To develop this technique to its fullest, one must adopt a feudal view of the workplace, where the manager is the sovereign and the IT department is one of the subordinate departments whose primary function is to serve and follow orders. Since every department needs to understand its proper place in the organisation, it is best to let the nerds know that management cannot be bothered with the trivialities of information technology.

Managers may simply claim to be “non-technical” and confer responsibility on one of the lackeys from IT, preferably a yes-man, who is elevated to midlevel management for this purpose. The new midlevel manager, who is now above the common crowd, does well to cover his back and hire an external consultant. The primary function of this consultant is to serve as a scapegoat in case things turn sour. This configuration allows for maximum passing of the buck and leaves the contractor clueless as to who is in charge of the project.

(5) Lack of staff involvement

Lack of staff involvement is a more academic term for ivory tower syndrome. It is common that IT systems are implemented by people who have never worked in the position of those for whom the system is designed. Although this does not in itself constitute a problem, the situation may be creatively exploited in order to steer a project downhill. It’s best to consider the user an abstract entity, an unimportant appendage to the system, and desist completely from involving him in the design of the system. After all, users are replaceable entities. They have low intellects and they should not be allowed to interfere with the magnificent conception of management and engineering.

Interviews, field tests, usability tests, acceptance tests, and pilot schemes are a complete waste of time. A mere user cannot fathom the big picture. He cannot possibly appreciate the complexities of a multi-tiered architecture or a distributed database system. Such work is better left to the technologists, who know perfectly well what the system should look like. Technologists don’t need to know about lowly business issues. The system can be perfected on the drawing board. If the system looks good on the drawing board, it follows that it will work in practice. Once the system is out of the lab and in the real world, trainers may be dispatched to educate the users about the benefits of the system.

(6) Arrogance and ignorance

We have already moved into the wondrous realm of arrogance. No doubt we can further capitalise on this trait to bring virtually any IT project to a screeching halt. The know-it-all IT manager is just one variation on the theme. A know-nothing CEO may have an even more destructive effect, because there’s nothing quite like arrogance combined with ignorance. This person admits to being ignorant of IT, but he considers himself a top-notch business leader. He has seen company X implement system Y and double its profits since. Moreover, system Y is a market leader and it costs a sum with many zeros. The vendors of system Y wear suits and they talk business, unlike the geeks from the IT department. System Y must surely be good.

This leads us to the topic of gullibility. A fair number of company directors are flabbergasted by IT talk. When listening to expressions like “adaptive supply chain network”, “business intelligence platform”, or “data warehouse”, these great leaders just nod in quiet admiration. Yes, these are wonderful things to have. In the course of time, an organisation with gullible leadership might contract consultitis, an affliction that results either from hiring too many consultants, or from hiring a consultant who continuously dazzles the audience with buzzwords and charts instead of solving actual problems.

One of the best ways to bring down an IT project early is to hire multiple consultants to solve the same problem. You can bet your bottom dollar on the consultants fighting a turf war over the client’s patronage. Instead of working on the best solution for a given problem, they will work out solutions that demonstrate how inadequate the other consultant’s approach is, for which they will charge $$$ per hour plus expenses.

(7) Overambition

Overambition is one of the most potent poisons for IT projects. It is usually concocted by planning committees who don’t have a realistic idea of complexity and time frames. The leitmotiv is: “Let’s solve all of our problems at once.” The recipe is fairly simple: Draw up a list of all the issues that the organisation wants to automate, from stock optimisation to HR management. Demand that the system should spit out a complete tax return at the push of a button. Throw in the latest hardware and try to use new and unproven application software. Shake. Do not stir.

Alternatively, you may try the following approach: Set artificially tight deadlines for each milestone and include a contractual clause stipulating that the contractor may be burnt at the stake for missing any of them. During project implementation, insist on incorporating many extras into the system. Urge the IT team to respond to each of the manifold itches of the user community. When a day turns into a week, and a week turns into a month, call for an emergency meeting and define new artificially tight deadlines.

(8) Do-it-yourself solutions

Overambition occasionally takes the form of “I did it my way”. The principal motive for the do-it-yourself approach is a distinctive self-image. First you have to assert that your organisation is unique and special. This results in the deeper heroic insight that none of the standard packages fits the needs of your organisation. At this point it is important to maintain self-confidence. Tell yourself how special you are. Don’t listen to advisers who recommend adopting standard software and changing your work flow. The rules and procedures of your organisation are sacred. They have existed for decades; they are proven and true. The IT system should adapt itself to your work flow, not vice versa.

The only solution is then to courageously pioneer the field and tailor your own IT system. At this point you are looking forward to an exciting time of requirement analyses, feasibility studies, implementation and test cycles. The IT adventure has begun. On your way to success it is likely that you will wear out a number of IT managers and consultants. Don’t let this distract you. The rewards are great. You will obtain an absolutely unique system that costs a hundred times as much as a standard package and takes ages to complete. If this sounds too daring, you should acquire a standard package and customise it beyond recognition. That way you make sure you will have to go through the entire customisation process again at every update cycle.

(9) Silver bullets

Silver bullets are simultaneously popular and infamous. They are infamous because everybody knows they don’t work. They are popular because they hold a huge promise and because there is a fine line between methodology and crankiness. “Methodology” simply means the application of a set of defined procedures and work practices. Methodology turns into crankiness at the point where it becomes a mantra. Unlike a methodology, a mantra is a mere verbal formula, often devoid of meaning. But we are getting ahead of ourselves. How exactly does a methodology become a mantra?

Quite simply by chanting it instead of practising it. Example: Company X has identified a problem with quality control. Top management has thought long and hard about it and decided that the Six Sigma program is the way to solve it. The promise that Six Sigma holds (fewer than four defects per million parts) has enticed the company to splash out on a new enterprise-spanning Define-Measure-Analyse-Improve-Control software system. Few people actually understand what it does and what it means, but everybody understands that it’s called Six Sigma. So everyone joins the chorus and sings: “Define-Measure-Analyse-Improve-Control”. Problems will surely disappear if the phrase is repeated often enough.

The psychology of the silver bullet is based on faith. A problem analysis is usually suggestive of certain approaches and solutions. Some of these solutions may be championed by leaders or highly respected individuals in the organisation, which gives them more credibility. When the solution is finally approved by the CEO, it gains even more credibility. People start to believe in the method. If the experts say it’s right, and the bosses say it’s right, then it must be right. People within the organisation stop questioning the solution. At this point, the solution becomes a silver bullet.

(10) Scope creep

The phenomenon of “scope creep” actually deserves to stand higher in this list, because it is quite an effective IT project crasher. It’s also deceptively simple. Scope creep means uncontrolled change in the definition and scope of a project. The best way to achieve this is to have no clear idea of the project from the outset. Just let your nebulous thoughts guide you and resist any attempt to concretise the scope and define clear boundaries. Practise the philosophy that the world is in a steady flux. You need to be flexible. Then, during project implementation, demand the same from the people who implement it, and throw in manifold additions and alterations. Your motto is: I need more features.

Then sit back and watch the spectacle unfold. The project plan gets redrawn after every meeting. Deadlines are constantly missed, and teams get reshuffled. A few months into the project, the initial requirement analysis will look like a cryptic document from ancient times that bears little resemblance to the current state of affairs. Contractors will jump ship, consultants will come and go, and the project will start to develop a life of its own.

Database duel – MySQL vs. PostgreSQL

Almost all non-trivial applications need to store data of some kind. If the data has the form of records, or n-tuples, it is typically handled by a relational database management system (RDBMS). Relational databases are conceptually founded on set theory and predicate logic. Data in an RDBMS is arranged in tables whose elements can be linked to each other. Today almost all RDBMS use SQL (Structured Query Language) to implement the relational model. RDBMS with SQL have been in use since the late 1970s. RDBMS were long an expensive corporate technology; the first open source systems became available during the late 1990s. Presently PostgreSQL and MySQL are the most popular open source RDBMS.
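
As a minimal illustration of the relational model, consider the following SQL sketch. The customers and orders tables are hypothetical, but the syntax is standard and is accepted by both MySQL and PostgreSQL.

    -- Two tables linked by a key: every order references a customer.
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  VARCHAR(100) NOT NULL
    );

    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       DECIMAL(10,2) NOT NULL
    );

    -- A relational query: join the two tables and aggregate per customer.
    SELECT c.name, SUM(o.total) AS order_total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name;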

Both database systems are widely used for web applications. Although MySQL has a much larger user base (an estimated 6 million installations by 2005), the growth of PostgreSQL has recently accelerated. The latter initially came out of an academic environment: PostgreSQL was developed at the University of California, Berkeley, as a successor to the earlier INGRES database. Until 1995, it used the QUEL query language instead of SQL. Since version 6.0, the software has been maintained and advanced by a team of volunteers and released free under the BSD license. In contrast, MySQL was developed in a commercial environment by the Swedish company TCX Dataconsult, and later by MySQL AB. It started out as a rewrite of the mSQL database and gradually acquired more and better features. MySQL is released under a dual licensing scheme (GPL and a paid commercial license).

Since the PostgreSQL developers had a head start of almost ten years, the PostgreSQL database has hitherto had more features than MySQL, especially advanced features that are desirable in an “enterprise” computing environment. These include advanced database storage, data management tools, replication, and backup tools. MySQL, on the other hand, used to have an edge over PostgreSQL in terms of speed, offering better performance for concurrent database access. Lately, however, this gap has been closing: PostgreSQL is getting faster while MySQL acquires more enterprise features. The crucial 5.0 release of MySQL in October 2005 added stored procedures, triggers, and views.
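
To give an idea of what the 5.0 release brought to MySQL, here is a brief sketch in MySQL syntax; the orders and orders_log tables are hypothetical.

    -- A view: a stored query that can be used like a table.
    CREATE VIEW large_orders AS
        SELECT id, customer_id, total
        FROM orders
        WHERE total > 1000;

    -- A trigger: log every new order into a separate audit table.
    DELIMITER //
    CREATE TRIGGER orders_audit
    AFTER INSERT ON orders
    FOR EACH ROW
    BEGIN
        INSERT INTO orders_log (order_id, logged_at)
        VALUES (NEW.id, NOW());
    END//
    DELIMITER ;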

Let’s look at the commonalities first. Both systems are fully relational, using SQL for data definition, data manipulation, and data retrieval. They run on Windows, Linux, and a number of Unices. MySQL also runs on MacOS. Both databases come with a graphical administration tool and query builder, as well as backup, repair, and optimisation tools. They offer standard connectors such as ODBC and JDBC, as well as APIs for all major programming languages. Both systems support foreign keys and referential integrity, subselects, transactions, unions, views, stored procedures, and triggers. Among the high-end features that both RDBMS offer are ACID-compliant transaction processing, multiple isolation levels, procedural languages, schemas (metadata), hot backups, data loading, replication (as an add-on in PostgreSQL), table spaces for disk storage layout, terabyte scalability, and SSL. MySQL and PostgreSQL also both support the storage of geographic information (GIS). PostgreSQL additionally has network-aware data types that recognise IPv4 and IPv6 addresses.
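
As one example of these shared capabilities, the following transaction sketch (with a hypothetical accounts table) runs unchanged on PostgreSQL and on MySQL with InnoDB tables: either both updates take effect or neither does.

    -- Transfer 100 from account 1 to account 2 atomically.
    START TRANSACTION;

    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;

    COMMIT;   -- or ROLLBACK to undo both changes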

Now, let’s look at the differences. PostgreSQL is an object-relational database, which means that it has object-oriented features, such as user-definable database objects and inheritance. Users can define data types, indexes, operators (which can be overloaded), aggregates, domains, casts, and conversions. PostgreSQL supports array data types. Inheritance in PostgreSQL allows a table to inherit characteristics from a parent table. PostgreSQL also has very advanced programming features. In addition to its native procedural language, PL/pgSQL (which resembles Oracle’s PL/SQL), PostgreSQL procedures can be written in scripting languages, such as Perl, PHP, Python, etc., or in compiled languages, such as C++ and Java. In contrast, MySQL (since version 5.0) only supports a native stored procedure language that follows the ANSI SQL:2003 standard.
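
The following PostgreSQL sketch illustrates both points: a table that inherits the columns of a parent table, and a small, hypothetical PL/pgSQL function.

    -- Table inheritance: cities gets all columns of places plus its own.
    CREATE TABLE places (
        name       TEXT NOT NULL,
        population INTEGER
    );

    CREATE TABLE cities (
        elevation  INTEGER
    ) INHERITS (places);

    -- A minimal PL/pgSQL function.
    CREATE FUNCTION add_tax(amount NUMERIC) RETURNS NUMERIC AS $$
    BEGIN
        RETURN amount * 1.2;
    END;
    $$ LANGUAGE plpgsql;

    SELECT add_tax(100);   -- returns 120.0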

PostgreSQL/MySQL Comparison Chart


The most evident advantage that MySQL offers in terms of features is its so-called pluggable storage engines. One may choose from a number of different data storage models, which allows the database administrator to optimise databases for the intended application. For example, a web application that makes heavy use of concurrent reads with few write operations may use the MyISAM storage engine to achieve top performance, while an online booking system may use the InnoDB storage engine for ACID-compliant transactions. Another interesting characteristic of MySQL not found in PostgreSQL is its support for distributed databases, which goes beyond mere database replication. Functionality for distributed data storage is offered through the NDB and FEDERATED storage engines, supporting clustered and remote databases respectively.
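
In practice, the engine is simply chosen per table at creation time, as in this MySQL sketch with hypothetical tables:

    -- MyISAM: fast reads, no transactions; suited to read-heavy content.
    CREATE TABLE page_views (
        page_id   INT NOT NULL,
        viewed_at DATETIME NOT NULL
    ) ENGINE = MyISAM;

    -- InnoDB: ACID transactions and row-level locking; suited to bookings.
    CREATE TABLE bookings (
        id          INT AUTO_INCREMENT PRIMARY KEY,
        customer_id INT NOT NULL,
        booked_at   DATETIME NOT NULL
    ) ENGINE = InnoDB;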

There are further differences, of course. MySQL is generally faster than PostgreSQL. It maintains a single server process and handles new connections with threads, instead of spawning a new process for each connection as PostgreSQL does. This is a great advantage for web applications that connect on each page view. In addition, the MyISAM storage engine provides tremendous performance for both simple and complex SELECT statements. Stability is another advantage of MySQL. Due to its larger user base, MySQL has been tested more intensively, and it has historically been more stable than PostgreSQL.

PostgreSQL has a slight advantage over MySQL/InnoDB for concurrent transactions, because it makes use of Multiversion Concurrency Control (MVCC), a mechanism otherwise found mainly in enterprise-grade commercial RDBMS. Another advantage of PostgreSQL is its relatively strict compliance with the ANSI SQL 92/99 standards, especially with regard to data types. The ANSI SQL implementation of MySQL is less complete by comparison. However, MySQL has a special ANSI mode that disregards its proprietary extensions.
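
MySQL’s ANSI mode is switched on per session (or server-wide) via the sql_mode variable; the orders table below is hypothetical.

    -- Put the current session into ANSI mode for stricter, more portable SQL.
    SET SESSION sql_mode = 'ANSI';

    -- In ANSI mode, double quotes delimit identifiers (as in PostgreSQL
    -- and the SQL standard) rather than string literals.
    SELECT "total" FROM orders;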

In terms of backup/restore capabilities, MySQL provides somewhat less convenience than PostgreSQL and commercial enterprise RDBMS. Nevertheless, hot backup and restore operations can be performed with both systems. Both PostgreSQL and MySQL/InnoDB allow transactional tables to be backed up simply by using a single transaction that copies all relevant tables. The disadvantage of this method is that it uses a lot of resources, which might compromise system performance.

With MySQL, a better solution is to use the replication mechanism for a continuous backup. PostgreSQL allows recovery from disk failure through point-in-time recovery (PITR). This method combines file-system-level backups with a write-ahead log that records all changes to the database. Thus it is possible to recreate a snapshot of the database at any point in time. In most cases, a crashed database can be recovered up to the last transaction before the crash. PITR is also convenient for large databases, since it conserves resources.

MySQL Strengths

  • Excellent code stability
  • Excellent performance, fast CONNECT and SELECT
  • Multiple storage engines to choose from
  • Larger user base (thus larger number of applications and libraries)
  • Support for distributed databases
  • Many high-quality GUI tools available
  • Commercial support widely offered

PostgreSQL Strengths

  • Object-oriented features
  • Advanced programming concepts
  • Supports multiple programming languages
  • High ANSI SQL conformance
  • Mature high-end features
  • Robust online backups
  • Very liberal BSD license

In summary, PostgreSQL and MySQL are both mature products with many enterprise-level features. They are both catching up with the best commercial RDBMS and are presently making inroads into the high-end market. The philosophies of the two RDBMS differ in several ways. Roughly speaking, MySQL is targeted at developers who expect a workhorse database with proven performance, while PostgreSQL suits developers who expect advanced features and programming concepts. MySQL offers more deployment options, whereas PostgreSQL offers more flexibility for developers.