Zend Framework Review

Earlier this week, I gave the latest version of the Zend Framework, v1.9.2, another test drive. I had previously dabbled in v1.7.4 as well as a pre-1.0 incarnation of the framework. I will not list the whole breadth of its functionality here, since you can find that elsewhere on the Internet. Neither will I present a point-by-point analysis; just the salient points, short and sweet, which you can expect to be coloured by my personal view.

Suffice to say that ZF (the Zend Framework) is based on MVC (you'd never have guessed) and provides functionality for database access, authentication and access control, form processing, validation, I/O filtering, web services access, and a bunch of other things you would expect from a web framework. The first thing to notice is that the framework has grown, and I mean this quite literally: from a few megabytes in its early days to a whopping 109 MB (unzipped) distribution package. Only about 21 MB are used by the framework itself; the rest contains demos, tests, and… the Dojo toolkit, an old acquaintance, which is optional.

The documentation for ZF was excellent right from the beginning and it has stayed that way. Included is a 1170-page PDF file, which also bears testimony to the growing size and complexity of the framework. Gone are the days when one could hack together a web application without reading a manual. One of the first things to realise is that ZF is a glue framework rather than a full-stack framework. This means it feels more like a library or a toolkit. ZF does not prescribe architecture and programming idioms the way many other web frameworks do. This appears to fit the PHP culture well, though it must be mentioned that most ZF idioms come highly recommended, since they represent best OO practices.

Another thing that catches the eye is the lack of an ORM component, which may likewise be rooted in traditional PHP culture. If you want object mapping, you have to code around ZF's DB abstraction and use Doctrine, Propel, or something similar. Let's start with this topic.

Database Persistence
ZF provides a number of classes for DB abstraction. Zend_Db_Table implements a table data gateway using reflection and DB metadata; you only need to define table names and primary keys. Zend_Db_Adapter, Zend_Db_Statement and Zend_Db_Select provide database abstraction and let you create DB-independent queries and SQL statements in an object-oriented manner. However, as you are dealing directly with the DB backend, all your data definitions go into the DB rather than into objects. Although this matches the traditional PHP approach, it means that you need to create schemas by hand, which may irritate people who have been using ORM layers, like Hibernate, for years. On the other hand, a full-blown ORM layer would likely incur a significant performance cost in PHP, so maybe the ZF approach is sane.
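For illustration, here is a minimal sketch of the table data gateway and the select object; the users table and its columns are made up for this example:

<?php
// Table data gateway: declare only the table name and primary key;
// column metadata is discovered from the database itself.
class Users extends Zend_Db_Table_Abstract
{
    protected $_name    = 'users';
    protected $_primary = 'id';
}

$users = new Users();
$row   = $users->find(42)->current();   // look up by primary key
echo $row->email;

// A DB-independent query assembled in an object-oriented manner
$db     = $users->getAdapter();
$select = $db->select()
             ->from('users', array('id', 'email'))
             ->where('created > ?', '2009-01-01')
             ->order('email');
$result = $db->fetchAll($select);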

Fat Controller
Like many other frameworks, ZF puts a lot of application logic into the controller, and this is my main gripe with it. It seems to be the result of the idea that the “model” should concern itself only with shovelling data from the DB into the application and vice versa. A case in point is the coupling between Zend_Form and validation, which leaves you no option but to put both into the controller. I think that data validation logically belongs to the model, while form generation logically belongs to the view. Pulling both into the middle not only bloats the controller, but is likely to lead to duplicated validation logic in the long run. That's why I love slim controllers. Ideally, a controller should do nothing but filtering, URL rewriting, dispatching, and error processing.
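To illustrate the point, here is a sketch of the typical idiom; ContactController and _saveContact() are hypothetical names used only for this example:

<?php
// Sketch: form definition, validation, and model access all end up
// in the action controller.
class ContactController extends Zend_Controller_Action
{
    public function submitAction()
    {
        $form = new Zend_Form();
        $form->addElement('text', 'email', array(
            'required'   => true,
            'validators' => array('EmailAddress'),  // validation tied to the form
        ));

        $request = $this->getRequest();
        if ($request->isPost() && $form->isValid($request->getPost())) {
            $this->_saveContact($form->getValues()); // the model's job, done here
        }
        $this->view->form = $form;                   // the view's job, done here
    }
}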

MVC Implementation
Having mentioned coupling, it would do ZF an injustice to say that things are tightly coupled. Actually, the opposite is the case, as even the MVC implementation is loosely coupled. At the heart you find the Zend_Controller_Front class, which is set up to intercept all requests to dynamic content via URL rewriting. The rewriting mechanism also allows user-friendly and SEO-friendly URLs. The front controller dispatches to custom action controllers derived from Zend_Controller_Action; if non-standard dispatching is required, this can be achieved by implementing a custom router interface with special URL inference rules. Zend_Controller_Action is aptly named, because that's where the action is, i.e. where the application accesses the model and does its magic. The controller structure provides hooks and interfaces for the realisation of a plugin architecture.
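A minimal bootstrap might look like this sketch (the controller directory path is an assumption):

<?php
// index.php: every rewritten request ends up here; the front controller
// routes it to an action controller found in the given directory.
require_once 'Zend/Controller/Front.php';

$front = Zend_Controller_Front::getInstance();
$front->setControllerDirectory('../application/controllers');
$front->run();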

Views
Views are *.phtml files that contain HTML interspersed with plenty of display code in the traditional <? ?> tags. It should be possible to edit *.phtml files with a standard HTML editor. The Zend_View class is a thin object from which view files pull display data. View fragments are stitched together with the traditional PHP require() or with layouts. It is also possible to use a third-party templating system. Given the <? ?> tags, there is little to prevent application logic from creeping into the view, except reminding developers that this is an abominable practice punishable by public ridicule.
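For illustration, a small view fragment might look like this; the title and products variables are assumed to have been assigned by the controller:

<!-- products.phtml (sketch): HTML interspersed with display code -->
<h2><?php echo $this->escape($this->title) ?></h2>
<ul>
<?php foreach ($this->products as $product): ?>
    <li><?php echo $this->escape($product->name) ?></li>
<?php endforeach ?>
</ul>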

Layouts
Layouts are a view abstraction. They enable you to arrange the logical structure of page layouts in neat and clean XML, which is then transformed into suitable output (meaning HTML in most cases). As you can probably infer, this requires a second parsing step inside the PHP application, which is somewhat unfortunate, since PHP itself already parses view components. While layouts are optional, they are definitely nice to have. It's probably the best a framework can do given the language limitations of PHP, which only understands the <?php> tag. If the XML capabilities of PHP itself were extended to process namespaced tags like <php:something>, one could easily create custom tags, and the need for performance-eating two-step processing would probably evaporate. Ah, wouldn't it be nice?

Ajax Support
ZF does not include its own JavaScript toolkit or set of widgets, but it comes bundled with Dojo and it offers JSON support. The Zend_Json class provides super-simple serialisation and deserialisation of PHP objects from/to JSON. It can also translate XML to JSON. The Zend_Dojo class provides an interface to the Dojo toolkit and makes Dojo's widgets (called dijits) play nicely with Zend_Form. Of course, you are free to use any other Ajax toolkit instead of Dojo, such as YUI, jQuery, or Prototype.
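A quick sketch of the JSON round trip ($xmlString is assumed to hold a well-formed XML document):

<?php
// Serialise a PHP structure to JSON and read it back.
$json = Zend_Json::encode(array('user' => 'alice', 'roles' => array('admin')));
$data = Zend_Json::decode($json);      // yields a PHP array again

// XML-to-JSON translation
$json = Zend_Json::fromXml($xmlString);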

Flexibility
As mentioned, ZF is very flexible. It is loosely coupled at the design level, which is both a blessing and a curse: a blessing, because it puts few restrictions on application architecture, and a curse, because it creates gaps for code to fall through. A case in point is dependency injection à la Spring. In short, there isn't much in the way of dependency management, apart from general OO practices of course. Nothing keeps programmers from having dependencies floating around in global space or in the registry. A slightly more rigid approach that enforces inversion of control when wiring together the Zend components would probably not have hurt.
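The following sketch shows the kind of wiring that nothing prevents: a dependency parked in the global registry and fetched from anywhere.

<?php
// Bootstrap: park the DB adapter in the global registry...
Zend_Registry::set('db', $dbAdapter);

// ...and pull it out somewhere deep inside an unrelated class,
// creating a hidden dependency that no interface documents.
$db = Zend_Registry::get('db');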

Overall Impression
My overall impression of ZF is very good. It is a comprehensive and well-designed framework for PHP web applications. What I like best about it is that it offers a 100% object-oriented API that looks very clean and makes extensive use of best OO practices, such as the open/closed principle, programming to interfaces, composition over inheritance, and standard design patterns. The API is easy to read and understand. The internals of its implementation likewise make a good impression. The code looks clean and well structured, which is quite a nice change from PHP legacy code. ZF still involves a non-trivial learning curve because of its size. I've only had time to look into the key aspects and didn't get around to trying out more specialised features like Zend_Captcha, Zend_Gdata, Zend_Pdf, Zend_Soap, web services, and all the other things that ZF offers to web developers. If I had to choose a framework for a new web application, ZF would definitely be among the top contenders.

CSS Grid Layouts Brittle

Recently I changed parts of the HTML template for this blog from CSS divs to tables. Gasp, tables? That's so nineties. Indeed, it is. However, the CSS floating divs were just too brittle. An occasional wide image or wide block of <pre> text would mess up the sidebar badly. Also, the visual results were different in different browsers. The problem puppy was a browser whose name shall not be mentioned (but I can tell you it starts with “I” and ends with “6.0”). Call me old-fashioned, but I think that a table-based design often beats CSS in terms of robustness. Why spend hours testing a complex CSS design if the same job can be accomplished with tables in a few minutes? Tables are especially handy with multiple columns, nested columns and rows, and elastic designs. I would still use CSS in most situations, but you can't beat tables for robust grid layouts.

HTML 5 Preview

Because HTML is at the very core of the World Wide Web, you would expect it to be a mature and refined technology. You would also expect it to provide a flexible platform for Web application development and deployment. As most web developers know, the reality is a bit different. HTML started out as a rather simple SGML application for creating hyperlinked documents. It originally provided a basic set of elements for data viewing, data input, and formatting: it did a little bit of everything, yet nothing quite right. While this was practical for whipping up quick-and-dirty websites, it proved to be inadequate for more demanding presentation tasks and fine-tuned user interaction. Thus a whole bunch of supplemental technologies came into being, including CSS, JavaScript, Flash and finally AJAX. You know the story. All of this was quite a messy affair and unfortunately it still is.

While the HTML 4.01 specification has ruled the Web since 1999, the fifth incarnation of HTML was released by the W3C as a working draft earlier this year and has been constantly updated since. The HTML 5 specification is supposed to pave the way for future Web standards. It incorporates an older W3C draft dubbed “Web Forms 2.0”, which is W3C's answer to Web 2.0 and the World Wide Web becoming a platform for distributed applications. Don't expect anything too radical, though. It neither delivers the hailed “rich GUI” for the Internet, nor will it replace current technologies like AJAX. It is rather designed as a natural extension of them. It provides good backward compatibility while smoothing some of the rough edges of HTML. No more, no less. Let's have a look at the new features in more detail.

HTML 5 mends the split between the preceding HTML 4 and XHTML 1.0 specifications. Rather than being defined in terms of syntactical rules, it makes the DOM tree its conceptual basis. Thus HTML 5 can be expressed in two similar syntaxes, the “traditional” one and the XML syntax, which both result in the same DOM tree. It goes far beyond the scope of previous specifications, for example by spelling out how markup errors are handled, rather than leaving it to browser vendors, and by specifying APIs for new and old elements. These APIs describe how scripting languages interact with HTML. So, what’s new? The following elements have been dropped from the specification:

  • <acronym>
  • <applet>
  • <basefont>
  • <center>
  • <dir>
  • <font>
  • <frame>
  • <frameset>
  • <isindex>
  • <noframes>
  • <s>
  • <small>
  • <strike>
  • <tt>
  • <u>
  • <xmp>

The following attributes are also goners:

    abbr, accesskey, align, alink, axis, background, bgcolor, border, cellpadding and cellspacing, char, charoff, charset, classid, clear, compact, codebase, codetype, coords, declare, frame, frameborder, headers, height, hspace, language, link, marginheight and marginwidth, name, nohref, noshade, nowrap, profile, rules, rev, scope, scrolling, shape, scheme, size, standby, summary, target, text, type, valuetype, valign, version, vlink, width.

Some of these elements and attributes are quite obscure, so perhaps they won't be missed. Others, like <center>, align, background, and <u>, were heavily used in the past, although most of them were already deprecated in HTML 4. The message here is clear: get rid of presentational markup and use CSS instead. The <b>, <i>, <em> and <strong> tags have miraculously survived, however. Although primarily used for text formatting in the past, these tags have been assigned new (non-presentational) semantics to make them respectable. Another conspicuous omission is frames. Yes, frames are gone! But you may breathe a sigh of relief to know that <iframe> is still there. Speaking of presentational versus semantic HTML, there are quite a few additions to HTML 5 in the latter category. The new semantic tags are designed to aid HTML authors in structuring text and to make it easier for search engine crawlers to parse information in web pages. Here they are (explanations provided by W3C):

  • <section> represents a generic document or application section. It can be used together with h1-h6 to indicate the document structure.
  • <article> represents an independent piece of content of a document, such as a blog entry or newspaper article.
  • <aside> represents a piece of content that is only slightly related to the rest of the page.
  • <header> represents the header of a section.
  • <footer> represents a footer for a section and can contain information about the author, copyright information, et cetera.
  • <nav> represents a section of the document intended for navigation.
  • <dialog> can be used to mark up a conversation in conjunction with the <dt> and <dd> elements.
  • <figure> can be used to associate a caption together with some embedded content, such as a graphic or video.
  • <details> represents additional information or controls which the user can obtain on demand.

Most of these, except the last two, behave like the <div> element, which means their primary use is to identify a block of content that belongs together. Unlike <div>, however, each of these elements carries specific semantics. As a rough illustration, a blog front page marked up with the new elements might look like the sketch below (my own, not from the specification):
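<body>
  <header><h1>My Weblog</h1></header>
  <nav><a href="/">Home</a> <a href="/archive">Archive</a></nav>
  <section>
    <article>
      <header><h2>Post title</h2></header>
      <p>The post text goes here...</p>
      <footer>Posted by the author on some date.</footer>
    </article>
    <aside><p>A loosely related side note.</p></aside>
  </section>
  <footer>Copyright notice.</footer>
</body>

Not very exciting? HTML 5 also introduces the following new elements (explanations again from the W3C document):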

  • <audio> and <video> for multimedia content. Both provide an API so application authors can script their own user interface, but there is also a way to trigger a user interface provided by the user agent. <source> elements are used together with these elements if there are multiple streams of different types available.
  • <embed> is used for plugin content.
  • <mark> represents a run of marked (highlighted) text.
  • <meter> represents a measurement, such as disk usage.
  • <time> represents a date and/or time.
  • <canvas> is used for rendering dynamic bitmap graphics on the fly, such as graphs, games, et cetera.

The <embed> tag supersedes the <applet> tag. It defines some sort of embedded content that doesn't expose its internal structure to the DOM tree; the content is typically rendered by a browser plugin. The <audio> and <video> tags are perhaps more interesting, because they make it possible to include multimedia files or streams directly in the HTML document without having to specify a vendor-specific plugin to play the content. Granted, this could previously be done with the <embed> tag, but <embed> was never a W3C standard and it isn't supported by all browsers. Evidently, W3C has decided not to simply bless the mainstream browser practice, adding the <audio> and <video> tags instead while reserving the <embed> tag for the purpose named above.
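As a sketch (file names and MIME types made up), offering the same video in two alternative streams could look like this:

<!-- The browser plays the first source it supports; the fallback text
     is shown by user agents that know neither. -->
<video controls>
  <source src="talk.ogv" type="video/ogg">
  <source src="talk.mp4" type="video/mp4">
  <p>Your browser does not support the video element.</p>
</video>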

Arguably the most exciting additions to HTML 5 (at least from the perspective of a web developer) are the extensions to form processing and data rendering, and the related APIs, such as the editing API or the drag-and-drop API. These additions previously evolved as a separate standard under the name Web Forms 2.0 and are now incorporated into HTML 5. The <input> element has been enhanced to support several new data types. New elements for user interface components have been defined, similar to those found in GUI applications. For example, HTML 5 finally features the long-awaited combo box, a combination of text input and drop-down list, which has been a standard component in GUIs for decades. A new <datagrid> element for the interactive/editable representation of data in tabular, list, or tree form is also present. Here are the new <input> types:

  • type="datetime" – a date and time (year, month, day, hour, minute, second, fraction of a second) with the time zone set to UTC.
  • type="datetime-local" – a date and time (year, month, day, hour, minute, second, fraction of a second) with no time zone.
  • type="date" – a date (year, month, day) with no time zone.
  • type="month" – a date consisting of a year and a month with no time zone.
  • type="week" – a date consisting of a year and a week number with no time zone.
  • type="time" – a time (hour, minute, seconds, fractional seconds) with no time zone.
  • type="number" – a numerical value.
  • type="range" – a numerical value, with the extra semantic that the exact value is not important.
  • type="email" – an e-mail address.
  • type="url" – an internationalised resource identifier.

The input element also has several new attributes in HTML 5 that enhance its functionality; many of these also apply to other form controls such as <select>, <textarea>, etc. A combined sketch follows the list:

  • list="listname" – used in conjunction with the <datalist> element to create a combobox.
  • required – indicates that the user must provide an input value.
  • autofocus – automatically focuses the control upon page load.
  • form – allows a single control to be associated with multiple forms.
  • inputmode – gives a hint to the user interface as to what kind of input is expected.
  • autocomplete – tells the browser to remember the value when the user returns to the page.
  • min – minimum value constraint.
  • max – maximum value constraint.
  • pattern – specifies a pattern constraint.
  • step – specifies a step constraint.
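Here is a sketch combining several of the new types and attributes in one form; field names are made up, and browser support was still patchy at the time of writing:

<form action="/register" method="post">
  <!-- validated as an e-mail address; must be filled in; focused on load -->
  <input type="email" name="mail" required autofocus>
  <!-- date picker, numeric range, and a combo box via datalist -->
  <input type="date" name="birthday">
  <input type="number" name="guests" min="1" max="10" step="1">
  <input type="text" name="browser" list="browsers">
  <datalist id="browsers">
    <option value="Firefox">
    <option value="Opera">
  </datalist>
  <input type="submit">
</form>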

The following new elements provide additional user interface components for web applications. The last three are actually not UI components themselves, but components used for scripting the UI, for example from the server side:

  • <command> represents a command the user can invoke (e.g. toolbar button or icon).
  • <datalist> together with the new list attribute for <input> is used to create comboboxes.
  • <output> represents some type of output, such as from a calculation done through scripting.
  • <progress> represents the completion progress of a task, such as a download or a series of expensive operations.
  • <menu> represents a menu. The element has three new attributes: type, label and autosubmit. They allow the element to transform into a menu as found in typical user interfaces as well as providing for context menus in conjunction with the global contextmenu attribute.
  • <datagrid> represents an interactive representation of a tree list or tabular data.
  • <ruby>, <rt> and <rb> allow for marking up ruby annotations.
  • <eventsource> represents a target that “catches” remote server events.
  • <datatemplate>, <rule> and <nest> provide a templating mechanism for HTML.

Let’s briefly look at the new <datagrid> element. A <datagrid> usually has a <table> child element, although <select> and <datalist> are also possible, for example to create a tree control. The columns in the datagrid can have clickable captions for sorting. Columns, rows, and cells can each have specific flags, known as classes, which affect the functionality of the datagrid element. Rows are selectable, and single cells (or all cells) can be made editable. A cell can contain a checkbox or values that can be cycled. Rows can also be separator rows. Datagrids have a DOM API for updating, inserting, and deleting rows or columns. They also have a data provider API that controls grid data content and editing. A minimal example is sketched below.
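As described above, a minimal datagrid might be sketched like this (draft syntax as I read it; the data is made up):

<!-- The table child provides the initial grid content; captions become
     clickable for sorting, and the DOM/data provider APIs can then
     update rows and columns dynamically. -->
<datagrid>
  <table>
    <tr><th>File</th><th>Size</th></tr>
    <tr><td>report.pdf</td><td>120 KB</td></tr>
    <tr><td>notes.txt</td><td>4 KB</td></tr>
  </table>
</datagrid>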

I hope you found this brief overview useful. Please note that the features mentioned here don’t cover everything that is new in HTML 5, but hopefully they catch the essence. The HTML 5 specification is a work in progress; it is still changing and evolving. You can find the latest editor’s draft at http://www.w3.org/html/wg/html5/. An overview of the changes from HTML 4 is available at http://www.w3.org/TR/html5-diff/.

Web Standards: OpenID

OpenID appears to be red hot right now. The adoption of this emerging standard has accelerated in the first half of 2008 as it has entered the radar screens of web developers. Many large organisations, such as Google, Yahoo, IBM, Microsoft and AOL, provide OpenID servers. Popular Internet sites, such as LiveJournal, Blogger, Jabber, Drupal and Wikitravel, support OpenID logins, and the list is growing. Browser support for OpenID is just around the corner (it is a feature in Firefox 3, for example). But we are getting ahead of ourselves. What is OpenID and why is it good? Put simply, OpenID solves two common problems: that of having to manage multiple accounts on different websites, and that of storing sensitive account information on websites you don't control. With a single OpenID account you can log into hundreds of different websites. Best of all, you, the user, manage the account information, not the website owner. In more technical terms, OpenID is an open, decentralised, user-centric digital identity framework. I'll explain this in some more detail.

OpenID is an open standard, because nobody owns it and because it's free of patents and commercial licensing. The standard is maintained by the OpenID Foundation; free open source implementations are available in many languages, including Java and PHP. It is decentralised, because it does not depend on a specific domain server; an existing OpenID provider can be rerouted very easily, as we shall see. It is user-centric, because it allows users to manage and control their identity information. Users identify themselves with a URL they own. While traditional authentication relies on the combination of a name or email address and a password, OpenID requires just one item: either a URL or an XRI (eXtensible Resource Identifier). To understand how this works, let's look at the OpenID protocol and see what an OpenID login procedure actually does.

Let’s assume you already have an OpenID. You can use the same OpenID with any OpenID-enabled website (called the “relying party”) by typing it into the OpenID login field or by letting your browser fill out the field automatically. When you click Submit, the relying party performs a “discovery” procedure to retrieve an authentication URL and subsequently performs an “association” procedure for secure information interchange with the OpenID provider. You are then transported to the authentication URL (i.e. to the “OpenID provider”). Normally this is a site like yahoo.com or myopenid.com, but nothing keeps you from running your own OpenID server. After authenticating at the OpenID provider’s secure login page, you are redirected back to the relying party. If the relying party has requested identity information (name, gender, date of birth, etc.), you are prompted to choose which information should be sent to it. Often this information is used to fill in a registration form at the relying party. It isn’t retrieved for a normal login, but the OpenID protocol supports it. Once you are back at the relying party’s website, the relying party checks whether the authentication was approved and verifies that the information was received correctly from the OpenID provider.

It sounds slightly complicated, and by looking at the OpenID specifications you will find that the protocol is indeed quite involved. From the user’s point of view, however, it is really simple. The user only sees the OpenID login screen. If the user has enabled automatic login at the OpenID provider via a certificate or cookie, the only screen the user sees is the “approve/deny” screen. Logging into a website could not be easier. Only one password needs to be remembered. Registration forms can be pre-filled. Logins to specific sites can be fully automated. The best thing is that the user has full control over the OpenID provider thanks to the discovery process. During discovery, the relying party looks for two fields in the header of the web page that it finds at the OpenID URL. In HTML discovery, these two fields are named openid.server and openid2.provider. Example:

<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid2.provider" href="http://www.myopenid.com/server" />

These two entries commonly point to the same endpoint (the OpenID provider) and are used by version 1 and version 2 of the OpenID protocol, respectively. If you have a website, you can simply edit the HTML of your site to add these entries to the HTML header. You can then use the URL of that page as your OpenID. The advantage of using your own web page is that you control the OpenID endpoint. Hence, you can switch OpenID providers while retaining your OpenID, simply by editing your site’s HTML code.

If you are going to incorporate OpenID into your existing website, you might want to think twice about implementing the protocol yourself. It isn’t trivial, and there are already several open source libraries that can be used, e.g. openid4java if you program in Java, or the JanRain PHP OpenID library, which works with PHP 4.3 and up. Additional libraries for these two languages, as well as for Ruby, Python, C#, C++, and other languages, can be found at http://wiki.openid.net/Libraries.
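To give a flavour of the JanRain library, here is a heavily abbreviated relying-party sketch; the store path, URLs, and the missing error handling are assumptions, so consult the library documentation for the real thing:

<?php
// Step 1: begin authentication, then redirect the user to the provider.
require_once "Auth/OpenID/Consumer.php";
require_once "Auth/OpenID/FileStore.php";

session_start();
$store    = new Auth_OpenID_FileStore('/tmp/openid_store');
$consumer = new Auth_OpenID_Consumer($store);

$authRequest = $consumer->begin($_POST['openid_identifier']); // discovery + association
$redirectUrl = $authRequest->redirectURL('http://example.com/',
                                         'http://example.com/return.php');
header('Location: ' . $redirectUrl);

// Step 2 (in return.php): verify the provider's response.
$response = $consumer->complete('http://example.com/return.php');
if ($response->status == Auth_OpenID_SUCCESS) {
    echo 'Logged in as ' . htmlspecialchars($response->getDisplayIdentifier());
}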

Towards web engineering

Perhaps you have never heard of the term web engineering. That is not surprising, because it is not commonly used. The fact that it is not commonly used is, however, surprising. Very surprising, actually. The process of software creation resembles that of website creation, and engineering principles are readily applied to the former. A website can be considered an information system. However, there is one peculiarity: the creation of a website is as much a technical as an artistic process.

The design of graphics and text content (presentation) goes hand in hand with the design of data structures, interactive features, and user interface (application). Despite some crucial differences, web development and software development have many features in common. Since we are familiar with software engineering, and since we understand its principles and advantages, it seems sensible to apply similar principles to web development. Thus web engineering.

Before expanding on this thought and delving into the details of the engineering process, let’s try to define the term “web engineering”. Here is the definition that Prof. San Murugesan suggested in his 2005 paper Web Engineering: Introduction and Perspectives: “Web engineering uses scientific, engineering, and management principles and systematic approaches to successfully develop, deploy, and maintain high-quality web systems and applications.”

Maturation Stages

It seems that current web development practices rarely conform to this definition. Most websites are still implemented in an uncontrolled code-and-fix style without codified procedures and heuristics. The person responsible for implementation is often a webmaster who is basically a skilled craftsman. The process of crafted web development is quite different from an engineering process. Generally speaking, there is a progression that all fields of industrial production go through, from craftsmanship to engineering. The following figure illustrates this maturation process (Steve McConnell, 2003):

[Figure: Industry Maturity Stages]

At the craft stage, production is carried out by accomplished craftsmen, who rely on artistic skill, rule of thumb, and experience to create a product. Craftsmen tend to make extravagant use of resources, and production is often time-consuming. Therefore, production cost is high.

The commercial stage is marked by a stronger economic orientation. Growing demand, increased competition, and companies instead of craftsmen carrying out work are hallmarks of the commercial phase. At this stage, the production process is systematically refined and codified. Commercial suppliers provide standardised products at competitive prices.

Some of the technical challenges encountered at the commercial stage cannot be solved, because the research and development costs are too high for individual manufacturers. If the economic stakes are high enough, a corresponding science will emerge. As the science matures, it develops theories that contribute to commercial practice. This is the point at which production reaches the professional engineering stage.

On a global level, software production was in the craft stage until the 1970s and has progressed towards the commercial stage since then. Though the level of professional engineering is already on the horizon, the software industry has not reached it yet. Not all software producers make full use of available methods, while other, more advanced methods are still being researched.

Recent developments in methodology

Web development became a new field of production with the universal breakthrough of the Internet in the 1990s. During the subsequent decade, web development largely remained in the craft stage. This is now changing, albeit slowly. The move from hand-coded websites to template-based websites, content management systems, and standard application packages signals the transition from the craft to the commercial stage. Nonetheless, web development has not yet drawn level with the state of the art in software development.

Until recently, web development was like a blast from the past. Scripting a web application involved the use of arcane languages to produce non-reusable, non-object-oriented, non-componentised, non-modularised, and, in the worst case, non-procedural code. HTML, JavaScript, and messy CGI scripts were glued together to create so-called dynamic web pages. In other words, web development was a primitive craft. Ironically, all principles of software engineering were either forgotten or ignored. Thus, in terms of work practices, developers found themselves thrown back twenty years. The rapid growth of the Internet, however, quickly made these methods appear outmoded.

During the past 15 years, the World Wide Web underwent a transformation from a linked information repository (for which it was originally designed) to a universal vehicle for worldwide communication and transactions. It has become a major delivery platform for a wide range of applications, such as e-commerce, online banking, community sites, e-government, and others. This transition has created demand for new and advanced web development technologies.

Today, we have some of these technologies. We have unravelled the more obscure aspects of HTML. We have style sheets to encapsulate presentation logic. We have OOP scripting languages for web programming. We have multi-tier architectures. We have more capable componentised browsers. We have specialised protocols and applications for a great variety of purposes.

However, we don’t yet have established methodologies to integrate all these technologies and build robust, maintainable, high-quality web applications. We don’t yet have a universally applicable set of best practices that tells us how to build, say a financial web application, from a server language, a client language, and a database backend. Consequently, there is still a good deal of black magic involved in web development.

If you build a house, you can choose from a number of construction techniques and building materials, such as brick, wood, stone, or concrete. For any of these materials, established methods exist that describe how to join them to create buildings. Likewise, there are recognised procedures to install plumbing, electricity, drainage, and so on. When you build a house, you fit ready-made components together. Normally you don’t fabricate your own bricks, pipes, or sockets.

Unfortunately, this is not so in the field of web development. Web developers do not ubiquitously rely on standard components and established methods. On occasion, they still manufacture the equivalent of bricks, pipes, and sockets for their own purposes. And they fit them together at their own discretion, rather than by following standard procedures. Unsurprisingly, this results in a more time consuming development process and in less predictable product quality.

Having recognised the need for web engineering, a number of questions arise. What does web engineering have in common with software engineering? What are the differences? Which software engineering methods lend themselves best to web development? Which methods must be defined from scratch? A detailed examination of all these questions is unfortunately beyond the scope of this article. However, we can briefly outline the span of the field and name those aspects that are most crucial to the web engineering process.

Web applications are different

Web applications are inherently different from traditional software applications, because they combine multimedia content (text, graphics, animation, audio, video) with procedural processing. Web development comprises software development as well as the discipline of publishing. This includes, for instance, authoring, proofing, editing, graphic design, layout, etc.

Web applications evolve faster and in smaller increments than conventional software. Installing, fixing, and updating a website is easier than distributing and installing a large number of applications on individual computers. Web applications can be used by anyone with Internet access. Hence, the user community may be vast and may have different cultural and educational backgrounds. Security and privacy requirements of web applications are more demanding. Web applications grow in an environment of rapid technological change. Developers must constantly cope with new standards, tools, and languages.

Multidisciplinary approach

Building a large website involves dissimilar tasks such as photo editing, graphic design, user interface design, copywriting, and programming, which in turn require a palette of dissimilar skills. It is therefore likely that a number of specialists are involved in the creation of a website, each one working on a different aspect of it. For example, there may be a writer, a graphic designer, a Flash specialist, and a programmer in the team. Hence, web development calls for a multidisciplinary approach and teamwork techniques. “Web development is a mixture between print publishing and software development, between marketing and computing, between internal communications and external relations, and between art and technology.” (Powell, 2000)

The website lifecycle

The concept of the website lifecycle is analogous to that of the software lifecycle. Since the field of software engineering knows several competing lifecycle models, there are likewise different approaches to website design. For example, the waterfall model can be used for relatively small websites with mainly static content:

1. Requirements Analysis
2. Design
3. Implementation
4. Integration
5. Testing and Debugging
6. Installation
7. Maintenance

This methodology obviously fails for larger websites and web applications, for the same reason it fails for larger software projects: the development process of large-scale projects is incremental, and requirements typically evolve with the project. But there is another argument that speaks for a more incremental/iterative methodology. Web applications are much easier to roll out and update than traditional software applications. Shorter lifecycles therefore make good practical sense. The “release early, release often” philosophy of the open source community certainly applies to web development. Frequent releases increase customer confidence and feedback, and help avoid early design mistakes.

Categories of web applications

Different types of web applications can be distinguished by functionality (Murugesan, 2005):

  • Informational – online newspapers, product catalogues, newsletters, manuals, reports, online classifieds, online books.
  • Interactive – registration forms, customised information presentation, online games.
  • Transactional – online shopping (ordering goods and services), online banking, online airline reservation, online payment of bills.
  • Workflow-oriented – online planning and scheduling, inventory management, status monitoring, supply chain management.
  • Collaborative work environments – distributed authoring systems, collaborative design tools.
  • Online communities and market places – discussion groups, recommender systems, online market places, e-malls (electronic shopping malls), online auctions, intermediaries.

Maintainability

Maintainability is an absolutely crucial aspect of website design. Even small to medium sites can quickly become difficult to maintain. Many developers find it tempting to use static HTML for presentation logic, because it allows for quick prototyping, and because script code can be mixed seamlessly with plain HTML. What is more, presentation logic is notoriously difficult to separate from business and application logic in web applications. However, this approach is only suitable for very small sites. In most other cases, an HTML generator, a template engine, or a content management system will be more appropriate.

Dependency reduction also contributes to maintainability. Web applications have multiple levels of dependencies: internal script code dependencies, HTML and template dependencies, and style sheet dependencies. These issues need to be addressed individually. The reduction of dependencies usually comes at the price of increased redundancy or complexity. For example, it is more complex to use individual style sheets for different templates than to use one master style sheet for all of them. Internal dependencies and complexity levels need to be balanced by an experienced developer.

As in conventional software development, code reusability is paramount to maintainability. Modern object-oriented scripting languages allow for adequate encapsulation and componentisation of web application parts. The problem is that developers do not always make use of these programming techniques, because they require more upfront planning effort. In environments with high economic pressure, there is a tendency towards ad-hoc coding, which yields quick results but lacks maintainability.

Scalability

Scalability is one of the friendlier facets of web development, because the web platform is, at least in theory, inherently scalable. Increasing traffic can often be handled by simply upgrading the server system. Nonetheless, developers still need to consider scalability when designing software systems. Two important issues are session management and database management. Memory usage is proportional to the amount of session data held, and I/O and CPU load are proportional to concurrent database access. Developers therefore do well to anticipate peak loads and optimise the links between the different tiers of an n-tier system.

Ergonomics and aesthetics

The look and feel of a website (its layout, navigation, colour schemes, menus, etc.) makes up the ergonomics and aesthetics of the site. This is a more artistic aspect of web development, and it should be left to a professional with the relevant skills and a good understanding of usability. Software ergonomics and aesthetics should not be an afterthought, but an integral part of the development process. These factors are strongly influenced by culture. A website that appeals to a global audience is more difficult to build than a website targeted at a specific group and culture.

Website testing

Website testing comprises all aspects of conventional software testing. In addition, it involves special fields of testing which are specific to web development:

  • Page display
  • Semantic clarity and cohesion
  • Browser compatibility
  • Navigation
  • Usability
  • User Interaction
  • Performance
  • Security
  • Code standards compliance

Compatibility and interoperability

Web applications are generally intended to run on a large variety of client computers. Users might use different operating systems, different font sets, different languages, different monitors, different screen sizes, and different browsers to display web pages. In some cases, web applications run not only on standard PCs, but also on pocket computers, PDAs, mobile phones, and other devices. While it is nearly impossible to guarantee proper page display on all of these devices, there needs to be a clearly defined scope of interoperability. This scope should spell out at least the browsers, browser versions, languages, and screen sizes that are to be supported.

Bibliography

Steve McConnell (2003). Professional Software Development.
San Murugesan (2005). Web Engineering: Introduction and Perspectives.
T.A. Powell (1998). Web Site Engineering: Beyond Web Page Design.