Fixing a broken UEFI Grub boot loader

I recently installed Arch Linux on a new Dell laptop with hybrid GPUs and UEFI. The firmware did not allow switching to legacy MBR boot mode, which I normally prefer because it is easier to install, so I had no choice in this case. To make matters worse, the newly created UEFI partition wasn’t recognised by the firmware and the computer didn’t boot. If you are in a similar situation, the following instructions might help.

I assume that you have created a UEFI partition of at least 300 MB with the type “EFI System” on a GUID partition table, located at /dev/sdb1. Your actual device may differ; it could be /dev/sda1 or /dev/sda2, for example. I also assume that you have mounted the UEFI partition as follows:

# mkdir /boot/efi
# mount /dev/sdb1 /boot/efi

Furthermore, I assume that you have installed Grub successfully with:

# grub-install --target=x86_64-efi --bootloader-id=GRUB --efi-directory=/boot/efi
# grub-mkconfig -o /boot/grub/grub.cfg

Normally this should leave you with a bootable system. In my case, however, there were old “ghost” boot loader entries left over that had been invalidated by recreating the UEFI partition, and I first had to remove these manually. For this purpose, I used efibootmgr. Invoking the command without parameters shows a list of boot loader entries. For example:

# efibootmgr
BootCurrent: 0007
Timeout: 0 seconds
BootOrder: 0002,0003,0004,0005,0006,0007
Boot0000* grub_arch
Boot0001* GRUB
Boot0002* Preinstalled
Boot0003* Diskette Drive
Boot0004* USB Storage Device
Boot0005* CD/DVD/CD-RW Drive
Boot0006* Onboard NIC
Boot0007* UEFI: SK hynix SC311 SATA 256GB, Partition 1

Let’s assume that 0002 is an invalid entry. You can delete it with:

# efibootmgr -b 2 -B

Note that 2 is written without preceding zeros. You can also change the boot order or activate and deactivate single boot loader entries with the same command. If that does not work, it can usually be done through the firmware user interface instead.
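For example, to put the GRUB entry (0001 in the listing above) first in the boot order, or to deactivate an entry without deleting it:

# efibootmgr -o 0001,0007
# efibootmgr -b 3 -A
# efibootmgr -b 3 -a

The first command sets the BootOrder, the second deactivates entry Boot0003 (deactivated entries are listed without the asterisk), and the third reactivates it.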

Before I got my system to boot from my SSD, I executed two more steps to help the computer locate the Grub bootloader. First, I copied the bootloader itself to an alternative location: EFI/BOOT/BOOTX64.EFI is the default fallback path that UEFI firmware searches when no valid boot entry exists. Keep in mind that /boot/efi still refers to the mounted UEFI partition (/dev/sdb1 in my case).

# mkdir /boot/efi/EFI/BOOT
# cp /boot/efi/EFI/GRUB/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI

Second, I created a boot startup script:

# vi /boot/efi/startup.nsh

Of course, you can use another editor such as nano if you prefer. This script is executed automatically by the built-in EFI shell at startup and registers Grub as a boot option. It contains only one line:

bcfg boot add 1 fs0:\EFI\GRUB\grubx64.efi "Grub Bootloader"

After saving, unmounting, and rebooting the system, I was able to boot from the SSD.

Getting Linux rolling

A few weeks ago I upgraded my Ubuntu from 16.04 to 18.04.1. I wanted to do the upgrade earlier, but reports of compatibility issues kept me waiting for the first maintenance release. The upgrade went trouble-free as expected. I think Canonical did a great job on the installer and the Gnome customisations. However, as with previous Ubuntu upgrades, there were quite a few post-installation issues.

The problem I noticed first was that the update had somehow jumbled the hotkey mapping. OK, no problem to fix that manually. Next, I couldn’t connect to the MySQL server any longer because the update had replaced the config file. This was also not a big deal, because the update process saves all existing configs; I simply had to restore the relevant lines. A bit trickier was the PHP installation. It seems that the old 7.0 package was left intact, while the upgrade installed the 7.2 version only partially. I am no longer able to add modules to the 7.0 package, since the PPA repositories changed.

I also encountered a few UI problems with scrolling and text pasting. For a week, I could not scroll terminal output back until I found a fix for this problem. Copying and pasting text is sometimes very slow; this could have to do with the switch from Unity to the Gnome shell, but I haven’t been able to figure it out yet. All in all, a fresh installation would have been cleaner and less troublesome. However, I don’t want to go through that, as it would force me to reinstall all applications and reconfigure my Docker-based development setup, which would surely take more than a day.

With Ubuntu, or Debian, or in fact any other non-rolling Linux distro, major updates are released at least once a year. Even with LTS releases, you have to upgrade every two years, by which time most packages are quite outdated. In software development, we strive to shorten release cycles, ideally to a continuous deployment model, so keeping libraries and tools up to date becomes more important. At the same time, deployments and tool chains are increasing in complexity, which makes reinstalling environments from scratch ever more cumbersome.

For these reasons, I decided to migrate to a rolling Linux distribution. Ubuntu is great, and I really like the ease of installation and the fact that it is stable and well supported. But perhaps it’s time to try out something new. The obvious choice for me would be Arch Linux, so I installed Arch and a few of its derivatives in virtual machines to get a feel for it. I am going to pick one of them and maintain the VirtualBox image for a while before installing it on bare metal. As of now, I am not sure whether Arch is stable enough to serve as a primary environment for daily work.

The Arch Linux base distro is certainly special. Its installation process is entirely manual and the resulting image is quite Spartan. You have to bootstrap everything by hand and even for someone experienced it takes a few hours until everything, including a graphical desktop environment, is configured and running. The advantage is that the system is completely customisable and can be configured exactly to your needs. Kind of nice.
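To give an idea of the manual bootstrap, the core of an Arch installation boils down to a handful of commands run from the live ISO (a minimal sketch; partitioning, locale, and bootloader setup are omitted):

# pacstrap /mnt base linux linux-firmware
# genfstab -U /mnt >> /mnt/etc/fstab
# arch-chroot /mnt

pacstrap installs the base system into the mounted target, genfstab generates the fstab from the current mounts, and arch-chroot drops you into the new system for further configuration.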

I’ve also tried Manjaro and Antergos, both Arch-based distributions. They are significantly easier to install and also provide more convenience for configuration. For example, Manjaro has a nice graphical tool for managing kernel versions, and Antergos offers six or seven different desktop environments out of the box. Like Arch, they are based on the Pacman package manager. Although there are only about 11,000 packages in the Arch repository, the AUR (Arch User Repository) adds another 47,000 packages, which is on par with the biggest Linux distributions.
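Installing something from the AUR is typically a matter of cloning the build recipe and running makepkg; the package name below is just an example:

$ git clone https://aur.archlinux.org/yay.git
$ cd yay
$ makepkg -si

The -s flag resolves build dependencies via pacman and -i installs the resulting package.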

The Vim Experiment

Though I started my career with text-based editors like vi, Emacs, and Brief, I have been using IDEs for a very long time. It began with products with “Visual” in their name. Since then, I have moved on to Eclipse for Java programming, NetBeans for web/PHP development, WebStorm for JavaScript, and the list goes on. So far I had not looked back and never questioned the convenience and productivity that come with contemporary IDEs. Until last month.

Someone suggested giving vim a try. Say what? 1970s technology instead of a full-featured IDE? Well, first of all, vim must not be confused with vi. The latter is significantly older, whereas vim was originally developed in the 1990s and is still in active development. Anyone who has ever worked with Linux is probably familiar with vim. It can be found on almost any *nix computer and is often used for quick-and-dirty editing of configuration files. Perhaps it is not the most popular editor because, to the majority of people accustomed to non-modal editing, the modal interface of vim feels a bit foreign. In addition, vim has no point-and-click interface. It can only be used effectively by learning a great number of keyboard shortcuts and commands.
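A few examples of that command vocabulary, typed in normal mode:

dw      delete from the cursor to the start of the next word
ciw     change the word under the cursor
3dd     delete three lines
/foo    search forward for the string foo

Commands compose: an operator such as d (delete) combines with any motion such as w (word), and this composability is where much of the editing power comes from.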

[Screenshot: vim 8 on Ubuntu 16.04]

So why vim? To put it simply, its curse is also its greatest promise. If your hands do not have to move between keyboard and mouse all the time, you can accomplish things faster and with greater ease. Drew Neil, the author of “Practical Vim”, speaks of “editing at the speed of thought” and “high precision code editing”. There is also less potential for carpal tunnel syndrome with your hands resting on the keyboard. What is more, vim features a scripting language and a plugin system, which makes it highly configurable and extensible. So the question is: can vim hold a candle to modern IDEs or even beat them in terms of productivity?
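As a small taste of that configurability, a few lines from a ~/.vimrc (the settings here are merely illustrative):

" ~/.vimrc
set number              " show line numbers
set expandtab           " insert spaces instead of tabs
set shiftwidth=4        " indent by four spaces
syntax on               " enable syntax highlighting

Plugins, custom commands, and key mappings are configured in the same file using the built-in Vimscript language.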

I have decided to find out and prescribed myself a strict 3-month IDE-less diet using vim and nothing but vim for my daily editing work. Three months because, as mentioned, the learning curve is not exactly flat and it takes some time before all these keyboard sequences are committed to finger muscle memory. For me, there are two questions that I am looking to answer with this experiment. The first is whether vim can actually accomplish all the wonderful tasks that IDEs are good at and that make a programmer’s life easier, such as code completion, automatic formatting, diffing, syntax and code-style checking, debugging support and whatnot. So far, I am pleasantly surprised, though there are still a few rough edges.

The second question is whether typing speed and editing automation actually exceed the possibilities offered by an IDE and whether the promise of increased productivity does materialise. I am not sure about this one either, although my vim repertoire is slowly improving and I am starting to feel like I am no longer merely hacking my way through the various editor modes. At any rate, the vim editor is both ubiquitous and here to stay. So even if I decide to go back to using an IDE for coding, there is probably a benefit in mastering this tool a little better.

Zend Framework Review

Earlier this week, I gave the latest version of the Zend Framework, 1.9.2, another test drive. I had previously dabbled in 1.7.4 as well as a pre-1.0 incarnation of the framework. I will not repeat the whole breadth of its functionality here, since you can find that elsewhere on the Internet. Neither will I present a point-by-point analysis, just the salient points, short and sweet, which you can expect to be coloured by my personal view.

Suffice it to say that ZF (the Zend Framework) is based on MVC (you’d never have guessed) and provides functionality for database access, authentication and access control, form processing, validation, I/O filtering, web services access, and a bunch of other things you would expect from a web framework. The first thing to notice is that the framework has grown up, and I mean this quite literally: from a few megabytes in its early days to a whopping 109 MB (unzipped) distribution package. Only about 21 MB are used by the framework itself; the rest contains demos, tests, and the optional Dojo toolkit, an old acquaintance.

The documentation for ZF was excellent right from the beginning, and it has stayed that way. Included is a 1170-page PDF file, which also bears testimony to the growing size and complexity of the framework. Gone are the days when one could hack together a web application without reading a manual. One of the first things to realise is that ZF is a glue framework rather than a full-stack framework. This means it feels more like a library or a toolkit. ZF does not prescribe architecture and programming idioms like many other web frameworks do. This appears to fit the PHP culture well, though it must be mentioned that most ZF idioms come highly recommended, since they represent best OO practices.

Another thing that catches the eye is the lack of an ORM component, which may likewise be rooted in traditional PHP culture. If you want object mapping, you have to code around ZF’s DB abstraction and use Doctrine, Propel, or something similar. Let’s start with this item.

Database Persistence
ZF provides a number of classes for DB abstraction. Zend_Db_Table implements a table data gateway using reflection and DB metadata. You only need to define table names and primary keys. Zend_Db_Adapter, Zend_Db_Statement and Zend_Db_Select provide database abstraction and let you create DB-independent queries and SQL statements in an object-oriented manner. However, as you are dealing directly with the DB backend, all your data definitions go into the DB rather than into objects. Although this matches the traditional PHP approach, it means that you need to create schemas by hand, which may irritate people who have been using ORM layers, like Hibernate, for years. On the other hand, a full-blown ORM layer likely incurs a significant performance cost in PHP, so maybe the ZF approach is sane.
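To illustrate, here is a minimal sketch of a table data gateway and a DB-independent query; the connection parameters and table layout are made up for the example:

// Table data gateway: only table name and primary key are declared
class Users extends Zend_Db_Table_Abstract
{
    protected $_name    = 'users';
    protected $_primary = 'id';
}

// connection parameters are placeholders
$db = Zend_Db::factory('Pdo_Mysql', array(
    'host'     => 'localhost',
    'username' => 'webuser',
    'password' => 'secret',
    'dbname'   => 'app',
));

// DB-independent query assembled in an object-oriented manner
$select = $db->select()
             ->from('users', array('id', 'name'))
             ->where('created > ?', '2009-01-01');
$rows = $db->fetchAll($select);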

Fat Controller
Like many other frameworks, ZF puts a lot of application logic into the controller, and this is my main gripe with it. It seems to be the result of the idea that the “model” should concern itself only with shovelling data from the DB into the application and vice versa. A case in point is the coupling between Zend_Form and validation, which leaves you no option but to put both into the controller. I think that data validation logically belongs in the model, while form generation logically belongs in the view. If you pull this into the middle, it will not only bloat the controller but is also likely to lead to repetition of validation logic in the long run. That’s why I love slim controllers. Ideally, a controller should do nothing but filtering, URL rewriting, dispatching, and error processing.
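The coupling in question looks roughly like this in practice; a sketch of a form with an attached validator, checked inside a controller action as is customary in ZF:

$form  = new Zend_Form();
$email = new Zend_Form_Element_Text('email');
$email->addValidator(new Zend_Validate_EmailAddress())
      ->setRequired(true);
$form->addElement($email);

// inside a controller action:
if ($form->isValid($this->getRequest()->getPost())) {
    // hand the validated values to the model
}

Both the form definition and the validation rules end up in controller territory.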

MVC Implementation
Having mentioned coupling, it would do ZF an injustice to say that things are tightly coupled. Actually, the opposite is the case, as even the MVC implementation is loosely coupled. At the heart you find the Zend_Controller_Front class, which is set up to intercept all requests for dynamic content via URL rewriting. The rewriting mechanism also allows user-friendly and SEO-friendly URLs. The front controller dispatches to custom action controllers implemented via Zend_Controller_Action; if non-standard dispatching is required, this can be achieved by implementing a custom router interface with special URL inference rules. The Zend_Controller_Action is aptly named, because that’s where the action is, i.e. where the application accesses the model and does its magic. The controller structure provides hooks and interfaces for the realisation of a plugin architecture.
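A minimal bootstrap and action controller, sketched under the assumption that mod_rewrite funnels all requests to index.php:

// index.php
require_once 'Zend/Controller/Front.php';
$front = Zend_Controller_Front::getInstance();
$front->setControllerDirectory('../application/controllers');
$front->dispatch();

// application/controllers/IndexController.php
class IndexController extends Zend_Controller_Action
{
    public function indexAction()
    {
        $this->view->greeting = 'Hello from the action controller';
    }
}

A request to /index/index is routed to IndexController::indexAction() by the default router.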

Views
Views are *.phtml files that contain HTML interspersed with plenty of display code in the traditional <? ?> tags. It should be possible to edit *.phtml files with a standard HTML editor. The Zend_View class is a thin object from which view files pull display data. View fragments are stitched together with the traditional PHP require() or with layouts. It is also possible to use a 3rd-party templating system. Given the <? ?> tags, there is little to prevent application logic from creeping into the view, except reminding developers that this is an abominable practice punishable by public ridicule.
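A typical view script might look like this; the variables are assigned in the controller via $this->view, and the file path follows ZF’s default conventions:

<!-- application/views/scripts/index/index.phtml -->
<h1><?php echo $this->escape($this->greeting); ?></h1>
<ul>
<?php foreach ($this->users as $user): ?>
    <li><?php echo $this->escape($user->name); ?></li>
<?php endforeach; ?>
</ul>

Zend_View’s escape() method HTML-escapes the output, which at least keeps the worst injection mistakes out of the view.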

Layouts
Layouts are a view abstraction. They enable you to arrange the logical structure of page layouts into neat and clean XML. These layouts are then transformed into suitable output (meaning HTML in most cases). As you can probably infer, this takes a second parsing step inside the PHP application, which is somewhat unfortunate, since PHP itself already parses the view components. While layouts are optional, they are definitely nice to have. I think it’s probably the best a framework can do, given the language limitations of PHP, which only understands the <?php ?> tags. If the XML capabilities of PHP itself were extended to process namespaced tags like <php:something>, one could easily create custom tags, and the need for performance-eating two-step processing would probably evaporate. Ah, wouldn’t it be nice?

Ajax Support
ZF does not include its own JavaScript toolkit or set of widgets, but it comes bundled with Dojo and offers JSON support. The Zend_Json class provides super-simple serialisation of PHP objects to JSON and back. It can also translate XML to JSON. The Zend_Dojo class provides an interface to the Dojo toolkit and makes Dojo’s widgets (called dijits) play nicely with Zend_Form. Of course, you are free to use any other Ajax toolkit instead of Dojo, such as YUI, jQuery, or Prototype.
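The JSON round trip, for instance, is a one-liner in each direction (the payload here is invented):

$json = Zend_Json::encode(array('user' => 'alice', 'roles' => array('admin')));
$data = Zend_Json::decode($json);

For XML input there is Zend_Json::fromXml(), which converts an XML string into its JSON representation.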

Flexibility
As mentioned, ZF is very flexible. It is loosely coupled at the design level, which is both a blessing and a curse. It’s a blessing because it puts few restrictions on application architecture, and it’s a curse because it creates gaps for code to fall through. A case in point is dependency injection à la Spring: in short, there isn’t much in the way of dependency management, apart from general OO practices of course. Nothing keeps programmers from having dependencies floating around in global space or in the registry. A slightly more rigid approach that enforces inversion of control when wiring together the Zend components would probably not have hurt.
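The registry pattern in question is convenient but global; a sketch of how dependencies typically float around:

// somewhere during bootstrap:
Zend_Registry::set('db', $db);

// ...and in any arbitrary class, far away:
$db = Zend_Registry::get('db');

Every consumer of 'db' is now silently coupled to the bootstrap code, which is exactly the kind of gap a stricter IoC mechanism would close.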

Overall Impression
My overall impression of the ZF is very good. It is a comprehensive and well-designed framework for PHP web applications. What I like best about it is that it offers a 100% object-oriented API that looks very clean and makes extensive use of best OO practices, such as the open/closed principle, programming to interfaces, composition over inheritance, and standard design patterns. The API is easy to read and understand. The internals of its implementation likewise make a good impression. The code looks clean and well structured, which is quite a nice change from PHP legacy code. ZF still involves a non-trivial learning curve because of its size. I’ve only had time to look into the key aspects and didn’t get around to trying out more specialised features like Zend_Captcha, Zend_Gdata, Zend_Pdf, Zend_Soap, web services, and all the other features that ZF offers to web developers. If I had to choose a framework for a new web application, ZF would definitely be among the top contenders.

Galileo Troubles

Another year has passed in the Eclipse universe, and this means another minor release number and another Jupiter moon: Eclipse has moved from 3.4 to 3.5, or from Ganymede to Galileo. Using a small gap in my busy development schedule, I decided to install the latest version this morning. Thanks to broadband Internet, the 180 MB JEE package downloaded in no time and installed in a few minutes. Unfortunately, that’s where things stopped being easy.

When I downloaded the PDT plugin for PHP development, I found a bug in it that prevented Eclipse from creating a PHP project from existing sources. After some research on the Internet, I found that this was a well-documented bug which had since been fixed. I tried installing the latest PDT release via the Eclipse install and update feature, but the process came to a crashing halt with a message demanding some Mylyn jars that could not be found. Although I had no idea why PDT required that particular jar, I dutifully installed the Mylyn plugins with the required version number.

Unfortunately, this did not impress Galileo, as it now demanded other jars when installing the PDT update. Perhaps a case of workspace pollution, I thought. Clearly, it was time for a fresh start. I scrapped the installation and started anew with a blank workspace and a new install location. This time, everything seemed to install fine, and I was able to create Java and PHP projects. However, Galileo suddenly wouldn’t open *.xml, *.xsl, or *.html files any more. It complained that there was no editor for this content type, which appeared fishy, since both the web tools (WTP) and PDT were installed. I tried to solve the problem by playing around with the configuration, but to no avail.

After several fresh attempts and considerable time spent looking up error messages on the Internet, I decided to stay with Ganymede. Since I had wasted my entire morning and had some real work to do as well, this seemed to be the best course of action. Maybe I will give Galileo another go when an updated distro package becomes available. With Ganymede I never ran into this sort of trouble, despite having PDT, WTP, the Scala plugin, and JBoss Tools installed. I am still clueless as to what went wrong, and I wonder if anybody else has had a similar experience.