Ini files and Apache Commons Configuration

Trashing old software

The project I’m currently working on uses a simplistic object store for persistence.  The original authors, in their collective wisdom, decided that whenever something needed to be saved they would save it to a hashtable, and use Java serialization to save that to a file.

In a way I can see why they did: it’s a quick way of getting a simple-to-use object store. The project has been through several revisions since, but the data store has stayed the same. It should have been replaced with something more robust a long time ago. I’ll explain more after the jump …
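To make the approach concrete, here is a minimal sketch of that kind of store. The `SimpleObjectStore` class name and the string key/value types are illustrative assumptions of mine, not the project’s actual code; the serialization mechanism itself is the standard `ObjectOutputStream`/`ObjectInputStream` pair.

```java
import java.io.*;
import java.util.Hashtable;

public class SimpleObjectStore {

    // Save the whole table in one shot -- the approach described above.
    static void save(Hashtable<String, String> table, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(table);
        }
    }

    // Read the entire table back from disk.
    @SuppressWarnings("unchecked")
    static Hashtable<String, String> load(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Hashtable<String, String>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File store = new File("store.ser");
        Hashtable<String, String> table = new Hashtable<>();
        table.put("user.name", "alice");
        save(table, store);
        System.out.println(load(store).get("user.name")); // prints "alice"
    }
}
```

You can see the appeal: two short methods and persistence is “done”. The fragility comes later, when any change to the stored classes breaks deserialization of every existing file.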

Continue reading

Risky Business

Project Management Porn

Anecdotally, I had always known that software engineers are terrible at estimation.  I had never realised exactly how bad we are.

One rule of thumb that seems to pop up now and again is to take your engineers’ best estimates and double them.  Then you’re actually in the ball park.
Continue reading

The Life and Times of E-mail.



BBC News reports that a recently published Microsoft study found that more than 97% of email sent is spam.  Seeing this, I can’t help but feel that email is becoming a victim of its own success.  With a signal-to-noise ratio like this, one has to wonder how long it will be before email is ignored as a mainstream communications channel.

Continue reading

Masters Thesis

In this paper we discuss approaches to Incident and Problem management within the context of IT Service Management, and its de facto standard, the Information Technology Infrastructure Library (ITIL). We show how the Problem Management process attempts to diagnose problem root causes by applying various analysis techniques to historical incident data.

We propose a new categorisation mechanism. We break the data free from its hierarchical categorisation scheme through the use of a free-form tagging system. By allowing all system users to categorise incidents using their own terms, we show that, while individuals may differ, the aggregate metadata produced for each incident stabilises.

Further, by applying PageRank analysis to the relationships between tags and incidents, we hope to show useful and interesting correlations. While these may or may not be indicative of a causal relationship, they are, nonetheless, new facts about the system under scrutiny.
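The PageRank computation over the tag/incident graph can be sketched as follows. The graph, the node numbering, and the `TagRank` class name are illustrative assumptions of mine, not the thesis implementation; the algorithm itself is the standard damped iterative PageRank.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TagRank {

    // Iterative PageRank with damping. links.get(i) holds the nodes node i links to.
    static double[] pageRank(List<List<Integer>> links, int iterations, double damping) {
        int n = links.size();
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n);
            for (int i = 0; i < n; i++) {
                List<Integer> outs = links.get(i);
                if (outs.isEmpty()) {
                    // Dangling node: spread its rank evenly over all nodes.
                    for (int j = 0; j < n; j++) next[j] += damping * rank[i] / n;
                } else {
                    for (int j : outs) next[j] += damping * rank[i] / outs.size();
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Nodes 0-1 are tags ("network", "dns"); nodes 2-4 are incidents A, B, C.
        // Each tag links to the incidents it labels, and each incident links back
        // to its tags, forming a bipartite tag/incident graph.
        List<List<Integer>> links = new ArrayList<>();
        links.add(Arrays.asList(2, 3, 4)); // "network" labels A, B, C
        links.add(Arrays.asList(4));       // "dns" labels C only
        links.add(Arrays.asList(0));       // A -> "network"
        links.add(Arrays.asList(0));       // B -> "network"
        links.add(Arrays.asList(0, 1));    // C -> "network", "dns"

        double[] rank = pageRank(links, 50, 0.85);
        System.out.println(Arrays.toString(rank));
        // Incident C, labelled by both tags, ends up ranked above A or B.
    }
}
```

In this toy graph the incident carrying more tag relationships accumulates more rank, which is the kind of “interesting correlation” the analysis is meant to surface.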

We conclude by showing that the approach has some merit, assuming a certain set of minimum system requirements. If these requirements can be met, then this approach can become another tool in the system administrators’ arsenal of system analysis techniques.

Download Thesis PDF

Design Patterns

Introduction to Design Patterns

Within each engineering discipline, engineers have found that, when analysing problems, similar types of sub-problems tend to recur from time to time.  Drawing on experience from previous project work, they knew that similar solutions to those problems could be reused.  This became the basis for design patterns.
Continue reading

Unified Process vs Agile Processes

Introduction to the Unified Process

The traditional view of system implementation is of a series of steps toward implementation, covering areas such as analysis, design, construction, documentation, handover, and so on.  Hay (1997) gives a good account of the traditional approach, stating:

Continue reading

A Brief History of XML

Evolution of XML

The history of the Extensible Markup Language (XML) begins with the development of the Standard Generalized Markup Language (SGML) by Charles Goldfarb, along with Ed Mosher and Ray Lorie, in the 1970s while working at IBM (Anderson, 2004). SGML, despite its name, is not a mark-up language in its own right, but a language used to specify mark-up languages. The purpose of SGML was to create vocabularies which could be used to mark up documents with structural tags. It was imagined at the time that certain machine-readable documents should remain machine readable for perhaps decades.

Continue reading

Introduction to the Zachman Framework

Traditional Architectural Approaches.

The traditional view of system implementation is of a series of steps toward implementation, covering areas such as analysis, design, construction, documentation, handover, and so on. Hay (1997) gives a good account of the traditional approach, stating:

Many methodologies are organized around the “system development life cycle”, an organization of the steps required to develop systems. This is expressed as consisting of:

  • Strategy – The planning of an organization’s overall systems development effort.
  • Analysis – The detailed definition of requirements for a particular area of the business.
  • Design – The specific application of technology to the requirements defined during analysis.
  • Construction – The actual construction of the system.
  • Documentation – Preparation of the user manuals, reference manuals, etc. to describe the system.
  • Transition – The implementation of the system, so as to make it part of the infrastructure of the organization.
  • Production – The ongoing monitoring of the system to ensure that it continues to meet the needs of the organization.

Notice that each of these steps addresses issues of data and function. Data and functions are typically addressed as separate topics, although ideally, they are addressed cooperatively.

Most methodologies portray the system development life cycle in terms approximating these. Some go so far as to give it the acronym “SDLC.”

Continue reading

Implementing Business Processes

Introduction to Business Process Engineering

Business Process Reengineering (BPR) began life in the early 1990s. Barothy, Peterhans, & Bauknecht (1995) give a good introduction to the state of Business Process Reengineering at the time, stating:

Over the last few years we have observed the emergence of a new field or phenomenon in MIS practice and research: Business Process Reengineering (BPR). Since the publication of Michael Hammer’s article “Reengineering Works: Don’t Automate, Obliterate” in 1990, reengineering became a new and hot “buzzword” in management. Despite a lack of clear understanding organisations of today cling to it as the ultimate panacea in order to realise major improvements in productivity, quality, time and profitability. They also see reengineering as a way to adapt their business to faster changing environment and as a new paradigm in the deployment of information technology (IT). In order to sell services, consulting companies never tire of glorifying success stories like the reengineering of Ford’s accounts payable, or Mutual Benefit Life’s insurance applications process both resulting in “order of magnitude” improvements. But companies also learn the hard way that the radical redesign of business processes, by fundamentally rethinking the way business is done, bears major risks, is a highly complex change task, and may easily end in failure.

Continue reading

Adapting ITIL to Distributed Web Applications

Introduction to ITIL

The Information Technology Infrastructure Library (ITIL) is a broad framework of best practices which enterprises use to manage their IT operations. The first version, published by the UK government’s Central Computer and Telecommunications Agency (later absorbed into the Office of Government Commerce) in the late 1980s, quickly grew to over 30 volumes, so when ITIL version 2 was released around 2000 a concerted effort was made to consolidate the processes described into logical sets. ITIL v3 continues in this vein by consolidating the library into five core titles:

  • Service Strategy
  • Service Design
  • Service Transition
  • Service Operation
  • Continual Service Improvement

Continue reading