Wednesday, December 05, 2007

Learning by imitation

Children seem to learn by imitation, according to an article by Derek Lyons et al. in PNAS. I would go further and say "So do I", a lot.

He argues that when imitating, children can't tell which actions are necessary and which are irrelevant to achieving the goal, which easily results in something he calls overimitation. It seems like there is no skeptical filter screening the actions being imitated, even though children would otherwise be able to identify the irrelevant steps as silly.

I guess this occurs a lot in IT (among other disciplines) as well. Part of what is done has no real relevance to the problem at hand. Procedures just end up that way as accidents get replicated and behavior gets accidentally frozen, simply because someone happened to do it that way originally. Somebody may eventually realize that some parts of a procedure bring no value and improve it, but that seems to take ages.

Tuesday, November 20, 2007

Working from home

According to an article in the Journal of Applied Psychology, it is beneficial for both individuals and companies to arrange for telecommuting.

As an occasional telecommuter I agree that being able to work from home once in a while has positive effects on my personal well-being, and probably also for my employer.

The authors conclude with the statement that:
there is a downside of higher intensity telecommuting in that it does seem to send coworker (but not supervisor) relationships in a harmful direction. Some of the complexities of these consequences have yet to be explored, but the evidence and theory reviewed here suggest that they can be managed effectively through informed human resources policies.


Certain software development methodologies (like XP) rely on close interaction with both colleagues and customers. If you are supposed to do pair programming it will not work if the other person is working from home. Perhaps in the future someone will figure out a tool with which pair programming will be possible over distance, who knows.

A practical approach would be to ensure that people still meet often enough for the necessary exchange and feedback to occur, and to adopt, for example, some instant messaging tools, although a tool cannot replace the real thing.

The study does not directly compare what type of work you do, what kind of feedback mechanisms exist in the workplace, or what tools you have access to. I would however guess that as long as the feedback keeps coming, it is possible to be as efficient from home as anyone in the office.

Monday, October 22, 2007

Useless policies

Some laws, regulations or policies can be very well motivated or self-evident, but others are more difficult to motivate. A policy may, for example, be a legacy from an earlier era or from an issue that has ceased to exist. Others are just based on wrong assumptions from the start, perhaps invented to locally optimize some aspect of the organization at the cost of the whole. Bad policies can also be the result of corruption or ignorance.

But what happens to policies that are no longer current in your organization? Typically it seems like they at some point stop being enforced. A policy that is not being enforced is kind of useless. I would furthermore say that it is even more damaging to society or the organization at large, since it can undermine the trust between individuals in the organization. "If one person can break that policy, why can't I break some other policy?" seems to be the reasoning...

The efficient way of dealing with bad policies would be to just admit that the policy and the policy maker were wrong. In society, bad laws are eventually replaced by newer laws, but within an organization it may not be possible to carry that kind of overhead. The motivation for a policy should instead be made clear, so that wrong assumptions can be spotted and corruption ruled out as the primary reason for the policy. I wonder if this could actually happen in IT...

Friday, September 28, 2007

Agility and Scrum

I was recently listening to some presentations from a conference hosted by the Belgian Java User Group BeJUG on http://www.parleys.com/. There are at least four excellent presentations there that I hope people will listen to.

Kevlin Henney has two presentations (part 1 and part 2) on different aspects of being agile, dealing with the necessity of getting feedback and adapting to change when undertaking IT development.

Giovanni Asproni also has two presentations (part 1 and part 2) explaining how Scrum works and can be adopted, and under what conditions Scrum will not work.

Tuesday, August 28, 2007

Risks and management

Recently I read John Scarpino's column How poor management skills jeopardize software quality, dealing with willingness to take risks, fear and lack of communication within IT, and a fascinating article in Evolutionary Psychology titled Towards the development of an evolutionarily valid domain-specific risk-taking scale by Daniel J. Kreuger et al.

In business today, taking risks seems very attractive to management. Taking a certain amount and type of risk can bring a competitive edge to the manager. The manager willing to take the risk can, if he succeeds, deliver a better result than his colleagues, giving him a better opportunity to become a more prominent manager in the future. Somehow it seems like business today is not too different from what evolution wired into us back when humans were hunters and gatherers. Fear is in a sense a very normal feeling; I suppose our ancestors were afraid of the animal they were hunting and of starvation at the same time. The only difference is that today we seem to be afraid of managers and of failing to deliver a good result.

I'm sure that at least in some domains there is value in a very rigid Quality Assurance process whereby risk is minimized, but I don't subscribe to the idea that QA should be allowed to interfere with business risk taking or innovation. What I find more problematic is, first of all, the tendency of managers today not to be present and take responsibility for the risks taken when things go bad, or not to share the benefits when the risks pay off. Evolution should however have given us the ability to deal with cheaters, since this is not a new kind of behavior either... The second problem, as I see it, is that managers either don't know or don't want to know what kind of risk they are taking. This can be related to a basic lack of communication or to the manager simply not being present. Another way of viewing the problem is that employees, out of fear or for other reasons, hide unpleasant issues from their managers.

Managers ruling by fear and authority do however have one significant disadvantage: they are not likely to get anywhere near a full set of information to base their risk taking on. I would even go further and say that strict, formal, process-oriented approaches will always come second to managers participating in a dialog as equals, being present, knowledgeable and fair in their actions. Knowingly taking small, affordable risks can give great rewards, and if things go bad, just use plan B.

Wednesday, July 25, 2007

The challenge of introducing ideas into organizations

Bringing in consultants is often seen as a way of introducing new ideas into an organization. According to a new study done at Warwick Business School, this however seems to be misleading.

It seems that managers bringing in new people hire someone like themselves. Any new idea or person introduced will not be accepted if the new person is too unlike the manager and the others in the organization.

The problem is that if the organization gets too similar there will be no diversity and very little difference of opinion, and the changes that are needed may not find a place to evolve. If everybody is of the same opinion there is a great risk that good ideas and opportunities will be lost. Without new ideas there is likely to be stagnation while the competition evolves.

Inside diverse organizations, on the other hand, there already exists a multitude of novel ideas that can be used. What consultants specifically can do in a diverse environment is bring those ideas to attention. In a non-diverse environment there is not much possibility of introducing anything, and the only thing left to do is execute whatever has been planned.

Monday, June 18, 2007

Half of all IT projects late according to HP study

In today's Finnish-language business magazine Kauppalehti there is an article citing HP research, stating that one out of two IT projects is late. One out of two projects is also stated as being over budget. The main reasons given are that coordination between IT and company management is lacking, outsourcing is failing, and to a lesser extent changing or unclear requirements and a lack of resources. The research was compiled based on questions sent out to a large number of IT executives. However, the answers obtained seem to be over-optimistic, since a project whose timetable has been changed along the way is not reported as being late. A better estimate would, according to the article, be around 90 % late or over budget.

What is interesting, however, is that projects being late has a direct impact on the profitability of the companies. I would say that this sounds like the ideal market condition for consultancy and outsourcing vendors: whatever goes. I wonder when management is going to start taking an interest in where the IT money is spent and why the promised returns on investment are not showing.

Furthermore, management seems unable to do anything about the situation. Decreasing scope and increasing the number of people on the project are the most common ways of reacting to a problem in a project. Only one out of five confess to sacrificing quality to keep the timetable; the rest probably never intended to test anyway or are too embarrassed to admit to it. Adding additional coordination will probably only add to the overall cost without any significant improvement in project delivery.

Tuesday, May 29, 2007

Learning from uncertainty

It seems easy to know what caused a failure afterwards. At least it is easy to come up with theories of what caused any given failure. The problem is that there is often no way of verifying the causes. Predicting into the future just seems impossible.

In medicine we have clinical trials where drugs are given under very controlled conditions, some patients even getting a placebo as a replacement for the real drug. In IT this would be difficult to arrange.

IT is a field with a lot of uncertainty involved. Some things will work sometimes, but not always. A successful setup in one project may lead to a big failure in the next. A setup failing in one project may work perfectly in the next. There is no way you can know beforehand whether you will have positive or negative surprises, or whether they will impact your endeavor at all.

The only foolproof way to safeguard against unpredictability seems to be to progress in small steps, failing early and cheaply. If we are allowed to try enough times, we will eventually succeed.

There is an excellent Tech Nation interview with Nassim Nicholas Taleb dealing with unpredictability and impact on business available at ITConversations, go check it out.

Monday, April 30, 2007

Learning efficiently from failures

There do not seem to be very many lessons to be learned from successful projects, according to an article in IEEE Spectrum. From failures, on the other hand, you can always learn a lot.

Being efficient, you should at least try to learn from your own mistakes. On an organizational level it would make sense if everybody were able to learn from all the mistakes made in the whole organisation. On a global level it would make sense to try to learn from all mistakes ever made. Access to information is however limited, and there is probably not enough time to get familiar with all the failures in your domain.

If you cover up your mistakes, chances are that somebody will have to experience the same problems again. Opening up your experiences for others to learn from can help others in your organisation become more successful. Getting comments and feedback from others will furthermore make you see more clearly what the problem was, and perhaps even come up with better solutions to avoid similar problems in the future.

However, nobody likes to admit failure. I think organisations must become more able to accept and tolerate small failures, and start to encourage people to learn from failure instead of covering up. If failures are detected and admitted at an early stage there is time to prevent the big failure that probably lies ahead. Big failures are difficult or impossible to cover up anyway.

Friday, March 30, 2007

Efficient frameworks

Using complex frameworks is often motivated by increased efficiency. Some things are however inevitably complex.

Real-time thread synchronization across processes and machines, for example, is hard. You have to avoid deadlocks, make sure that your processes yield execution to threads in a predictable manner, and guarantee that all tasks execute within a given time. Getting all this right is not easy, not even for experienced developers.
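
To illustrate just one of these pitfalls, here is a minimal Java sketch (my own example, not tied to any particular framework) that avoids a deadlock between two threads by always acquiring locks in the same global order; the Account class and the values are made up purely for illustration.

// Minimal sketch: avoiding deadlock between two threads by always
// acquiring locks in the same global order.
public class TransferExample {

    static class Account {
        final int id;      // used to define a global lock order
        long balance;

        Account(int id, long balance) {
            this.id = id;
            this.balance = balance;
        }
    }

    // By locking the account with the lower id first, two concurrent
    // transfers in opposite directions can no longer end up waiting
    // for each other's lock forever.
    static void transfer(Account from, Account to, long amount) {
        Account first = from.id < to.id ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100);
        Account b = new Account(2, 100);
        Thread t1 = new Thread(() -> transfer(a, b, 10));
        Thread t2 = new Thread(() -> transfer(b, a, 20));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(a.balance + " " + b.balance); // prints 110 90
    }
}

And this only covers deadlocks between two threads in one process; fairness, predictable scheduling and timing guarantees across machines are harder still.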

Frameworks are motivated by the claim that they make hard things simpler. A common problem, however, is that they only enable you to shoot yourself in the foot more efficiently, which is most often not what you intended to start with. In order to use a framework efficiently you need a thorough understanding of the underlying problem domain, and you must also be able to solve the problem in terms compatible with the tools and possibilities the framework gives you.

Most frameworks are not developed with your problem in mind, and general-purpose frameworks are most often too generic to be useful. If your needs exceed what the framework of choice was intended for, you end up adding more complexity than you would need without the framework.

Complex things can be made to work, no doubt. For one-time fixes, where something just has to work in a quick and dirty fashion, the additional complexity does not matter very much. It is just a one-time penalty to pay. The trouble starts when modifications are needed, since somebody must either remember or figure out how it works, probably from the source code, which may or may not conform to any pattern or convention.

People in general tend to fall in love with their framework of choice. If the framework suits your every need, it is a benefit if all developers on the project like working with it. However, if the framework does not fit the problem, you are going to have a tough time ahead.

Monday, February 12, 2007

Crippled by design

Designing optimal solutions is difficult. Settling for a working solution is everyday life, but most IT solutions seem to get seriously crippled in one or more aspects at some point in time. The system can be difficult to use, development costs can skyrocket, it can be impossible to maintain, or it can have limited scalability. But even crippled software can be useful.

There is probably no single explanation for why this crippling happens. I'm sure that many causes combine. Many of the problems giving the system a crippled appearance can be fixed, given time and resources, but there always seem to be new issues popping up, like layers of an onion. Sometimes a crippled system delivered today is better than the most optimal system delivered too late.

Generally it is said that it is cheaper to fix a problem that would cripple a system early, in the design phase. The problem with this is that early on you don't necessarily know which aspects will turn out crippled, no matter how thoroughly you look at your design. There is simply no way to know every aspect of how the system will behave before it is actually deployed into production. There is some value to the design-up-front approach, but at too detailed a level it is doomed to result in analysis paralysis.

When designing software and IT systems there are more degrees of freedom than can be perceived. You can implement a system in many, very many different ways. Most implementations will not differ very much from each other when done, and most of the differences will have no influence on the end result, but any one of these seemingly equivalent implementations can be given a most important role by decisions made much later on. Reversing only the latest decision can sometimes bring nothing but negative side effects. Going back and undoing faulty decisions made long ago is not always the most pleasant thing in a normal project setting, but in agile projects refactoring makes exactly this possible, which in my opinion is one aspect that makes agile approaches better, provided the faulty decision is found.

Clear-cut failures are just as rare as optimal solutions. Mostly the reason for failure is that projects run out of money. With an agile approach there is a natural way of closing down unsuccessful development work. Perhaps we should also be ready to scrap and abort traditional software projects that are showing signs of crippledness. I think that even the most severely crippled IT system has some value in that it can teach us a lesson, perhaps enabling us to do better next time. The most important thing to remember is that it is your actions as a designer, both the ones you take and the ones you omit, that are going to make or break the system. There is no escaping it: you are responsible for your design decisions.

Tuesday, January 23, 2007

Measurement and feedback

The preoccupation with measuring every aspect of IT efficiency seems to be ever increasing. It seems like there is no limit to the piles of rubbish management and their followers are trying to collect in order to find the one measure that will tell them how their organisation or department is doing.

The problem is that there are lots of things you can measure, and some may even tell you something about the actual performance, but most measurement seems to be a waste of time. Some measures are too abstract to be of any use, like "rate of success". Others are just too detailed to be useful, like lines of code written per day.

In my opinion you can spend your time better by arranging for your teams to have feedback loops instead of figuring out new and better measures. The shorter the feedback loops the better, since if things change quickly you need to react quickly. With sufficient feedback you can be fairly sure that you are doing the right thing, or else you will know about it quickly. There are numerous examples from nature where feedback loops regulate and interact in order to steer processes and their outcomes, ranging from metabolism to neural activity.

One of the keys here is that the sensors in nature are integral parts of the process: for example, a protein catalysing the conversion of one substance to another is inhibited by the end product and stimulated by the raw material. Centralized control seems to be kept to a minimum, and is only exercised as a means of giving overall direction, so why not mimic the same in IT?
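
As a toy illustration of such a local feedback loop, here is a small Java sketch of my own (not from any cited source) in which the production rate drops as the end product accumulates; all names and constants are invented, but the point is that the level settles around an equilibrium without any central controller.

// Sketch of end-product inhibition as a local feedback loop: the more
// product there is, the slower it is produced, so the level stabilizes
// on its own.
public class FeedbackLoopDemo {

    public static void main(String[] args) {
        double product = 0.0;      // current amount of end product
        double maxRate = 10.0;     // production rate with no inhibition
        double inhibition = 0.05;  // how strongly the product inhibits production
        double decay = 0.2;        // fraction of product consumed each step

        for (int step = 0; step < 50; step++) {
            // production slows down as the end product accumulates
            double rate = maxRate / (1.0 + inhibition * product);
            product += rate;               // produce
            product -= decay * product;    // downstream consumption
            System.out.printf("step %2d: product = %.2f%n", step, product);
        }
        // the level settles around an equilibrium (about 20 with these
        // constants) instead of growing without bound
    }
}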

Empowering everyone to make sane decisions based on feedback, or even listening to feedback at all, must feel threatening to management accustomed to safely measuring and then interpreting the results themselves. However, if what counts is the end result and overall efficiency, the score seems clear.