Thursday, November 26, 2009

Will Google Wave Re-Define the Blog Paradigm?

As we all know, blogs are a social networking tool to disseminate information and capture feedback. The tools currently in use almost provide the mechanism for discussion but don't quite get there. Google Wave holds the promise of bringing blogs to the conversational promised land - at least in the world of typed messages.

Currently, the process works like this: Someone writes a blog article that is typically several paragraphs long with several points linked together by logical reasoning. After visitors read the article, they input a comment at the bottom. The comments usually consist of general commentary on the article subject matter, specific commentary on one or more points within the article, or extending the article through logical relationships to subject matter not discussed in the article.

Commentary from a blog's readership currently consists of narrative pointers into the article, e.g. "I disagree with you when you state...". These narrative pointers become increasingly cumbersome with high volumes of visitors and comments. Discussion is lost because it quickly becomes impossible to follow the massive array of narrative pointers contained in the set of disparate comments.

Google Wave blows up this paradigm by allowing inline commentary within the article. Comments are no longer aggregated by visitor, containing multiple points at multiple locations within the article. Comments become decentralized and are input directly into the article at the location where the specific points are made. This allows follow-on visitors to add to the discussion on just those points in which they have an interest, absolved of the prerequisite filtering of content and comments that currently exists. The playback feature is also a pretty cool way to read the article without comments and then follow visitor feedback sequentially as it occurred.

I posted much of the text from this blog to a wave that I made public. Virtually immediately after posting it, I started getting inline feedback from a Wave user. If you have a Google Wave account you can view it by searching for with:public google notarangelo. There's some interesting commentary.

If you want to contact me using this new and interesting medium - and what I think is the future of blogging - then get a Google Wave account (I have some invites available if you need one) and ping me at jack.notarangelo@googlewave.com.

You can read more about Google Wave at http://wave.google.com/.

Tuesday, November 10, 2009

Economic Darwinism

Over the summer, I wrote an article in this space called It's Winter in July. Its premise was that in these tough economic times, everything you've done to this point in your career has culminated in your current market value. This article extends that subject in the form of what I call Economic Darwinism. Surprisingly, my Google searches on the term came up with hits, but none used it in the way that I think of it.

Economic Darwinism in my mind is related to survival of the fittest. We have been in this downturn for over a year and it's been difficult. Lots of layoffs, attrition, hard looks at the way we do business, and in many cases lots of change.

For the past year or so you have been in survival mode, which is a good and healthy process. If you have survived being in survival mode, then it likely means that you are built for what you are doing.

When you look around at your respective team members you likely see a strong set of individuals that as a group can deliver whatever needs to be delivered. In a culture of meritocracy, which for the most part describes software engineering, we have been transformed into lean teams with a kick-ass set of players that gets shit done. Yes, it was painful getting to this point, but the result is good.

If that accurately describes your situation, then from a leadership perspective the view should be "The recovery starts with us. The inevitable new phase of growth will be through this team. And through that inevitable growth there are opportunities for everyone."

Morale, whether poor or euphoric, is a state of mind. The thoughts bouncing around your cranium, regardless of whether they are verbalized or not, influence those whom you lead. It's your belief system that will - and does - significantly influence whether your team feels confident or insecure. Look around, see the strength of your team, and know that the recovery starts with you. If you believe that, confidence/morale will increase. If you choose otherwise then you should expect the malaise to continue.

Let there be no doubt that whichever side you land on is all in your head. Your harvest is in part dependent on the seeds sown now, in the Spring of this recovery.

Monday, October 26, 2009

Cloud Computing: Hype vs. Reality

The term cloud computing is a recent branding effort for an umbrella set of hosted service offerings. Of course hosted offerings have been around for decades - SalesForce.com has been providing an Internet-based (i.e. cloud computing) CRM solution since 2000. There are also a plethora of other vendors with offerings related to Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Software as a Service (SaaS).

Cloud configurations fall into three categories – Public, Hybrid, and Private.

Public: Public cloud offerings are those that are typically subscription-based where all of the hardware and software purchases and maintenance are abstracted away from the customer.

Hybrid: Hybrid cloud offerings are provided with a combination of abstracted hardware and software with private infrastructure configurations. It is presumed that this will be the most likely configuration as cloud services become more mature and mainstream.

Private: A private cloud is an Internet-based offering where the entire infrastructure is managed by customer personnel and typically has security applied to limit usage to authorized personnel.

There are some who will refute my definitions above as too limited because they consider on-site virtualized environments as a valid cloud configuration. In my view, if the service isn't Internet-based then it's not in the cloud. Time will be the arbiter of this amorphous aspect of cloud computing.

So why has something that has been around for so long now become worthy of re-branding? The major drivers are the ubiquity of the Internet, the reliability of Internet connections, the maturity of web-based interfacing technologies, and the rapidly expanding blur between desktop and web.

The remaining obstacle for most organizations with moving to the cloud model is security. Moving sensitive data and intellectual property to a cloud provider is a risk to be considered carefully. An internal infrastructure can be completely isolated from the outside world, where malicious activity is voracious to say the least. In fact, it's safe to say that because of intellectual property and the sensitive nature of the data being stored, for some companies there will always be a need for an internal infrastructure. Government regulations regarding privacy and organizations' obligations to ensure protection of personal data will continue to be a driving force for internal resources.

There is also the counterpoint that, because of perception and the fact that their services are in the cloud, cloud vendors devote significantly more attention to security than many internal IT teams in organizations where IT is not the core service. That is a valid point that deserves consideration.

You could also make a case that cloud computing implemented as the status quo across all industries and organization sizes - in its current form - could be considered a national economic risk. As cloud computing proliferates and third parties develop widgets of functionality on a subscription-based model, resources from disparate cloud environments will become interwoven into applications, resulting in nested dependencies. When developing the next killer application, why invent a wheel when you can subscribe to one and integrate it into your own processes? A benign or malicious act that triggers a negative event across the Internet could have significant economic consequences depending on its impact. There could also be inherent compatibility issues when cobbling unrelated cloud-based widgets together.

Cloud computing has also been touted as the return to dumb terminals, with all software being cloud- and subscription-based. I cannot see that happening in the short to mid-term. Much of this article was written on my vacation without wireless Internet access. My writing was done offline using software local to my computer. Installing binaries locally is going to be a way of life for the foreseeable future, albeit possibly a bit neutered in some instances, since more and more of what appears to be local functionality in many applications is actually sourced from the web (e.g. online help, lists of templates, installing add-ins).

If you are reading this because you are interested in cloud computing and are considering how to leverage it for your organization then you are ahead of the curve. Take your time, develop a long-term strategy (including governance policies), implement in controlled phases, and seek advice from professionals to ensure your implementation is the right one for your organization.

Monday, September 28, 2009

Social Networking, Privacy, and Identity Verification

The value of total obscurity on the Internet is overestimated and is more destructive than constructive. Our fears about privacy infringement are largely irrational; at least in the context of social networking.

The ability to be totally obscure allows those bent on malicious intent to cloak themselves in an anonymous or fake identity – allowing them to say anything they want, about anyone they want, and without any repercussions.

It’s still rare, but sites are starting to require identity verification and that trend, in my opinion, is certain to continue. It has to. Identity verification provides for accountability and enforces a set of socially acceptable rules of etiquette just as they exist in the real world.

Most of us aren’t afraid to have our identification verified for credit cards, licenses, loans, job verifications, and many other situations. The large majority of us willingly provide our credit card information to purchase goods over the Internet.

Is caller ID a bad thing? Don’t most of us frown upon those that restrict the rest of us from seeing their identity when they call by setting their caller ID to private?

I just joined a social networking site called BestThinking.com that requires subscribers to have their identification verified. And you know what? I was not only willing but found it refreshing!

They do it in a very clever way with little personal info - nothing that you would consider a security risk to divulge - and a short multiple-choice quiz of information about you to make sure you are really you. The bottom line is that social networking sites where those involved are willing to be known by their real identity add credibility to discussions and opinions.

Remember the sense of foreboding people had when purchasing goods over the Internet was first introduced? As time went on we realized that our fears were irrational. Yes, there is identity theft and sometimes on a colossal level. That is what comes with forging a new frontier. It takes time to get it right. In spite of that, we continue to purchase goods over the Internet because we consider there to be considerably more value than risk.

We are now forging further into the Internet as a new frontier where a new level of identity verification will become increasingly required, making it a more credible place to share information.

Do I think we should encourage legislation to force identity verification as a prerequisite to participating on the Internet? That would be a hearty No.

I do, however, think identity verification will become ubiquitous within this coming decade for the sheer reason that deep down we want to mold the virtual world into one that resembles, as closely as possible, the physical one in which we live.

Why?

In addition to basic familiarity and the comfort that brings, the physical world is more soundly implemented; especially with regards to established rules of engagement. The Internet adopting attributes of the physical world, which have been tested, modified, and enhanced over many millennia, will help to realize the potential of the virtual world. Identity verification brings the Internet one step closer to maturity and in this corner it’s a welcomed step.

I'm interested in your thoughts.

Saturday, September 12, 2009

The Future of Crowdsourcing

Crowdsourcing, for those who are new to the term, is the concept of entities outsourcing problems to a large and typically unrelated group of people. For example, Netflix has an ongoing competition - open to anyone - to develop an algorithm that predicts customers' ratings based on their past ratings and bests Netflix's proprietary algorithm by 10%. The prize is a million dollars.

Crowdsourcing is not a new concept. The Longitude Prize was an open competition established by Great Britain in the 18th century to solve the maritime problem of discerning longitude at sea. John Harrison was awarded the prize for his invention of the marine chronometer. He wasn't treated well either. The dude solved the problem and was delayed the prize money for a full 30 years.

John Harrison's poor treatment by Great Britain illustrates one potential problem with crowdsourcing, but there are others - little to no contracts, lack of continuity with contributors, potential lack of interest and thus little to no participation, low to no wages, and risk of malicious intent.

The global recession has resulted in rapid growth of crowdsourcing due to two major factors:

  1. It is often cheaper for companies to crowdsource solutions as opposed to directly hiring or contracting with professionals.
  2. There are lots of people out of work so the pool of willing participants is high.

When the economy turns around will the resources currently involved in crowdsourcing dry up? Will the competition for intellectual property and time to market pressures move corporations back to more traditional methods that are more easily managed?

My answer to both questions is no.

With corporations having the ability to tap into a world population for ideas and solutions, there are bound to be better results than with a small set of specialists. Crowdsourcing offers the possibility of tapping into brilliance without having to interview for that special person who will develop that next killer product.

As far as the contributor is concerned, crowdsourcing offers recognition, flexibility, collaboration, pay, and other self-satisfying attributes. The city of Los Angeles provided a survey to its residents asking questions such as "What services should be cut to balance the budget?" with a list of city services from which the constituent may choose. This type of crowdsourcing relies on non-monetary rewards but still has a high rate of participation.

One of the more interesting things to consider is how crowdsourcing will affect various occupations, such as those in the creative design industries. When creativity is outsourced to the world, there is a potential for deleterious effects on wages and the number of permanent positions in those fields.

In my opinion, crowdsourcing, like social media, is in its Wild West phase. There will be significant movement and change along the way and its current incarnation will be unrecognizable 5-10 years from now.

As crowdsourcing models mature and become easier to manage the majority of us will be involved in some sort of crowdsourcing as an inherent part of our lifestyle. Just as I continue to manage my LinkedIn contacts and update my status on Facebook, I will likely also be contributing to my favorite crowdsourcing activities.

Tuesday, September 1, 2009

Kano Model for Prioritizing User Stories/Requirements

Regardless of whether you subscribe to Waterfall or Agile as your preferred methodology, some form of prioritization of requirements (Waterfall) or user stories (Agile) will take place.

One of the more difficult tasks is helping the user community to determine the varying degrees of importance of each requirement. One method to help with this process is the Kano Model.

The Kano Model is named after Professor Noriaki Kano, who developed a theory for product development that classifies features into categories based on answers to questions about the specific features. Following are the six classifications.

Attractive: Delighted to have, but unexpected.
One-Dimensional: Features customers compare with your competition.
Must-Be: A must-have feature.
Indifferent: The customer is neutral about the feature.
Reverse: The customer does not want the feature and actually expects the reverse of it.
Questionable: Indicates that the customer is unclear about the nature of the feature.

The features are classified by asking the customer two questions - one functional and the other dysfunctional - to which the customer selects one of five possible answers.

The questions are:

  1. How would you feel if the feature was present in the product?
  2. How would you feel if the feature was absent from the product?

The answers from which they may choose are:

  1. I would like that.
  2. I require that.
  3. I don’t care about that.
  4. I can live with that.
  5. I dislike that.

The initial reaction of most people when posed with the functional/dysfunctional questions is to think "won't the questions offset each other?" As it turns out, most often they don't.

An excellent example that I've heard in the past is a milk carton with a thermometer on the outside so the customer can see the temperature of the milk. One may select answer 1, I would like that, as the functional answer but select answer 4, I can live with that, for the dysfunctional question.

The answers are then compared against the matrix below to arrive at the classification. The letters in the middle of the matrix represent each of the classifications via its first letter, e.g. A=Attractive. To take our earlier example of the thermometer on the milk carton, the functional answer was Like and the dysfunctional answer was Live With, thus the matrix indicates that the classification is A=Attractive.

                        Dysfunctional
              Like   Expect   Neutral   Live With   Dislike
Functional
  Like         Q       A        A          A          O
  Expect       R       I        I          I          M
  Neutral      R       I        I          I          M
  Live With    R       I        I          I          M
  Dislike      R       R        R          R          Q

With the Kano Model, one is able to ask the customer two short and concise questions and ultimately gather critical data regarding the importance of the feature to the product. As a result, a development team can easily discern what’s mandatory, what’s nice to have, and where the land-mines are.
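The matrix lookup is mechanical enough that it's easy to automate when tabulating survey results. Here's a minimal sketch in Python (the function and variable names are my own, not part of any standard Kano tooling):

```python
# The five possible answers, in the order used by the matrix above.
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

# Rows = functional answer, columns = dysfunctional answer.
# A=Attractive, O=One-Dimensional, M=Must-Be,
# I=Indifferent, R=Reverse, Q=Questionable
MATRIX = [
    # like  expect neutral live-with dislike
    ["Q",   "A",   "A",    "A",      "O"],   # like
    ["R",   "I",   "I",    "I",      "M"],   # expect
    ["R",   "I",   "I",    "I",      "M"],   # neutral
    ["R",   "I",   "I",    "I",      "M"],   # live with
    ["R",   "R",   "R",    "R",      "Q"],   # dislike
]

def kano_classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category letter for one answer pair."""
    row = ANSWERS.index(functional.lower())
    col = ANSWERS.index(dysfunctional.lower())
    return MATRIX[row][col]

# The milk-carton thermometer example: Like / Live With -> Attractive
print(kano_classify("like", "live with"))  # prints "A"
```

Run over a whole survey, a tally of these letters per feature gives you the classification distribution at a glance.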

There are additional techniques that can be used in combination with the Kano Model to further prioritize requirements like weighting features but alas that will have to weight for another day.

Tuesday, August 25, 2009

Estimating Defect Production

Estimating the number of defects a project should expect to produce and remove is one of the least talked about subjects in software development. In this article I'll provide some industry statistics to help with that estimation process and hopefully convince you of why Agile will likely reduce the level of effort associated with defect fixing.

There have been studies performed to estimate the number of defects that a software development effort will likely encounter during its life cycle. Obviously, the larger the project, the greater number of defects one should expect to encounter. If you follow this blog, you'll know Steve McConnell's book "Software Estimation: Demystifying the Black Art" is one of my favorites and is always by my side. One of the chapters discusses estimating defects. In his book, McConnell references a Capers Jones (2000) study indicating that a reasonable expectation is 50 defects per 1000 lines of code (LOC).

However, a more granular look shows that smaller projects experience fewer defects than larger ones; for example, a project with fewer than 2K LOC will likely have 0-25 defects per 1K LOC, whereas a project with over 512K LOC will likely have 4-100 defects per 1K LOC. Keep in mind that factors such as your programming language and other technologies will affect this estimate. It's always more accurate to use historical data to estimate effort, but in lieu of that, these data are better than nothing at all.
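Turning the per-KLOC figures above into a defect range is simple arithmetic. A minimal sketch in Python (the two density bands here are just the data points cited above, not a complete size table):

```python
def estimated_defects(loc, low_per_kloc, high_per_kloc):
    """Return the (low, high) range of defects expected for a code base
    of `loc` lines, given a defect-density range per 1000 lines."""
    kloc = loc / 1000.0
    return (kloc * low_per_kloc, kloc * high_per_kloc)

# A small project (1,500 LOC) using the 0-25 defects/KLOC band:
print(estimated_defects(1_500, 0, 25))     # (0.0, 37.5)

# A large project (600,000 LOC) using the 4-100 defects/KLOC band:
print(estimated_defects(600_000, 4, 100))  # (2400.0, 60000.0)
```

Replace the bands with your own historical densities when you have them; as noted above, that will always beat industry averages.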

Here are a few more factors to consider:

- Defects occur at all points during development, e.g. requirements, architecture, coding, documentation, etc.

- There are best practices for defect removal such as design reviews, code reviews, prototyping, unit testing, system testing, and various levels of beta-testing – each of which have different removal rates. The highest removal rates come from formal code reviews (45-70%), prototyping (35-80%), and high volume beta testing (60-85%). Surprisingly (at least to me) one of the lowest removal ratings comes from regression testing (15-30%).
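A common way to reason about stacking these practices - under the simplifying assumption that each one independently removes its fraction of whatever defects remain, which is my framing rather than something from McConnell's text - is to multiply the escape rates:

```python
def combined_removal_rate(rates):
    """Fraction of defects removed after applying each practice in turn,
    assuming each removes an independent fraction of what remains."""
    escaped = 1.0
    for r in rates:
        escaped *= (1.0 - r)  # defects that slip past this practice
    return 1.0 - escaped

# Low-end rates from the figures above: code reviews 45%,
# prototyping 35%, high-volume beta testing 60%
rate = combined_removal_rate([0.45, 0.35, 0.60])
print(f"{rate:.3f}")  # prints "0.857"
```

Even at the pessimistic end of each range, three stacked practices remove roughly 86% of defects, which is why layering several modest techniques tends to beat relying on any single one.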

Here is where I transition from conveying data to providing my opinion on why Agile allows a development team to reduce the effort associated with defect removal when compared to Waterfall.

With Waterfall, the life cycle stages occur in a sequential manner, i.e. requirements, design, development, and test. Although it could be reasonably argued that the number of defects may not significantly change between Waterfall and Agile, I think it is unreasonable to assume that the effort to remove them will remain the same. Defects introduced during the requirements gathering and design stages that are not found until development or even later have a snowball effect because they pile up on one another.

Because of its iterative nature, Agile allows requirements, design, and development defects to show themselves virtually immediately after they have been introduced. Dealing with defects introduced over a single two-week sprint is a lot easier than untangling a slew of defects that accumulated over several months. We've all been on Waterfall projects where integration testing revealed flaws that required rewriting methods and even entire components. I contend that those types of wholesale defect removal efforts are mitigated substantially through continuous integration, daily stand-ups, two-week sprints with customer tests immediately thereafter, and the other feedback loops inherent in Agile.

I’m interested in your thoughts.

Tuesday, August 4, 2009

The Balance Between Talent and Team

I came across an old Joel on Software (Joel Spolsky) article from 2005 called Hitting the High Notes that I thought was particularly thought provoking. One of the premises of the article is that a single brilliant developer is more valuable for innovation and invention than an army of mediocre ones. His reasoning is that the brilliant developer is capable of thinking and creating things that are virtually impossible for mediocre developers. The best line that captures the essence of the article is "Five Antonio Salieris won't produce Mozart's Requiem. Ever. Not if they work 100 years."

But how often does the average organization require the elegance and brilliance of Mozart to compose the software equivalent of his Requiem? When viewed from the context of what the overwhelming majority of software engineers do every day, the answer is “almost never”.

Like Joel Spolsky, I'm a big fan of greatness. Real life accomplishments can often be better than fiction. However, software is prolific and ubiquitous precisely because it doesn't require Mozarts to create useful software that serves an organization, saves tons of time and money, and provides insight into business and industry trends. In fact, the effect of hiring purely on programming talent is at best problematic.

Let’s agree that software engineering is intellectually demanding. You can't be a dolt and do this job effectively. Technologies and subject matter change too quickly so having intellectual horsepower is a must. It’s also my opinion that for those same reasons one needs to understand that being a software engineer is not a 9-5 job.

As an organization, having intelligent and committed individuals is still not enough. The New York Yankees proved that by trying to buy World Series championships year after year through the assembly of the most talented players alone and failed miserably (As a Red Sox fan I consider that a success story).

That talent-alone strategy doesn't work in software engineering either. The whole doesn't necessarily have to be more than the sum of its parts to be effective, but in those cases one should expect mediocrity and not a whole lotta fun for those involved.

To accomplish cool things, if not world-changing things, and have fun doing it, there has to be synergy. Along with talent, intellect, and commitment, everyone should respect one another as people and professionals, understand what each other needs to be effective, complement each other's talents, and together fill all pieces of the engineering pie. If you think getting all these pieces to fit together properly sounds really, really hard - I'm in full agreement!

Accomplishing the assembly of the aforementioned team is certainly a non-trivial task. As with everything in life, it requires compromise and balance. In this case, the compromise and balance is between the individual and team attributes needed, weighting them accordingly, and then hoping you’re right when hiring your next engineer.

Monday, July 27, 2009

It's Winter in July

Aesop's fable of the ant and the grasshopper still holds true – you better prepare for winter or you’re screwed. In case you haven’t noticed we are in an economic winter, my friends, and either you have prepared for it or you have not. It's really that simple.

These are difficult times - no doubt. With the economy the way it is, people will often feel fear and insecurity about their positions. If they've been laid off they may be concerned about when, or even if, they'll find another position.

Admittedly, sometimes being in the wrong place at the wrong time can result in losing one’s job. However, more often than not, the security we have at our current employer, and even the difference between having a job and not, is based on our present value-add to the organization. Your entire career has culminated to this point in time and the perception of your relative importance to the success of a team and organization is essentially set. It was built over the course of years both at this current position and every one before it.

I've been a solid ant for a number of years now. Unfortunately, I've had to learn hard lessons as a result of embracing my inner grasshopper. The repercussions of being cavalier about a matter as important as the care and feeding of one's career will eventually result in pain. Those grasshopper times, however painful, turned out OK for me, and if you are experiencing a harsh winter it will likely be OK for you too. The trick is to constantly reflect on whether you are thinking like a grasshopper or an ant - and to know and accept the consequences!

Over the course of a bunch of years I've come to realize that no one is entitled to anything. Everything you have and hold dear is on the table. So have a vision, be pragmatic, take risks, and most certainly, be assertive with your career. If you do those things well, you will not only best protect that which is now yours but you will grow as a person too.

Wednesday, June 24, 2009

Natural Leaders

Did I intend for this space to be used for discussions about leadership? No. However, the natural risks and pace of change inherent in software engineering make leadership incredibly vital and high-quality leadership hard to come by. When I came across an article written by Gary Hamel called How to Tell If You're a Natural Leader, I immediately thought that it was important to share it here.

What made it most thought provoking are these paragraphs:
  • "Think about your role at work. Now assume for a moment that you no longer have any positional authority—you’re not a project leader, a department head or vice president. There’s no title on your business card and you have no direct reports. Assume further that you have no way of penalizing those who refuse to do your bidding—you can’t fire them or cut their pay. Given this, how much could you get done in your organization? How much of a leader would you be if you no longer held even a tiny, tarnished scepter of bureaucratic power?"
  • "... how much of your power comes from what you are (the VP for HR, for example), and how much comes from who you are ..."
In my mind, that's powerful stuff. Of course leadership is enhanced by title and power - perceived or otherwise. Who isn't going to follow someone, at least temporarily, who can dump your ass into the cold winter of this recession?

Yeah, Gary Hamel's article could be useful as an exercise in self-examination by existing leaders but I think a lot of that internal dialog has already happened. At most, this discussion could be a catalyst for calibrating oneself or possibly a reminder of some of those leadership principles. That's not what I find most interesting about this article.

I think this article is most useful for those who are not currently in an officially sanctioned leadership role. It's for self-examination of whether one is acting like the leader one thinks one is, or wants to be, or is possibly reluctant to be, or is even in denial of being.

Another valuable aspect is to identify the natural leaders on your team and within your organization. Who is being followed without a title? Who on your team and organization tends to be at the center of things? Who do people go to for questions and advice? Those are the people that will build the teams for the future who are entrepreneurial, innovative, and mission oriented. You will need them because today's workforce is transient - and this is especially true in software engineering.

I recommend reading Gary Hamel's article How to Tell If You're a Natural Leader as well as subscribing to his Management 2.0 blog.

I'm interested in your thoughts.

Wednesday, June 10, 2009

Kanban Development Methodology

I was introduced this week to a development methodology called Kanban. For those of you who don't know the origin of the word, a kanban's original manifestation is as a physical card in the Toyota Production System (TPS) that signals the moving and production of parts in a "pull" system; a system where parts are made available as others are pulled into use downstream. The objective in a manufacturing setting is to control inventory based on the speed of production. The concept is applied to software development through controlling the development of tasks. In Agile, tasks could be user stories (the parts) which are subsequently developed into software features (downstream production).

The control of requirements, design, and development inventory is important to maximize project efficiency. A main focus of Agile is to gather requirements, design solutions, develop objects, and test working code at the moment they are needed. If any of these areas are far ahead or behind the others then a critical principle of Agile is being violated. Implementing Kanban helps to keep production of objects at each stage of the project at the appropriate levels.

Here is how it works. The team has a board that looks very similar to a Scrum board. It has columns for stages of development. The stages can look quite different. The images contained in this article are a few examples courtesy of InfoQ. Some look eerily like Scrum boards, don't they?

Regardless of the column names, cards start out in the far left column and move their way to the far right column. The inventory of cards in any one category is controlled by the open slots within that category. For example, using the image with the multi-colored Post-its, when a card is pulled from the To Do column and placed in the Doing column, it leaves an open slot in the To Do column, signaling the need to pull in more To Do items. Unlike Agile, Kanban does not bundle work into sprints. The flow is constant.

In my opinion, where Kanban is most effective is on larger projects where there are teams that make up "columns" of work. The image with the Waterfall style stages illustrates this best. A large project will likely have teams that make up Basic Design, Detailed Design, Development, and Validation. With Kanban, each team can set their capacity of work by the number of slots they make available. If the Basic Design team has 10 slots available then they are saying they can work on 10 basic designs at one time. Detailed Design may have only 7 slots, thus their capacity is 7 Detailed Designs at a time.
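The slot mechanics described above are simple enough to sketch in a few lines. This is a toy illustration, not a real Kanban tool; the column names and capacities (10 Basic Design slots, 7 Detailed Design slots) are taken from the example above, and the card name is hypothetical.

```python
class KanbanBoard:
    """Toy Kanban board where each column has a fixed number of slots (a WIP limit)."""

    def __init__(self, limits):
        self.limits = dict(limits)                    # column name -> slot count
        self.columns = {name: [] for name in limits}  # column name -> cards

    def open_slots(self, column):
        return self.limits[column] - len(self.columns[column])

    def add(self, column, card):
        if self.open_slots(column) <= 0:
            raise ValueError("no open slots in " + column)
        self.columns[column].append(card)

    def pull(self, card, source, target):
        # Pulling a card downstream frees a slot upstream, which is the
        # signal that new work can enter the source column.
        if self.open_slots(target) <= 0:
            raise ValueError("no open slots in " + target)
        self.columns[source].remove(card)
        self.columns[target].append(card)

board = KanbanBoard({"Basic Design": 10, "Detailed Design": 7})
board.add("Basic Design", "story-42")
board.pull("story-42", "Basic Design", "Detailed Design")
```

The key design point is that `pull` enforces the target column's capacity before the move, so work can never pile up beyond what a team has said it can handle.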

It isn't hard to imagine Agile teams using Kanban techniques to control the flow of work. A simplistic view is to wrap Kanban into iterations and you now have Kanban Agile. There are additional differences between Agile and Kanban, and I've provided a couple of links at the bottom of the page if you are interested in learning more.

In a previous article, The Convergence of Continuous Integration and Continuous Release, I discuss the merits of deploying to production immediately after code is checked in and all tests run successfully. Kanban would work exceptionally well in this type of environment.

Whether it's Agile based on Kanban or another style of Agile, there is a consistent attribute of Agile teams: they need to be composed of multi-faceted, adaptive team members. As production velocity shifts across any or all categories of the project, team members should be able to shift into different roles accordingly, i.e. gathering requirements, designing solutions, developing software, and/or testing working code (or coding automated tests). Personally, I think this makes being a software development professional more interesting, but it isn't for everyone.

Click here and here to read two good articles on Kanban as it relates to Agile software development.

I'm interested in your thoughts.

Thursday, June 4, 2009

The Emergence of Enterprise Social Computing

It’s exciting to think that true social computing at the enterprise level is right around the corner; that in the near future, intranet sites that merely pretend to be living will be replaced by open, transparent, and constantly evolving sites that grow organically through the contributions of the organization’s workforce.

Picture, if you will, an environment where organizational leaders blog to their workforce on a regular basis and readers comment freely. Where those same organizational leaders query their workforce and their answers shape the company’s roadmap.

Add to that each member of the workforce being able to customize their own corporate page; blog on their respective expertise; create communities of like-minded people with whom they work, regardless of proximity; search the organization for specific skills, work product, like-projects, and structured and unstructured data; and search for people with specific past experiences related to companies, industries, and expertise. All this with the ability for spontaneous contact through IM, email, VoIP, and web conferencing.

Lastly, this environment is shaped via email postings, blog articles, project and organizational processes, discussion boards, forums, wikis, profiles, and virtually any other type of medium where information is shared.

Today I attended a seminar at Microsoft in Waltham, MA where my colleague, Mauro Cardarelli, as well as other professionals in the SharePoint world, demonstrated real-life implementations of the environment described above. It was clear evidence that this type of environment is ready for prime-time and available to any organization with the vision and discipline to implement it.

Organizations looking for a fully quantifiable ROI will be late adopters. But I’m certain they will be adopters. There are many aspects of this that are quantifiable, but there are many more benefits that are not. One has to know that a fully collaborative organization is a fundamentally more productive one.

Saturday, May 16, 2009

The Convergence of Continuous Integration and Continuous Release

There is an emerging paradigm with continuous release that may result in a convergence between it and continuous integration.

The concept of continuous release is not new. However, in most instances where the term is used, it refers to deploying and testing the entire application on a periodic basis - usually nightly. With the latest paradigm, continuous release becomes more like continuous integration: when changes to a component are checked into the version control system, the immediate build of that component is extended to include testing and releasing it.

With this paradigm, which I'll call true continuous release, the benefits are:
  • In true Agile fashion it provides immediate user feedback thus adding another layer to the Agile feedback loop.
  • Lower risk, since integrating many small, efficient releases is safer than one much larger release of the whole application or a large subset of it.
  • Users are provided the flexibility to elect which features they want to upgrade/install.
A few of the requirements for true continuous release are:
  • Strong feature coverage with automated testing.
  • Tight process with recording component versions.
  • Managing the significant complexity introduced by the recursive nature of component dependencies.
I highly recommend a white paper on this emerging release methodology that you can find here. It contains valuable information on its benefits as well as its implementation difficulties.

Is this the next efficiency model? Is its value limited to companies such as online gaming companies that are structured based on frequent and noticeable changes? Are its complexities too significant that most organizations will pass on implementing it?

I'm interested in your thoughts.

Saturday, May 2, 2009

People, Processes, and Protection

I'm going to drift a bit from my typical subject matter and talk about the importance of managing people and processes for the protection of all involved.

You may wonder in what context the word protection applies. Protection is the obligation of managers to protect their direct reports, direct reports' obligation to protect their manager, and everyone's obligation to protect their customers and organization. The glue that holds all of the various protections together is a codified set of processes. I know codification is optional, but to be great instead of mediocre, the processes need to be as well understood and unambiguous as possible, thus documentation is required.

Example:

Background: A consulting company has a custom software development effort with their best client. The project is highly visible, and the client's financial investment makes it high risk as well. Each two-week development cycle has a scoped set of features that the development team commits to, and each cycle ends with a demo of the working code to the customer.

Manager Protecting Direct Reports: The development manager works with the development team to scope the features to be developed during the current cycle. Each team member commits to one or more deliverables and provides their estimated level of effort. The development manager is doing his/her best to set up his/her reports to succeed by involving them in the scoping, assignment, and estimation processes.

Direct Reports Protecting the Manager: As the cycle progresses, each developer is cognizant of the commitments to the client and all of them know that no one likes surprises. When a feature is bigger than expected or unforeseen obstacles arise that threaten the ability to deliver the scoped feature(s), the development manager is notified as early as possible to allow him/her to manage expectations with those outside of the team.

All Protecting the Customer and Organization: Because the development team does a good job of making the manager aware of risks as they become known, the manager can notify the customer as early in the cycle as possible. Once the risks are known, they are managed throughout the cycle. This allows the customer contact to manage expectations at their end. A byproduct of being disciplined with this process is successful projects and an enhanced reputation for the development team and the organization as a whole.

Team Value System

Protection and process are all well and good, but they don't define what is being protected or why the processes were developed. Before processes can be designed and implemented, there needs to be a team value system. The value system consists of the things the team views as important principles required to be successful.

There are several steps required to crystallize the team's value system and to define who the team is, where it's going, and how it's going to get there. Each of the steps should align across the organization, cost center, department, team, and individual.

1. Vision and Mission Statements: The team needs to know unambiguously why it exists and where it is going. If you don't have them, then write them. You'll be surprised at how much thinking you'll need to do.

2. Team Value System: A team's value system is essentially the rules of engagement at the individual level. Here is a small example:
    • Team First
      • We constantly strive for balance of skills throughout the team.
      • There is redundancy within the team, e.g. primary/secondary client contacts.
      • Choosing the right resource for a particular task/project is dependent on:
        • Task/Project Context
        • Required Skills
        • Desired career path of team members
    • Communication
      • We err on the side of over-communication
      • We are collaborative with solving problems
      • Conversation is more accurate than written correspondence
    • Customer
      • Everyone is everyone’s customer
      • Super-pleasing is the standard level of service
3. Team Composition: This is the act of documenting how your team should be assembled; essentially, what is the strategy with regard to the required skills within the team, and how does that manifest itself in the types or categories of people needed to most effectively perform day-to-day activities. I'd suggest categorizing your current team members into each of the types that you identify to see if you are aligned properly.

Protection Through Process
Once the three steps are complete it is time to create the processes that ensure the value system is efficiently implemented on a day to day basis - without significant dependency on human oversight. Heavy involvement by people to ensure the plan is executed daily is inefficient because of the transient nature of people, the periodic unavailability of people, the expense of human oversight, as well as a host of other reasons. Processes need to be developed in a way where, for the most part, the team runs itself. This paradigm is scalable. When specific people leave the organization or become unavailable for periods of time, the processes don't fall apart. It also leaves people more time for innovations that make the processes stronger.

Teams and their members have good intentions that sometimes go awry. In most cases when things don't go as well as expected, it is because the team/individuals don't have a clear idea of their value system and/or have poor processes that don't protect them. The idea is to consistently put people in a position to win. In my experience, implementing the tools described here helps teams be the best that they can be.

I'm interested in your thoughts.

Wednesday, April 22, 2009

Estimating Probabilities for Delivery (< 10 Tasks)

Delivering an application on the date specified at the beginning of a project is rarely, if ever, achieved - especially if the delivery is expressed as a single date (commitment) instead of a range of dates (estimate). However, there are ways to estimate the probability of delivering on specific dates using relatively simple formulas in just two steps.

Step 1: Calculate the standard deviation across the project's tasks
This isn't as scary as it sounds. Here is the formula:

StandardDeviation = (SumOfWorstCaseEstimates - SumOfBestCaseEstimates)/6

Your estimates are based on Worst, Best, and Expected cases, right? I hope so. The denominator (6) assumes the Worst/Best range spans six standard deviations (plus or minus three), which means 99.7% of actual outcomes would fall within that range. Depending on the risks associated with the project or your skill with estimating, you may want to decrease the denominator as a buffer. For example, using 2 as the denominator assumes the range spans only two standard deviations (plus or minus one), meaning 68% of actual outcomes fall within the Worst/Best range. Steve McConnell's opinion in his book Software Estimation: Demystifying the Black Art is that 68% accuracy is achievable with practice, so you might want to start out using a denominator of 2 or less until your historical data tells you otherwise.

Step 2: Calculate the probabilities for delivery
The next step is to apply the standard deviation to calculate the likelihood of delivery. Below is a table with the probabilities:

Percent Likely    Calculation
2%                Expected case - (2 x StandardDeviation)
10%               Expected case - (1.28 x StandardDeviation)
16%               Expected case - (1 x StandardDeviation)
20%               Expected case - (0.84 x StandardDeviation)
25%               Expected case - (0.67 x StandardDeviation)
30%               Expected case - (0.52 x StandardDeviation)
40%               Expected case - (0.25 x StandardDeviation)
50%               Expected case
60%               Expected case + (0.25 x StandardDeviation)
70%               Expected case + (0.52 x StandardDeviation)
75%               Expected case + (0.67 x StandardDeviation)
80%               Expected case + (0.84 x StandardDeviation)
84%               Expected case + (1 x StandardDeviation)
90%               Expected case + (1.28 x StandardDeviation)
98%               Expected case + (2 x StandardDeviation)

There are more complex methods for calculating delivery dates for projects that have more than 10 tasks. I’ll be writing about those in future articles.

I have a variation of the above in the form of a spreadsheet that I use as a template for inputting tasks with their respective estimates. The spreadsheet will automatically calculate the standard deviation and percent likely delivery dates. If you would like me to send it to you feel free to email me at jack@notarangelo.com and I’ll send it along.

Thursday, April 16, 2009

Workflow Patterns

A colleague of mine gave me good advice a while back about blogging: submit articles frequently and keep them short and concise. I'm not sure how proficient I am at writing short and concise articles, but my guess is that this one will be the shortest and most concise of any I've written or will write.

I stumbled across an excellent web site for workflow patterns. The site contains research on workflow patterns conducted by various academics and is sanctioned by the Workflow Management Coalition. Below are links to the four papers that I think are a must-read if you are working with or developing any kind of process-aware software. You can view the content on the web site directly or download a PDF file at the top of each link with all the patterns under that respective category.

Control-Flow Perspective
Data Perspective
Resources Perspective
Exception Handling Perspective

Thursday, April 2, 2009

Agile Feedback Loops

One of the most important aspects of Agile is the feedback loop contained in virtually every part of the methodology and the built-in efficiencies related to it. I know I'm being Master of the Obvious, but knowing about a problem immediately, and thus fixing it immediately, is so much easier and more manageable than untangling many problems that unknowingly accumulated over the course of months, with only a short amount of time to straighten everything out. Aggghh .. it's stressful just thinking about it. Below are the places in the Agile methodology (or at least the way I have implemented it) where we see feedback loops.

Continuous Integration (CI)
When code is checked in, the application will successfully build or it will not. On my team, we get automated emails after the CI process runs regardless of success or failure so feedback is received in both positive and negative outcomes. The rule is - don't leave for the day until your checked-in code successfully builds and the unit tests succeed. When the CI process fails, it is the top priority of the team to fix it.

Nightly Build and Deploy
Every night the projects that make up the application are built and deployed to a distributed environment. During this process the compilation, automated unit tests, and automated QA tests should all succeed. If any one of those processes fails then, just like with the CI process, it is the top priority of the team to fix the failure.

Daily Stand-up
Every day the project team meets for 15 minutes, where each member tells their teammates what they worked on yesterday, what they are working on today, and what obstacles they have. As a result, missteps in priorities are only a day away from being corrected, and information pertinent to a developer about what someone else is working on is communicated every morning.

Sprint Retrospective
At the end of every two-week sprint, the team discusses what they think went well and what went not so well. This feedback provides the necessary information to tweak our processes accordingly. Little by little, our processes get stronger.

End of Sprint Demo
This is where the rubber meets the road. At the end of every two week sprint we have an immovable demo to the client. What does our customer think about our work? Did we get it right or wrong or a combination? What insight is gained about the application as a whole as a result of this sprint's development and what is the effect on the priorities for future development?

I was in a client demo yesterday for Sprint 12 - my 12th demo for this particular project, with 4 more to go - when I had a somewhat out-of-body experience. Near the end of the demo, I was peppered with questions by a room of 10+ client stakeholders asking "will it do this? will it do that?", to which my response was "well, let's see" (and in virtually all cases the app performed well). I mentally detached from the demo process, saw the entire project life cycle in my mind's eye, and was stunned.

That's when I said to the group "Can you believe how well this is working? This is amazing." And, that's when they started making fun of me by asking me if I wanted to be carried around the room. By the way, having a demo every two weeks with your customer promotes a positive bonding experience (assuming the demos are successful).

The reason for my impulsive utterance was that after developing software for the better part of the last 15 years (a lot of which was not using Agile), I'm still stunned at how well Agile works. Of course you need high-quality developers too, but the constant feedback loops and addressing issues immediately instead of during UAT are directly related to successfully demoing an application 12 times with minimal glitches.

Saturday, March 28, 2009

Why Is Software Estimation Always In the Backseat?

I don't understand why software estimation doesn't take a more prominent role in software development.

Virtually everyone whose livelihood is related to software development has experienced projects where team members have worked a zillion hours, the project went over budget, had features ripped out to meet a date, was delivered with poor quality, etc.

Here are some data to provide context into our experiences in relation to the software development space as a whole (thanks again to Steve McConnell's book "Software Estimation: Demystifying the Black Art"):
  • Many studies conclude that roughly 25% of projects will be canceled, 25% will be on time and within budget, and 50% will be late and/or over budget.
  • A project with 10,000 function points, which is a mid-sized project, has a 1% chance that it will be delivered early and a 20% chance that it will be canceled. As the number of function points goes up so does the failure rate.
  • On average, a late project is 120% late and an over budget project is 100% over budget.
It is likely that the percentages for late/over-budget are only a piece of the picture and that the actual experience is much worse. When a project is 120% late and/or 100% over budget, there is severe urgency to deliver. When that occurs, functionality is often stripped out of the original scope and teams are forced into a death march to get the project done. Most teams do not pay overtime, so whether a developer works 8 hours or 18 hours, the effect on the budget is the same. As a result, the overage stats do not reflect that what is delivered is likely less than what was scoped and that the hours applied are far more than the budget overrun indicates. It is also reasonable to assume that in that environment design time suffers, resulting in an application that is more difficult to maintain, extend, and scale.

The moral of the story is that our industry is dysfunctionally addicted to underestimating our projects! One major reason for this, believe it or not, is expert judgment. There are a lot of smart people with loads of experience in our industry, and many will estimate tasks, and thus projects, based on gut instinct instead of quantifiable data. This is by far the least reliable estimation technique.

In the next article, I'll provide a few methodologies that you can use that are fairly easy to implement and will increase your likelihood of success.

Friday, March 20, 2009

Software Estimation v.1

I recommend to everyone who is involved in any way with software development to read the book "Software Estimation: Demystifying the Black Art" by Steve McConnell. I laughed, I cried, it changed my life. Well maybe not the first two, but it did change the professional side of my life.

There's so much to software estimation that one article cannot do it justice; especially if the objective of the article is to keep it short and concise. This first of probably many blog entries on software estimation will be dedicated to distinguishing between an estimate and a commitment. This is a vital point because the two are often used interchangeably.

Estimating is an unbiased analytical process. With regards to software development, the objective is to approximate the amount of time and/or cost to develop software to solve a problem or satisfy a specific need. As an approximation, an estimate should be communicated as a range, e.g. "it is likely that this project will be delivered in 3-5 weeks".

A commitment is much more definitive. It is an agreement to deliver specific functionality on a specific date for a specific cost, e.g. "the project will be delivered in 6 weeks."

Related to a commitment is a target. A target is a biased process based on the goals of the business, e.g. "We need this software delivered by June 15 to demo at the premier industry convention!".

In my experience, more often than not estimates are expressed as commitments which are influenced by the business driven target.

Communicating estimates as a single point number vs as a range is misleading (in most cases unintentionally). Every software development effort is, in actuality, an invention and an invention cannot be guaranteed to be completed on a certain day for a certain cost. As we all know, our estimates are not 100% accurate. However, if we communicate estimates as a single point number we are implying 100% accuracy.

Developing high/low cost estimates is fairly common and as you can probably guess I recommend communicating both to your customers. There are many techniques to quantify, thus making credible, high/low estimates. Although estimation by intuition is one estimation technique - and is probably the most frequently used - it is the least reliable. I'll be writing about some of many techniques I garnered from Steve McConnell's book in future blog entries.

On the other hand, dates are rarely communicated as high/low estimates. Typically they are communicated as commitments (single-point dates), which often result in very long days for developers, negotiating with the client to deliver at a later date, removing features from scope, providing a poor quality product, or a combination of some or all of those. I have a question for you: when was the last time you presented your delivery dates as probabilities, e.g. "there is a 25% probability of us delivering the software in 11 weeks and a 98% probability that we'll deliver it in 15 weeks"? The techniques for quantifying those probability statements will come in later articles too.

I'm interested in hearing about your issues/resolutions to software estimation problems.

Saturday, March 14, 2009

Creating and Maintaining a Sense of Urgency

A consistent sense of urgency is one of the separators of great teams from all of the others. Urgency promotes teamwork, focus, efficiency, collaboration, pragmatism, vision, and all the other important characteristics of a highly productive team and project.

I’ve been on projects where the management team wanted desperately for our team to have a sense of urgency but was unable to create it – never mind maintain it. Assuming a project team is made up of talented people who enjoy what they do for a living then the reason for the lack of urgency falls on management’s inability to provide a conducive atmosphere to instill it.

Knowing the ingredients that create a sense of urgency is the hard part. Actually creating urgency is simple. All it takes is a disciplined approach to process by management.

The key ingredients are:

  • Short and tightly focused goals that roll into medium term goals which roll into longer term goals.
  • Individual accountability.
  • Visibility into the goals of the team and its individual members.
  • Knowing the dependencies each member of the team has on one another.
  • Everyone’s involvement with process improvement.

Every project already knows the long-term goal – deliver the product that is mutually agreed upon between the developers and customer.

Agile does a great job of providing the short- and mid-term goals via the daily stand-up and sprints, respectively. At the beginning of the sprint, the development team commits to a scope for the sprint, which usually lasts between 2 and 4 weeks and ends with a demo to show what was accomplished. Rolling into the sprints are the daily stand-ups, where each member of the team provides an update on what was accomplished yesterday, what’s planned for today, and any obstacles they have.

For as far back as I can remember, my father has always emphasized that “if you take care of the little things then the big things will take care of themselves.” The daily stand-up epitomizes this philosophy. If we strive to consistently accomplish our goals on a daily basis, then it’s reasonable to assume that we should accomplish the goals for the sprint. The same holds true for the relationship between the sprints and the project as a whole.

The tightly focused goals revolving around the daily stand-ups and the sprints create an environment where a sense of urgency is built in. The best part is that it is self-maintaining. It doesn’t require constant reminders and direction from management. The best processes are those that work on auto-pilot, where a manager’s job is to nurture them, make sure the team stays disciplined to them, and look for ways to improve them.

That raises the question – as a manager, how should my sense of urgency be created and maintained? That’s a blog for another day.

Friday, March 13, 2009

Test Driven Development (TDD) Tip

I'm a big fan of Test Driven Development (TDD). TDD is an Agile methodology where automated unit tests are developed before the feature is developed. By coding automated tests before actually coding the feature, the developer is forced to think through how the feature should work. If you've ever developed manual test scenarios you know what I mean. Many questions come out of the woodwork of one's mind when going through this process. Another advantage of TDD is that once the tests are developed, the development mission becomes clear – code to the tests, and once they all succeed you are done and ready to move on to the next item in the prioritized list of features.

However, TDD requires a bit of procedure built into it – especially with regard to Continuous Integration (CI) and deploying working code on a nightly basis. Both processes are vital to the success of a project. I encourage my team to check in working code daily, ideally multiple times per day. With each check-in the CI server builds the project(s). If the project fails to compile or the unit tests fail, then the build is considered failed. At that point, everyone on the team gets an email stating the failure, and fixing it becomes the top priority. This also applies to the nightly deploy process, which compiles the code, runs the tests, and deploys the project into a distributed environment where automated QA scripts are run. A failure at any point results in top-priority work for the team the next day.

In case you haven't already deduced the problem, if a feature takes a week to develop and all the automated tests are written up front, then the CI and deployment processes will be in failure mode most of the time. Thus, there are three solutions:

  1. Iteratively develop the tests. For example, in a single day a developer may create a few classes, some properties, and a few methods. Tests should be written to accommodate what will be completed today not for the entire feature (this is my preference).
  2. The developer could develop all the tests with an "ignore" flag so the tests don't run.
  3. Use a methodology other than TDD.

I'm interested in your thoughts and alternative approaches that you've seen.

Saturday, January 17, 2009

The Miracle of Agile

I am a disciple of Agile as a software development methodology for the simple reason that it was brought to Engineering teams through divine intervention. That statement might contain a bit of hyperbole but only a bit. Agile has revolutionized the way in which software is being developed by many teams across many industries though it is still unknown or a mystery to many more.

What is Agile? Agile is an iterative approach to software engineering whose precepts are: collaborative teams consisting of cross-functional members; frequent validation of requirements, designs, and implementations; self-organized teams; and unambiguous individual accountability.

How is Agile different?
I'm sure everyone reading this has heard of processes such as requirements gathering, analysis and design, development, QA testing, and user acceptance testing. Agile, as well as many of the traditional approaches (Waterfall being by far the most common), embraces these principles. The differences lie in the implementation of those principles. Where Waterfall will attempt to describe the entire application upfront through documentation, which is then followed by development, QA testing, and user acceptance testing, Agile bundles those principles into iterative life cycles called "sprints". Sprints are of a fixed duration, commonly between 2-4 weeks, and repeated over and over again until all the features are developed, the project budget is exhausted, or time runs out.

Plan-Driven (Waterfall) vs Value-Driven (Agile)
The difference in philosophy manifests in how the principles are implemented: where Waterfall is plan-driven, Agile is value-driven. A plan-driven methodology is heavy on documentation and strives to fully describe the application before developing it, through requirements, analysis, and design. That process provides a framework to methodically develop what's been documented. Typically there is a gargantuan project plan associated with the effort, and everyone marches to the plan with little regard for course corrections and re-evaluation.

A value-driven methodology stresses an empirical engineering process, using an inspect-and-adapt approach with frequent feedback loops, i.e. sprints. The reason for this is that Agile holds that requirements gathering, analysis, design, development, and testing should happen together, when the feature is ready to be implemented. This approach provides flexibility with project changes such as deprecating features, adding new features, and changing the requirements of existing features. As these events occur in a Waterfall project, the documentation becomes cumbersome to maintain; the most efficient time to document a feature is when it's time to develop it. During an Agile project, it's easy to make course corrections, such as changing requirements as a result of what's been developed previously or reordering development priorities because of extenuating circumstances.

Although it's comforting to know that so much thought went into determining what the customer wants and how their requirements should be implemented, there are significant inefficiencies in the level of detail that goes into the upfront analysis. The reasons are simple; among them:
  • Users aren't always clear in their own minds what they want and how they want it.
  • Many people do not handle large documents well. They tend to be too abstract to fully grasp.
  • Things inevitably change in the minds of many users once they can see and feel their requirements implemented.
  • The business environment often changes and directly impacts the priorities of the project.
In my mind, it makes perfect sense to get your customer to see and feel the application features as soon as possible. This doesn't mean abandoning requirements, analysis, and design. It just means that upfront planning should be limited to:
  • Project Initiation: Planning the sprints, project infrastructure, team members, communication matrix, etc.
  • Developing a Prioritized List of Features: This is a list of features, prioritized by importance, with user-stories on how the features will be used.
  • High Level Architecture: A high level architecture should be developed. This phase determines things like the application tiers, technologies employed, thick or thin client, etc.
Once those steps are taken, development should start. I prefer two-week sprints, at the end of which my team gets immediate feedback from the customer. Mistakes in interpreting the user requirements, or users realizing that their requirements need to change, are far cheaper to address at this point in development than at the end of the project during user acceptance testing. The recurring customer demonstrations also build user confidence because users see progress. From the developer perspective, sprints provide a clear mission, and the immovable customer demo at the end provides a constant sense of urgency.

Why isn't everyone doing it?
There isn't a single answer as to why everyone isn't using Agile but the most common is - you guessed it - resistance to change. Here are a few reasons for resistance:
  • The Devil You Know: People are comfortable with what they know and uncomfortable with what they don't. In my experience, once the change is made, most team members see more similarities to their previous approach than they expected. The apprehension about the change is greater than the change itself.
  • Personal Fear: Can I do this? Will I like it? Will I learn it fast enough? Will I look foolish? How will my job be affected?
  • Risk Aversion: The chances of switching methodologies without feeling some amount of pain are low. It will take at least one, and probably several, projects for behaviors to change and for Agile processes to feel natural. The speed of the transition depends on the fervency and commitment of the transition evangelist.
  • Existing Team(s): Agile is highly collaborative and tends to minimize attention on documentation, which means an inherent lack of detailed specifications. If your projects rely heavily on coding to specs, then the transition to Agile could be challenging. However, if your team is already working from prototypes, has close customer involvement, and is highly communicative, then you are already working in an Agile-like style and the transition will likely bring change in structure rather than in approach.
  • The Iron Triangle: There is contention among the sides of the iron triangle - features, cost, and schedule - and it is virtually impossible to fix commitments to all of them simultaneously. Agile makes clear that commitments can only be applied to at most two of the sides; one side always needs to be fluid. For example, if company x has a demo at a convention in three months where it must show y set of features, then the schedule and scope are fixed, so the cost needs to be variable. With Waterfall projects, the commitments to date, scope, and budget are typically, and irrationally, fixed at the beginning of a project. This makes people feel comfortable. However, in reality one or more of those commitments will inevitably be violated, because it is unlikely that enough is known to allow committing to all three.
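The iron-triangle trade-off can be made concrete with a back-of-the-envelope sketch. In this hypothetical Python example (the velocity and cost figures are illustrative assumptions, not from the post), scope and schedule are fixed, so team size, and therefore cost, is the side that must float:

```python
import math

# Illustrative planning assumptions, invented for this sketch.
POINTS_PER_DEV_PER_SPRINT = 10     # one developer's velocity per sprint
COST_PER_DEV_PER_SPRINT = 20_000   # fully loaded cost per developer per sprint

def cost_with_fixed_scope_and_schedule(story_points, sprints_available):
    # Scope (story_points) and schedule (sprints_available) are committed,
    # so the required velocity, hence team size, hence cost, must be variable.
    velocity_needed = math.ceil(story_points / sprints_available)
    devs_needed = math.ceil(velocity_needed / POINTS_PER_DEV_PER_SPRINT)
    return devs_needed * COST_PER_DEV_PER_SPRINT * sprints_available

# 300 points in 6 sprints -> velocity 50 -> 5 devs -> cost 600,000
```

Holding all three sides fixed would require the left-hand side of this arithmetic to change without the right-hand side moving, which is exactly the irrational commitment described above.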
Although it is unlikely that Agile was brought to us through divine intervention, the genius of its approach makes me wonder what took so long to get here.