Friday, December 10, 2010

Could the Unthinkable Happen to Microsoft?

This year has been a pivotal one for the OS business. A few key stats caused my mind to fast-forward to a possibly unthinkable scenario: within the next five years, Windows will not be the dominant OS and, in fact, is set up to be this millennium's OS/2.

Yikes.

You’re probably thinking: “How could Jack possibly think that? Has he lost his mind?”

The latter question is hardly up for debate. The radical scenario, however, is far from certain, but in my view it is certainly plausible. Here are some stunning data, followed immediately by the logic behind my argument. I end the story with the reasons why Microsoft should maintain their coveted place in the OS market.

How Microsoft Could Lose Their Place

From a Morgan Stanley study published in April 2010, here are three stats that I think support my dire Windows scenario (not prediction).

1. By 2015, there will be more mobile than desktop Internet users.


2. Social network users have surpassed email users, indicating a preference for communicating in the context of online collaboration.


3. People are using their mobile devices more as computers than as phones.


Couple those stats with these from Gartner, showing the growth of Android and Apple and the decline of Windows in the mobile device market.


Mobile technology is moving at a breakneck pace, and consumers and businesses alike are adopting it virtually immediately. More and more, mobile users are using their devices the same way they use their desktop computers. With the advent of the iPad and its competitors, one can fast-forward to a day when a user’s mobile device IS their desktop: plug the device into a docking station to gain additional ports, a bigger monitor and/or multiple monitors, a keyboard and mouse, and battery charging.

One can also make the argument that over the next 2-3 years, the mobile OS business will trim down through attrition to only a few big players; right now the trajectory favors Apple and certain flavors of Android. A side note: doesn’t Android’s story remind you of Linux, where at one point there were lots of players that ultimately trimmed down to a select best-of-breed few?

With only 4% of the mobile market, is Microsoft too far behind to catch up in the requisite time? Could metaphors introduced by Android and/or Apple become so well known and intuitive that Microsoft is frozen out of the mobile OS market, and thus, by extension, the desktop market?

Why Microsoft Is Likely To Maintain Their Place

Here are some reasons why the unthinkable Microsoft scenario may not happen, many of which should be attributed to a colleague of mine who counter-pointed me at every turn as I laid out the potential demise of Microsoft.

Android, by nature of being open source, is not a single, consistently deployed OS but in actuality many flavors of the same OS, each with a different feel. By contrast, Microsoft, as usual, will continue to provide a singular vision of its Windows mobile OS, providing consistency across devices.

Businesses and consumers alike use their devices to be continuously connected and to easily collaborate with one another. Outlook, Microsoft’s email client, is well known and well liked by its users, making adoption by primarily-email users highly likely.

So you say that with a current 4% market share, Microsoft is too far behind and too late to the game?

Well, the throwaway, appliance-like aspect of mobile devices means that users will be upgrading to new devices every two years or so. As a result, the opportunity to enter the market with a better mousetrap and convert audiences from one OS to another is constantly available.

My colleague also makes the point that new, unforeseen, and game-changing devices are introduced every five years or sooner, requiring a constant introduction of new OSs for the foreseeable future.

We’ve gotten used to Microsoft letting others break ground and then usurping control of said ground. However, with the speed at which mobile devices are being adopted and replacing desktops, one has to wonder whether waiting has put their place in the OS market in jeopardy.

Dreamforce 2010

This was my first Dreamforce and I have to say that Salesforce really knows how to put on a show. Hype aside, it was a good event and worth the money and time. There was lots to see, plenty of people to talk to, and a good number of significant Salesforce announcements. So here is my summary as it relates to app dev.

The biggest news is their mega jump into cloud-based application development with the announcements of their acquisition of Heroku, their partnership with VMware to offer the VMForce platform, and the abstraction of the Salesforce database platform into a product called Database.com.

Clearly, Salesforce’s slew of recent offerings is their foray into the competition for cloud-based developers, joining Amazon EC2, Google App Engine, and Microsoft’s Azure.

Salesforce is playing catch up. Amazon and Google leveraged their infrastructures for development and hosting services two years ago. Microsoft was talking about Azure soon thereafter and unveiled it for general release earlier this year.

However, with the acquisition of Heroku, a hosted application development platform for Ruby on Rails developers with 107K hosted applications, Salesforce has bought a development community. If you are going to be late to the game, this is how you catch up.

VMForce, a Salesforce/VMware partnership unveiled this past summer, is geared towards the enterprise Java development community. Like Heroku, VMForce is a hosted application development environment.

Both Heroku and VMForce provide developers with the ability to code locally and publish to the cloud.

Database.com is an abstracted version of the Salesforce database platform. The IDE is completely web-based and all development happens in the cloud. Like all demos, it looks pretty slick, but time will tell whether it is scalable and whether its performance is up to the standards that application developers and their customers expect.

Salesforce has bet big on cloud-based application development. It’ll be interesting to see how it all shakes out between them, Google, Amazon and Microsoft.

P.S. A funny thing happened to me on my first day back from Dreamforce. I logged into LinkedIn today and saw a note in my message box from Microsoft offering me a trial version of Microsoft CRM Online. Weird coincidence ;)

Monday, November 15, 2010

Humphrey's Requirements Uncertainty Principle

Watts S. Humphrey (July 4, 1927 - October 28, 2010) passed away a few weeks ago, and as a memorial to him I thought I'd post an article about his Requirements Uncertainty Principle, which is a cornerstone of Agile's approach to defining system requirements.

Watts Humphrey contributed significant thought leadership to the software engineering process, and one of the principles he states is that requirements are inherently uncertain. To quote an excerpt from his book "A Discipline for Software Engineering":

"This creative design process is complicated by the generally poor status of most requirements descriptions. This is not because the users or the system's designers are incompetent but because of what I call the requirements uncertainty principle:

For a new software system, the requirements will not be completely known until after the users have used it.

The true role of design is thus to create a workable solution to an ill-defined problem. While there is no procedural way to address this task, it is important to establish a rigorous and explicit design process that can identify and help to resolve the requirements uncertainties as early in the developmental process as possible."

Back in 1995 when this book was written (2 years after the founding of Scrum), Humphrey recognized that the software engineering process was broken and that repeated attempts at having a requirements document comprehensively describe a proposed system were met with failure many more times than with success. We are 15 years removed from this publication and many development teams are still searching for this fictitious Holy Grail!

As we all know, Agile addresses Humphrey's Requirements Uncertainty Principle by:
  1. Capturing what users want in user stories
  2. At the time of development, collaborating orally and through whatever documentation is required to fully understand what is to be developed
  3. Designing and developing features to address the requirement(s)
  4. Immediately thereafter, providing what's been developed to the user(s) so the requirements can become fully known.
  5. Repeat
Why wait until the end of a project, or many months after a feature has been developed, to get feedback from our users? It is always most efficient to make the inevitable, yet unforeseen, changes to features immediately after they have been introduced.

Sunday, October 31, 2010

The Uncertainty Principle in Software Engineering

Way back in 1996, three computer scientists, Hadar Ziv, Debra J Richardson, and Rene Klosch, wrote a paper that should be better known than it is. It's called The Uncertainty Principle in Software Engineering. Many have shortened the reference to Ziv's Uncertainty Principle. It states that "uncertainty is inherent and inevitable in software development processes and products". This principle sheds light on why Waterfall is well intentioned but flawed as a development methodology and Agile is better suited to deal with the uncertainty in software development.

Ziv's Uncertainty Principle models uncertainty in software engineering using Bayesian Belief Networks. In the software development world, Bayesian nets are most commonly known for their use in search algorithms applied to large volumes of text and hypertext.
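
To make the Bayesian idea concrete, here is a minimal sketch of a single belief update, the basic operation a belief network performs at each node. The scenario and all of the probabilities are my own illustrative assumptions, not numbers from the paper.

  # Minimal Bayesian update: re-scoring a belief as evidence arrives.
  # All numbers are made up for illustration.
  prior_defect = 0.10          # P(module has a latent defect)
  p_fail_given_defect = 0.90   # P(test fails | defect)
  p_fail_given_ok = 0.05       # P(test fails | no defect), i.e. a flaky test

  # Total probability that the test fails.
  p_fail = (p_fail_given_defect * prior_defect
            + p_fail_given_ok * (1 - prior_defect))

  # Bayes' rule: belief in a defect after observing one failing test.
  posterior_defect = p_fail_given_defect * prior_defect / p_fail
  print(f"P(defect | failing test) = {posterior_defect:.2f}")  # ~0.67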

The authors focused on five areas of software engineering to demonstrate uncertainty:
  1. requirements analysis
  2. transition from requirements to design and coding
  3. software re-engineering
  4. software reuse
  5. software testing

The authors also provided three example sources of uncertainty; below are my paraphrased descriptions of each:
  1. Uncertainty in the problem domain: The problem for which an application is developed exists in the real world. We all know the real world has many uncertainties, many of which are not, or cannot be, addressed by the application being developed.
  2. Uncertainty in the solution domain: Building the application itself introduces uncertainty beyond the uncertainties in the problem domain. The example used in the paper is debugging race conditions caused by concurrent use. There is uncertainty in the exact conditions that cause such a bug as well as in how to observe it for reproduction. The authors liken this to Heisenberg's uncertainty principle, whereby the mere attempt at observing an environment will change it. If you've ever had to debug a problem with concurrent use, then I'm sure you see the connection (see the minimal sketch after this list).
  3. Human participation: Human involvement introduces uncertainty through the business logic built into the application. Business logic coded into an application does not typically address explicit uncertainties. We code mostly based on certainty - not uncertainty.
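
Here is a minimal sketch of the kind of race condition described in point 2 (my own assumed example, not one from the paper): two threads increment a shared counter without a lock, updates are lost unpredictably, and instrumenting the code to observe the bug changes the timing that produces it.

  import threading

  counter = 0

  def increment(n):
      global counter
      for _ in range(n):
          counter += 1  # read-modify-write: not atomic across threads

  threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()

  # Often less than 200000, and different on every run; adding prints or a
  # debugger to observe the race changes the timing - the Heisenberg-like effect.
  print(counter)
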
The point is that attempting to address in advance all of the potential situations and conditions that will be faced in production is futile. The most effective way of dealing with the inevitable uncertainties is to get the application in the hands of the users as soon as possible and let the real world tell us what needs to change. This doesn't mean production alone. It means putting the application in the hands of users who can use the application in real world circumstances.

Waterfall makes the noble, yet inherently flawed, attempt at making the application as rock solid as possible on paper before commencing development. Agile's principle of frequent inspection requires that the team deliver working code frequently. Agile accepts and embraces the fact that once in production, the real world will show that significant imagined truths are either false or incomplete, thus requiring non-trivial modifications and enhancements. These changes are most effectively implemented as features are introduced, not when the application is fully developed.

Saturday, October 30, 2010

Another Explanation of Story Points

Story points seem to be the most misunderstood concept within Agile so I'm going to take my stab at helping others to understand what they are and their importance.

The typical question is: Why estimate user stories with these ambiguous and arbitrary things called story points when I can use hours, which inherently make more sense to me?

Hours may make more sense than story points at this very moment, but hopefully I'll be able to change your mind by the time you finish reading this article.

The major impediment to using hours is the variability between people and teams. I recently heard Jeff Sutherland state that a Yale University study has shown that what takes the best developer one hour to complete, the worst developer needs 10 hours to complete. When comparing best and worst teams, the gap grows by orders of magnitude, so an hour for the best teams translates to 2,000 hours for the worst teams. This variability makes estimating in hours a real impediment and typically a huge, time-consuming task on most projects.

Story points simplify the estimation process by taking developer/team variability out of it and instead assigning a level of complexity to user stories. Many in the Agile community use a subset of the Fibonacci sequence as the units of measure: 1/2, 1, 2, 3, 5, 8, 13, 21, 34, and 55 per user story. Think of story points as a more flexible version of assigning estimates as small, medium, or large.

The story point estimation process begins with the team selecting the smallest user story and mutually agreeing on its complexity by assigning it a number from the Fibonacci subset. This user story is called the "keystone", meaning all other user story estimates are based on their complexity relative to the keystone user story.

The byproduct of this is that upon sprint completion the team is able to report to the Product Owner in a more meaningful measure of productivity: the number of story points completed. The Product Owner sees things through the lens of project stories, features, etc., so it makes more sense to communicate velocity, i.e. sprint productivity, in the form of story points than hours. Story points translate directly to what is manifested on screen when the working code is demonstrated; hours do not.

Here is an example of using hours as a measurement of sprint effectiveness. A team of four developers works on a project for a two-week sprint. At 40 hours per week, the four developers have 320 hours of available time during the sprint. However, once the sprint is complete, their velocity shows that they accomplished only 60 hours of work based on the estimated hours assigned to the user stories they completed. That is a misleading depiction of productivity that leads to irrational conversations about whether the team is working hard enough.

Using story points, the team's velocity is expressed in terms of productivity related to the complexity of the tasks completed. For example, let's say the Product Owner knows in advance of the sprint that the team's goal is to complete 35 story points. At sprint completion, the effectiveness of the sprint is measured against the sprint goal and previous sprint velocities. Story point velocity is used to view the overall project productivity in terms of acceleration/deceleration, i.e. is velocity increasing or decreasing.

The Product Owner can also translate velocity into an estimated release date more easily than with estimated hours. As an example, if there are 350 total story points remaining in the product backlog, and the velocity of the team is 35 story points per sprint, then the product backlog will be exhausted in 10 sprints.
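
That arithmetic is simple enough to do in your head, but here is a small sketch of it as code, which also handles the rounding and calendar math. The sprint length and start date are assumed values for illustration.

  import math
  from datetime import date, timedelta

  backlog_points = 350       # story points remaining in the product backlog
  velocity = 35              # story points completed per sprint
  sprint_length_days = 14    # two-week sprints (assumed)
  next_sprint_start = date(2011, 1, 3)  # assumed start date

  # Round up: a partially filled final sprint is still a full sprint.
  sprints_remaining = math.ceil(backlog_points / velocity)
  release_date = next_sprint_start + timedelta(days=sprints_remaining * sprint_length_days)

  print(sprints_remaining)   # 10
  print(release_date)        # 2011-05-23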

Lastly, estimating in story points is not only faster but more accurate than estimating in hours! Jeff Sutherland references two experiences related to this in one of his blog articles (read it here).

I hope this clears up what a story point is, why we use them, and provides a compelling reason to transition from hours to points.



Friday, July 30, 2010

Performing a Review of a Software Development Team

I've had several conversations this week about how difficult it is to provide reliable and consistent software development. Whether the team is an internal IT team, a product team, or a team of consultants, software development is rarely a pretty sight; it is somewhat in line with the "making sausage" analogy. However, it doesn't have to be ugly, so to help I thought I'd provide a checklist of attributes that every development team should consider.

Development Infrastructure

Each developer should be working in a virtual environment. Whether those VMs should live on the user's computer or on the network is a debatable point. I believe the environments should live on the network to ensure they reside in the safest place possible. Hosting VMs on the network also removes the dependency on a workstation being online.

Each developer's local copy of the source code and folder structure should be identical, so that if someone were to go from developer to developer, the locations and folder structures would match exactly.

There should be an integration environment to which a build is deployed automatically at least once a day. If the build fails, the entire team should be notified. At least one person should get an email when it succeeds; I’ve been in situations where everyone thought the build was succeeding reliably when in actuality the build machine was down. The integration environment should be identical to the production environment, e.g. web server, app server, database server. When the application is deployed, the deployment should tear the servers down to a base environment and rebuild them from scratch. At the very least, this environment provides developers with the ability to test their code in an environment that replicates production. Ideally, there should be automated QA testing with each deployment. It doesn’t have to be full regression testing; an automated smoke test goes a long way.

Continuous integration is a must-have. Finding out immediately when a developer has broken the build is vital. It’s so much easier to fix issues as they occur than to untangle them weeks later, after many developers have introduced their own flavors of bugs.
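
As a rough illustration of the notification rules above, here is a minimal build-and-notify sketch. The build script name, email addresses, and local mail relay are all assumptions; a real CI server (CruiseControl and friends) gives you this and much more out of the box.

  import smtplib
  import subprocess
  from email.message import EmailMessage

  def notify(recipients, subject, body):
      msg = EmailMessage()
      msg["From"] = "build@example.com"          # assumed address
      msg["To"] = ", ".join(recipients)
      msg["Subject"] = subject
      msg.set_content(body)
      with smtplib.SMTP("localhost") as smtp:    # assumes a local mail relay
          smtp.send_message(msg)

  # Hypothetical script that tears down, rebuilds, and deploys the build.
  result = subprocess.run(["./build_and_deploy.sh"], capture_output=True, text=True)

  if result.returncode != 0:
      # Broken build: the entire team should know immediately.
      notify(["team@example.com"], "BUILD FAILED", result.stdout + result.stderr)
  else:
      # Success still goes to at least one person, so a silent build-machine
      # outage can't masquerade as a string of passing builds.
      notify(["buildmaster@example.com"], "Build succeeded", "Nightly build OK")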

Development Methodology

This one is a big deal to me because I think very few shops actually have a development methodology. My evidence is anecdotal, gathered as a consultant talking with many clients and prospects and from interviewing software engineers. Most profess to having a methodology, which turns out to be more a set of development procedures than a methodology.

One definition of the term methodology is “the methods or organizing principles underlying a particular art, science, or other area of study.” Organizing principles is the key phrase. Organizing principles are more than a set of procedures. Your methodology should be well understood, and those using it should have confidence that when the going gets rough, your organizing principles will protect you - as long as you stick to them. Abandoning your organizing principles, in whole or in part, when times get tough results in pain for you, your team, your management, and your internal/external customers. In other words – everyone.

Estimation Techniques

My own little survival handbook is “Software Estimation: Demystifying the Black Art” by Steve McConnell. In it, the author states that studies have shown that estimates are, on average, 30% of what the actual level of effort will be. Thus, know that your developers will significantly and consistently underestimate tasks. If you have never read a book on software estimation, this one is a must.

The absence of formal estimation techniques could be the biggest failure point in software development. Communicate estimates in the form of ranges, not single values, e.g. 4-6 weeks. Include all the ancillary tasks such as user documentation, technical documentation, project management, etc. When developing the schedule, consider vacations and holidays. And lastly, stand strong in the face of resistance from those who want the job done cheaper and/or sooner. You can negotiate rate, features, etc., but you can’t negotiate how long it’s going to take by magically lowering the number of hours and expecting good things to come from it.
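
To make that advice concrete, here is a back-of-the-envelope sketch that turns a raw developer estimate into a communicated range, using the ~30% figure cited above. The range multipliers are my own illustrative assumptions, not McConnell's.

  def estimate_range(raw_estimate_weeks):
      # If raw estimates average ~30% of actual effort, expect ~3.3x the raw number.
      expected = raw_estimate_weeks / 0.30
      # Communicate a range around the expected value, never a single number.
      return round(expected * 0.8, 1), round(expected * 1.2, 1)

  low, high = estimate_range(1.5)  # developer says "a week and a half"
  print(f"Communicate: {low}-{high} weeks")  # Communicate: 4.0-6.0 weeks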

Strong Project Management

A project manager, depending on the organization, is either just another resource keeping track of the project plan or a leadership role on the project. The latter requires the project manager to be an oracle and a captain at the helm, and it is the one I prefer. The “captain” reference means that person is a significant influencer of decisions and direction. The “oracle” reference refers to the ability to recognize patterns and thus identify risks and potential pitfalls. Project management is a combination of art and science. It also requires a strong personality who is willing to give bad news as soon as it is known. This isn’t easy for most people to do; many shy away from it, futilely hoping things will work out.

Team Composition

We all want the best and brightest. But what’s often overlooked is how the team is composed. Being smart isn’t enough. Your team needs to be filled with smart people who fill specific roles and view the world in specific ways. People can be big-picture oriented or not, detail oriented or not, organized or not, ambitious or not, etc. It is vital to architect your team’s composition, meaning defining how many of which types of people you want/need on the team. Teams need a variety of types to be successful. Know what those types are and fill them accordingly.

QA Process

The business should write the test plans, and ideally they should be written before development begins. If you are an Agile shop, each user story should have test scenarios. There should be automated QA testing; if there are budget constraints, there are free tools out there to leverage (Selenium and NUnit combined with CruiseControl will work fine in most cases). A minimal example follows.
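
Here is a sketch of the kind of automated smoke test I have in mind, using Selenium's Python bindings. The URL, field names, and page titles are assumed examples; the point is that a handful of assertions exercising the core flow catches a surprising number of broken deployments.

  from selenium import webdriver
  from selenium.webdriver.common.by import By

  driver = webdriver.Firefox()
  try:
      driver.get("http://staging.example.com/login")   # hypothetical staging URL
      assert "Login" in driver.title                   # did the page render at all?

      # Exercise the single most important flow end to end.
      driver.find_element(By.NAME, "username").send_keys("smoketest")
      driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
      driver.find_element(By.NAME, "submit").click()

      assert "Dashboard" in driver.title               # did login actually work?
  finally:
      driver.quit()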

Stakeholder Roles & Cross Functional Relationships

The business users should be heavily involved. My mantra to customers when negotiating for their time is “the quality of the end product correlates directly with the level of your direct involvement.” Given that they are often making a significant capital expenditure, they tend to get involved to whatever degree they are needed. The reality of the outcome as a result of their involvement, or lack thereof, just needs to be communicated.

Resource roles should be clearly defined at the beginning of the project. There should be no ambiguity with what’s needed from the business users.

Cross-functional relationships should be nurtured during and outside of projects. It is important that the business and technical teams know that they are partners and that together they will succeed or fail.

Communication

The glue that holds all of these together, individually and as a whole, is communication. Everyone should be communicating with everyone, often and transparently. When issues occur, make sure everyone knows. I’m not saying that there shouldn’t be a communication protocol; there should be. However, when Engineering disagrees that a bug found by QA is actually a bug, the engineer who developed the feature should have direct access to the QA engineer who logged the bug and to the product manager who articulated the feature.

Customers should be involved on a daily basis (with Agile via the daily standup). This makes the weekly status report nothing more than a summary of what the customer already knows. Demo working code frequently to get feedback from the customer. At least one engineer should be in the demo to hear directly from the customer and be able to ask questions accordingly.

Transparency can be scary because we are afraid of the repercussions of allowing the customer to see our problems as they occur. What I’ve found is that customers are reasonable business people who understand the ups and downs of projects. The surprises are what get them angry. Transparency eliminates surprises and keeps everyone in the loop with everything that is happening on the project. Under this umbrella of openness, the cross-functional team inherently acts as a unified team navigating the project towards a mutual success.

Sunday, March 28, 2010

Why Cloud Computing Won’t Lead to Dumb Terminals

It’s easy to fast-forward to a time when cloud computing will once again lead to an era of dumb terminals. However, I don’t believe that to be true.

Back when dumb terminals were the norm, the world of computers was quite different than it is now. Most importantly, the cost of computer systems was staggeringly higher, and the pace of advancement was much slower.

Moore’s Law is at a point where our individual workstations have processing power equivalent to high-end servers of only a few years ago. It only makes sense that horsepower at the workstation provides for a promising future for distributed computing and, at some point, the ubiquity of grid computing. Leveraging millions, and even billions, of computers, as opposed to a much smaller set of monolithic servers, is the more likely model.

A server-based model provides central points of failure; distributed and grid computing do not.

With cloud computing, the proximity of our data is less important (and likely to be less obvious), but it does not necessarily point to a future where my laptop plugs into the network as a throwback to mainframe days.

I believe it actually points to a time that compares more closely to peer-to-peer computing, similar to Groove’s replication of data across a network of computers. This eliminates central points of failure, promotes greater power (horsepower and volume) by leveraging large numbers, and is more conducive to offline computing and synchronization.

We are all guessing at this early stage of cloud computing, on which every server and workstation is merely a node, so I’m interested in your perspective.

Wednesday, March 10, 2010

The Proximity Problem: A Cloud Fable Part II

The Internet has provided opportunities to advance efficiencies like no other time in human history. With proximity becoming less important every nanosecond, collaboration between parties halfway around the world is now possible on an on-demand and real-time basis. I’d like to think altruistically and see the world as a utopian Garden of Eden where everyone is working together for the betterment of mankind, but alas, that is not reality - especially in the trade of underground contraband.

Manuel is part of a South American drug cartel. He and his peers view their operation like any other multinational conglomerate. The objective is to increase revenue and profits. Just-In-Time manufacturing is as real here as in any other enterprise. Kanban, the famous innovation from the Toyota Production System (TPS) that has been replicated in so many industries, is fully implemented in Manuel’s operation.

The United States, Manuel’s monolithic neighbor to the north, is not only his biggest and most profitable region but also his most tenacious and sophisticated foe. Travel to the US for business meetings has never been risk free. Since 9/11, however, the risks associated with in-person business meetings, whether in the US or at home, have increased dramatically.

Like any high performing enterprise, Manuel’s is innovative and constantly striving to stay ahead of their competition. With the risk of international travel increasing for both Manuel and his customers, the cloud has been a boon to removing proximity as an attribute of doing business.

One innovation is the use of video conferencing as a way of entering into business agreements between Manuel and his customers. Video conferencing not only mitigates the risk of in-person meetings but also provides recordings of the meetings, which act as binding contracts between the parties.

When choosing his provider, Manuel knew that he could not choose one from the US; the Patriot Act took that option off the table immediately. So he opted for an offshore provider, one domiciled in a sympathetic nation. In addition to the typical criteria such as functionality, bandwidth, and uptime, Manuel also considered things like the extradition and privacy laws of the host nation.

Manuel’s organization embraced the cloud in many ways beyond video conferencing. With video conferencing coupled with cloud-based providers for storage, ERP, CRM, email, and banking, Manuel could conduct business anywhere and with anyone. Moving his operation meant moving people – not infrastructure and data.

What Manuel did not, and likely could not, consider was the complete infrastructure of his providers.

Cloud providers’ potential Achilles’ heel is redundancy and failover. Customers of cloud providers expect near-100% uptime; the best providers fall into the five-9s category (99.999%). To provide that kind of uptime there needs to be a sophisticated infrastructure with the same goal as Manuel – minimize proximity as a central point of failure. What Manuel did not know was that several of his providers contracted with US firms for redundancy, load balancing, and disaster recovery.
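
As a quick aside on what five-9s actually allows, here is the downtime arithmetic:

  minutes_per_year = 365 * 24 * 60  # 525,600

  for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
      downtime = minutes_per_year * (1 - availability)
      print(f"{nines} nines: {downtime:.1f} minutes of downtime per year")

  # 3 nines: 525.6 minutes (~8.8 hours)
  # 4 nines: 52.6 minutes
  # 5 nines: 5.3 minutes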

The one flaw in Manuel’s planning has now come home to roost.

The US DEA has indicted Manuel and a Chicago-based group, made up of US citizens, green-card holders, and illegal aliens, for illegal drug trafficking. After years of undercover operations, wiretapping, and packet sniffing, the DEA uncovered a recording of a video conference contract from a data center in Tucson, AZ. The Tucson data center was a tertiary load-balancing location for Manuel’s Chinese conferencing provider. Apparently Manuel struck a big deal over the Chinese New Year, when many Chinese expatriates from around the world used the provider to video conference with their families back home. The provider experienced a high volume of calls when Manuel entered into these negotiations, which kicked in the tertiary site in Tucson based on call volume and the proximity of the callers in the system at the time – the Tucson data center was the most central of the provider's data centers between Chicago and Manuel’s Venezuelan villa.

The DEA has its work cut out for it. Manuel is busted but his nation is unfriendly to US interests. The domicile of the video conferencing provider is likely to be equally uncooperative. If the US is lucky enough through interrogations or packet sniffing to intercept data related to Manuel’s other cloud-based providers, it’s likely to encounter the same minimal level of cooperation from those providers and their home nations.

However, Manuel is in an equally uncomfortable situation. His epiphany is that he has no idea where all his data is stored, and by the very nature of the Internet there is likely no way to remove it from the locations that can work to his disadvantage. His proximity and flexibility advantages are countered, to an unknown extent, by his lack of control over those environments.

Friday, March 5, 2010

Going Rogue: A Cloud Fable

The leadership within a certain division of a Fortune 500 company, let’s call it the BizDev division, is entrepreneurial by nature and thus results oriented. BizDev is impatient with the red tape required to get anything done. Capital expenditures are difficult to get approved, and initiatives are equally difficult to resource, initiate, and complete. The leadership’s perception of their peers in the divisions from which they consume services is that, because of resource constraints, those divisions have a basic reflex towards obstructionism.

Being entrepreneurial, innovation comes naturally to the BizDev leadership team. They contemplated bypassing the corporate infrastructure and protocol for the greater good of their group, looking to the cloud for solutions to their bureaucratic woes. Using operational instead of capital expenditures for outsourced cloud services made sense to them. Implementing cloud solutions is quick, the process is simple, and if they stay mainstream, the vendors will be known quantities with effective and reliable track records. Why trudge the internal landscape when you can subscribe to existing solutions that satisfy the 80-20 rule? BizDev decided to roll the dice, guessing that the monolith didn’t have the appetite to stop them.

BizDev was going rogue.

BizDev specializes in content management and as such has significant storage requirements. They looked to Infrastructure as a Service (IaaS) providers to solve their problem. Storing content with an IaaS provider gives them the ability to increase and decrease capacity in a self-service way with virtually no red tape. The BizDev team saw this as a no-brainer, and after a relatively quick selection process they contracted with a vendor and began using its services shortly thereafter. The IT division within the company got wind of the cloud deal and, although it outwardly condemned the move, didn’t fight it and was actually relieved to get BizDev off its back.

BizDev’s gambit paid off. Over the course of several years, BizDev’s use of content management providers became more sophisticated, using specific vendors based on cost and functionality for specific media. They also expanded their use of the cloud by encouraging their thought leaders to blog on the blog space of their choice and by actively using as many of the available social networking sites as possible.

Word of BizDev’s success in the cloud spread throughout the organization to other divisions, many of which began using the cloud for their own purposes: marketing, software development, legal services, ERP, CRM; you name the cloud service offering and it was being used somewhere within the company.

Although there was apprehension about the lack of governance over the use of cloud providers and the policies around their use, there was no appetite to take on a task that most anticipated to be a politically charged, multi-year project involving many resources. The use of cloud services was widespread and for the most part was providing huge benefits with regard to speed to market, flexibility, and cost reduction. The cloud was considered a boon to efficiency that dramatically improved the organization.

Until the indictment.

The SEC came down hard on the company for alleged indiscretions related to business deals, profit forecasts, and accounting schemes. In addition to emails related to the executives of the organization, the opposing counsel is asking for all the marketing material related to BizDev and other business units to determine if fraudulent claims have been made about product effectiveness.

The company’s lack of governance over cloud resources is now making the process of providing discoverable resources to opposing counsel within federally mandated time frames untenable.

Where does all the marketing content reside? How do they capture what’s required and filter out what isn’t? Are the cloud providers prepared (and willing) to provide what’s requested by the opposing counsel? What are their retention policies and do they follow federally mandated regulations?

Have the respective business units maintained compliance with corporate policies related to external marketing? Is there a controlled and comprehensive set of messaging that’s contained in blogs and articles written by thought leaders? Are there any incriminating status messages posted by overzealous thought leaders in their favorite social networking sites? How does the company search for, retrieve, and provide social networking status messages for key individuals?

Although the cloud is becoming, and will continue to be, a direction for cheap and scalable solutions to common business problems, without governance business entities potentially put themselves in tenuous positions. The cloud is just another of the myriad options for conducting business operations, and as such, cloud vendors should fall under the governance of corporate policies like any other vendor.

Saturday, February 20, 2010

Which Cloud is right for you?

Deciding which cloud is right for your software engineering team isn’t as difficult as you might think. The big players getting all the attention in the media today are Google, Amazon, Microsoft, and Force.com. For most software engineering teams, however, none of them is the best option from a development perspective.

The big four are geared more to production environments than development. Although most of them provide the ability to extend their environments through local IDEs like Visual Studio and Eclipse, that’s hardly enough for a development team. They don’t address things like integration, testing, and demo environments.

A good example of the inadequacy of the big four as software engineering environments is the fact that Amazon deletes VMs when they are shut down. That’s clearly indicative of a public-facing focus, where shutting down a VM equates with shutting the doors on a business. Thus Amazon, as well as Google, Microsoft, and Force.com, are not options when moving “development” to the cloud.

A more appropriate approach for cloud development environments is to use Infrastructure as a Service (IaaS) providers geared specifically towards development teams. They are lesser known, but I assure you they are out there.

Under this structure, environments can be spooled up on demand. Costs are controlled by having VMs running during working hours and shut off during off hours. In addition to conventional application development, teams need temporary plug-and-play VM infrastructures to be used for a finite period of time to develop POCs and demo systems, troubleshoot specific issues, and handle many other one-off situations.

My firm’s application support service offering is an ideal function for IaaS providers. Our development team has 40 VMs consisting of various client applications and project infrastructures. Putting all of them in the cloud, with the ability to start and shut down on demand based on client requests, lets our team focus on our core competency (software development) and offload the functions that aren’t our core competency (infrastructure engineering and management) to cloud providers.
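
As a rough sketch of the working-hours cost control mentioned above, here is what the start/stop logic might look like. The provider client and its status/start/stop calls are hypothetical stand-ins; every IaaS vendor exposes some equivalent API.

  from datetime import datetime

  WORKDAY_START, WORKDAY_END = 8, 18  # 8am-6pm, assumed working hours

  def desired_state(now):
      working_hours = WORKDAY_START <= now.hour < WORKDAY_END
      weekday = now.weekday() < 5
      return "running" if (working_hours and weekday) else "stopped"

  def reconcile(provider, vm_ids):
      """Run periodically (e.g., from cron) to enforce the schedule."""
      target = desired_state(datetime.now())
      for vm_id in vm_ids:
          if provider.status(vm_id) != target:   # hypothetical API
              if target == "running":
                  provider.start(vm_id)          # hypothetical API
              else:
                  provider.stop(vm_id)           # hypothetical API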

IaaS providers aren’t for everyone either. They are ideal for organizations with lots of infrastructure needs. They are typically not suitable for public-facing production environments, nor are they ideal for a single demo VM or a similarly small infrastructure.

In general, here are the rules of thumb:

  1. Big Four: If you have a public-facing product with a high hit rate, then one of the big four providers is likely to be the right choice.
  2. Infrastructure as a Service (IaaS):
    1. If you are a software development team with many sophisticated infrastructures supporting many projects and would like to offload the infrastructure supporting the SDLC, then an IaaS provider is the right choice.
    2. If you are a startup product company and do not want to invest in infrastructure engineering resources, then an IaaS provider is worth considering.
  3. Internal Resources: If you are a small application development team with few environments, your internal resources or those of your consulting vendor will be the most efficient solution.

Saturday, February 6, 2010

Evolution of Software Development and Cloud Computing

Cloud computing is a relatively new term for what has been around for a decade – leveraging virtual environments within the firewall and beyond it. Many software engineering teams have been in the cloud since its inception and will likely be drivers for its adoption going forward.

HOW DID WE GET HERE?

Prior to virtual environments, the infrastructure to support development efforts was physical. Software engineers had their development tools loaded on their respective workstations, and the integration, test, and production environments were made up of servers on the network. Beyond the expense of physical hardware for every required environment, installing development tools and custom applications on workstations and servers often had a corrupting effect on those machines.

If you subscribe to the definition of a cloud environment as being any that is virtual – even those behind the firewall – then the first iteration of a private cloud for software development came with the emergence of using virtual environments for development workstations. This had a mammoth effect on development efficiencies and IT governance.

As an example, in 2001, I was hired as a consultant to help a client develop the next generation of their commercial e-Discovery product. The suite of applications was complex, with many moving parts and technologies. It took the average developer 1-2 weeks to get their development environment up and running. Because of the expense associated with physical infrastructure, there were limited environments to promote code to, and continuous integration wasn’t even a dream.

I was hired back again in 2005 for yet another next-generation development effort. This time, the developer workstations were based on a base VM. Getting up and running took hours instead of weeks. The fact that developer tools were loaded on the VM instead of the host machine was a dramatic improvement for IT. At last, a software engineer’s machine was like everyone else's in the organization.

In addition to developer workstations, the integration and test environments were also virtualized. This allowed the release engineers to revert environments to their base snapshots in preparation for new releases. When we needed to branch testing, we would spool up another virtual environment. Our only limitation was the hardware on which those environments were deployed.

WHERE ARE WE GOING?

The last several years have seen virtual environments that exist beyond the firewall. The current ‘big four’ front runners are Google App Engine, Amazon Web Services (AWS), Force.com by Salesforce, and Microsoft’s Azure platform. These providers offer scalability for large-volume web-based applications with built-in clustering and load balancing.

However, the big four aren’t inherently development environments with sophisticated processes for building and testing applications and maintaining source code control, build scripts, and other requirements of complex projects. For these types of projects, you would use Infrastructure as a Service (IaaS) providers. IaaS providers allow teams to customize virtual server and workstation configurations in a plug and play fashion that’s conducive to on-going development.

A complete cloud solution would be to combine an IaaS provider with one of the big four. The IaaS provider would be the infrastructure for the development and test environments. Releases would be published from the IaaS provider to one of the big four for your high traffic web app.

Software engineering teams interested in using cloud providers need to decide which flavor of Internet-based cloud infrastructure is appropriate for them: extending a developer’s workspace or “being” the developer’s workspace.

Each of the big four allows developers to use existing IDEs to extend their development environment. Here are the major IDEs and the providers that support them:

  • Visual Studio: Azure
  • Eclipse: Google App Engine, AWS, Force.com, Azure
  • NetBeans: Google App Engine, AWS

Each of the big four supports one or more programming languages. The list below is not comprehensive but covers the major languages:

  • Azure: C#, Java, PHP, Python
  • Google App Engine: Java, Python
  • AWS: C++, C#, Java, Python
  • Force.com: Active Script, Apex, Visual Force

As you can see above, which platform you choose depends on your internal expertise. Force.com is the one platform that does not provide support for the major languages. However, it’s a more rapid development environment than the others – somewhat like a 4GL – and I anticipate that it will be a player for the long haul.

The appropriate infrastructure to pair with a move to a cloud provider is a decision to think through carefully. The possible configurations are virtually unlimited: keep the entirety of your virtual environments behind the firewall or in front of it, split environments between the two, and decide which, if any, segments of the infrastructure are public.

The adoption of cloud providers is a non-trivial decision, as is the design of the appropriate infrastructure. However, speaking as a manager, software engineer, and consultant, I see the cloud as a catalyst for a trend that offers developers an opportunity to think more about development and less about infrastructure. Which is as it should be.
