Cloud computing and the big rethink: Part 3

After exploring the effect that changing infrastructure is having on the relevance of traditional operating systems and virtual servers, it's time to look at the most powerful force in the cloud computing galaxy: the developer.

In the second part of this series, I took a look at how cloud computing and virtualization will drive homogenization of data center infrastructure over time, and how that is a contributing factor to the adoption of "just enough" systems software. That, in turn, will signal the beginning of the end for the traditional operating system and, with it, the virtual server.

However, this change is not simply being driven by infrastructure. There is a much more powerful force at work here as well--a force that is emboldened by the software-centric aspects of the cloud computing model. That force is the software developer.

Let me explain. Almost 15 years ago, I went to work for a start-up that was trying to change the way distributed software applications were developed forever. The company was Forte Software, since acquired by Sun (itself soon to be acquired by Oracle), and its CTO, Paul Butterworth, and his team were true visionaries when it came to service-oriented software development (pre-"SOA"), event-driven systems, and business process automation.

What I remember most about Forte's flagship product, a fourth-generation language programming environment and distributed systems platform, was the developer experience:

  • Write and test your application on a single machine, naming specific instances of objects that would act as services for the rest of the application.

  • Once the application executed satisfactorily on one system, use a GUI to drag the named instances to a map of the servers on your network, and push a single button to push the bits, execute the various services, and test the application.

  • Once the application tested satisfactorily, create a permanent partitioning map of the application, and push a single button to distribute the code, generate and compile C++ from the 4GL if needed, and run the application.

This experience was amazingly productive. The only thing it could have used was automation of the partitioning step (with runtime determination of scale, etc.), and the ability to get capacity for the application dynamically from a shared pool. (The latter was technically possible if you used a single Forte environment to run all of the applications that would share the pool, but there still would be no automation of operations.)

I have spent the last 10 years trying to re-create that experience. I also believe most distributed systems developers (Web or otherwise) are looking for the same. This is why I am so passionate about cloud computing, and why I think developers--or, perhaps more to the point, solutions architects--will gain significant decision-making power over future IT operations.

I look at it this way: if an end user needs an IT service, such as customer relationship management, a custom Web application, or even a large pool of servers and storage for an open-source data processing framework, meeting that need almost always takes the knowledge and skills of someone who can create, compose, integrate, or configure software systems.

Furthermore, there remains a lot of reliance by nontechnical professionals on their technical counterparts to determine how computing can solve a particular problem. For the most part, in most corporate and public sector settings, the in-house IT department has traditionally been the only choice for any large-scale computing need.

Until recently, if a business unit hired a technologist to look for alternatives to internal IT, any other "IT-as-a-service" offering (outsourcing, service bureaus, etc.) was extremely expensive and would immediately have to be rationalized against internal IT--usually to the detriment of the alternative. On top of that, all of those alternatives required long-term commitments, so "trying things out" wasn't really an option.

The economics of the cloud change things dramatically. Now those services are cheap, their costs can be borne for very short periods of time, and they can all be put on a credit card and expensed. A business unit can go a long way toward proving the economic advantages of a cloud-based alternative to internal IT before its budget is significantly impacted.

Developers are increasingly choosing alternative operations models to internal IT, and will continue to do so while the opportunity is there. Internal IT ultimately has to choose between competing with public clouds, providing services that embrace them, or both.

(There are often reasons why internal IT can and should provide alternatives to public cloud computing services. See just about the entire debate over the validity of private clouds.)

So, how does the cloud accommodate and attract software developers? I believe the differentiator will be the development experience itself; elements like productivity, flexibility, and the breadth and strength of available services will be critical to cloud providers.

We need more development tools that are cloud focused (or cloud extensions to the ones we have). We need more of an ecosystem around Ruby on Rails and Java, currently the two most successful open development languages in the cloud, or innovative new approaches to cloud development. We need to tighten up the development and testing experience of PaaS options like Google App Engine, making things "flow" as seamlessly as possible.

We need more IaaS providers to think like Amazon Web Services. We always hold up AWS as the shining light of Infrastructure as a Service, but the truth is that they are actually a cloud platform that happens to have compute and storage services in their catalog. How much more powerful is AWS with other developer-focused services, such as DevPay, Simple Queue Service, and Elastic Map Reduce? This attracts developers, which in turn attracts CPU/hrs and GB/hrs.

How does all of this affect the virtual server and operating system, the topic of this series? Well, if the application developer is getting more services directly from the development platform, what is the need for a bevy of advanced services in the operating system? And if that platform is capable of hiding the infrastructure used to distribute application components--or even hiding the fact that the application is distributed at all--then why use something that represents a piece of infrastructure to package the bits?

Next in the series, I want to consider the role of the business users themselves in rethinking enterprise architectures. In the meantime, you can check out part 1 of this series, about how cloud computing will change the way we deliver distributed applications and services, and part 2, about how server virtualization is evolving.

About the author

    James Urquhart is a field technologist with almost 20 years of experience in distributed-systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus, though the opinions expressed here are strictly his own. He is a member of the CNET Blog Network and is not an employee of CNET.
