The holy grail of software delivery is to commit to your source repository and watch the new version of your code automatically deploy and run on your beta or QA stack. If you're deploying to a PaaS such as Heroku, you can already do this today. However, those of us who deploy to a private data center, a private cloud, or a public cloud with servers in a VPC are left out in the cold.

I saw this article on Hacker News this morning -
Continuous Delivery with a Real Deployment Platform

I agreed with the CEO's viewpoint, but it was finding out that Andreessen Horowitz is funding them that really piqued my interest.

What are your thoughts?

Ignorance is bliss...

We were doing continuous integration back in the 1990s, when CVS was our source repository. Code that was released into production would automatically be checked out by our makefiles when system tests were run. Errors in a module would cause it to be pushed back into a development state and dropped back on the "desk" of the developer with a full report of the errors. This rarely happened if the developer had done proper unit tests. If they hadn't, they would hear from me, LOUDLY! :-) Of course, sometimes a module that had significant changes would pass unit tests but cause side effects that resulted in a system regression failure. I would not yell at them in such a case, but work with them to improve their unit tests to cover such conditions.
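For anyone curious, here's a rough sketch of the idea; the module name and helper scripts are my own stand-ins for illustration, not our actual makefiles:

    # Check out the production-released sources from CVS before system tests.
    # (Recipe lines must start with a tab, as make requires.)
    checkout:
            cvs checkout -r RELEASE_TAG mymodule

    # Run the system tests; on failure, demote the module back to a
    # development state and send the developer a full error report.
    systest: checkout
            ./run_system_tests.sh mymodule \
                || ./demote_and_report.sh mymodule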

Some time later, we moved to ClearCase for version control and incorporated the same methods we had used with CVS. Because our makefiles were very modular, only a couple needed to be altered to work with ClearCase instead of CVS. This was before SVN and LONG before git.
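That kind of modularity can be as simple as keeping the checkout command in a single included fragment, so swapping version control systems touches one file. Again, a sketch rather than our actual layout, and the ClearCase line assumes a snapshot view:

    # vcs.mk -- the only fragment that knows which VCS is in use.
    # CVS version (before):
    #   CHECKOUT = cvs checkout -r $(RELEASE_TAG) $(MODULE)
    # ClearCase version (after):
    CHECKOUT = cleartool update $(MODULE)

    # Makefile -- defines the module, includes the fragment, and never
    # invokes the VCS directly, so nothing else changes when the VCS does.
    MODULE = mymodule
    include vcs.mk

    sources:
            $(CHECKOUT)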
