I’ve just completed the first phase of my wide-reaching industry study on cloud knowledge, and the results are exactly as we expected.  The more pork you eat (especially when it is slow-cooked in a smoker or BBQ), the more cloudy you become.

This chart says it all…


This week I spent a few days at RailsConf in Baltimore re-connecting with my developer roots, and it gave me a fresh perspective on what developers really think about cloud.


Although I’ve spent much of my time in the past decade focused on the application layer, it has been mostly at the architecture and integration level – especially recently.  I have both personally coded (in PHP, don’t hate me) and led development teams that have built a number of large web and enterprise apps (in various languages) during that time.  I have attended all sorts of developer conferences and events over the years, and I remember stacks of punch cards in school (showing my age).

Over the past three or so years that I’ve been heads-down in the fast-moving cloud world, nearly all of the developers I’ve dealt with directly (meaning not proxied to me by IT folks) were working on cloud projects of some kind and, as such, were bullish on ‘cloud’.  It seems this vacuum has really skewed [my|our|the industry’s] perception of developers.

Cloud industry perception

Go to any cloud conference or event and everybody is talking about developers and how much they want to use the cloud.  It’s like they have taken on Ballmer’s personality… developers, developers, developers – I’ve even said it myself.  Lots of companies are building great tools for and around the developer community, and entire movements have been started around them.  To the cloud industry, it seems to be mainly about the developers right now (though that is changing).  A few languages and frameworks have gotten the most attention in the cloud world, and Rails is one of the most popular – so you would think Rails developers are in love with the cloud.


Now let’s switch over to my experience at RailsConf.  Out of four days of talks and sessions, only two had ‘cloud’ in the title, and they were both in the ‘sponsored’ track (surprise, surprise…).  Out of all the organic (non-sponsored) talks I attended, the only time I even HEARD the word cloud was in a talk given by one of the Heroku guys (again, surprise, surprise…).  I spoke to a bunch of developers during the breaks and at lunch about their opinions on cloud, and to call the reaction ‘meh’ would be overstating it.

Cloud hate?

It wasn’t cloud hate; it was just a general perspective that it’s all about the code and doing cool things.  One attendee told me he thought cloud was great because it lets him spend more time coding and less time worrying about where the app is running – but that he really didn’t care about all the cloud technology.  That’s a good summary of the vibe I got this week.


Should we (the collective ‘we’ making up the industry) care that the best and brightest developers in the hottest space don’t seem to care about all the cool stuff we’re building for them?  Probably not.  They’re using what we’re building, they like it, and they don’t want to go back to the way it used to be anytime soon.

Having said that, the reality that they see the primary value of cloud as a way to let them spend more time coding, and not much else, is powerful and we shouldn’t forget it.


Follow Scott Sanchez on twitter: http://twitter.com/scottsanchez

Notice: This article was originally posted at http://www.CloudNod.com by Scott Sanchez and is his personal opinion.  Copyright 2011 Scott Sanchez, All Rights Reserved.


Concept: Using AWS IAM to protect your own APIs

Let’s say, hypothetically, that you are considering building a cloud-based service and have come to that fork in the road where you have to think about how to authenticate users to your APIs.

As I was thinking about that problem, it struck me that you could potentially use the new(ish) Identity and Access Management (IAM) service from AWS: create users, set groups and permissions, and authenticate them against IAM as an identity provider of sorts.  Of course, when I read the FAQ entry asking whether you can use it with third-party apps, the answer was “not yet”.

But I think you can, today.


  1. A new user of my API (“user”) signs up in my app/console/web page/etc.
  2. I create a new user record for them in my app and a set of credentials in IAM.
  3. I add them to an IAM group with something like GetObject permissions on some random S3 bucket and stick a tiny file in there.
  4. When the user authenticates to my API with their IAM credentials (access key ID/secret key), I make an auth or GetObject request on their behalf, behind the scenes, against the bucket they have permissions to.
  5. If the S3 request succeeds, I let them in to use my API.
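The flow above can be sketched in a few lines.  To be clear, this is a hypothetical sketch, not AWS-documented usage: the client interface, bucket name and probe file are invented, and a real build would use the AWS SDK (e.g. boto) constructed from the user’s access key ID and secret key.

```python
# Sketch of steps 4-5: gate access to my API by probing S3 with the
# caller's own IAM credentials. Everything here is illustrative; a real
# implementation would pass in an AWS SDK client built from the user's
# access key ID and secret key, and catch the SDK's specific
# AccessDenied error instead of a bare Exception.

def verify_iam_credentials(s3_client, bucket, probe_key):
    """Return True if the caller's credentials can GetObject the probe file."""
    try:
        # This call succeeds only if IAM granted the user GetObject on the bucket
        s3_client.get_object(bucket, probe_key)
        return True
    except Exception:
        return False
```

If the probe succeeds, the API request proceeds; otherwise it is rejected – so IAM effectively acts as the identity provider without AWS having to officially support third-party apps.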

I plan on hacking this together the first chance I get, but if someone else gets around to it first, please let me know in the comments here on CloudNod or on Twitter (@scottsanchez).




I’ve been having a conversation on Twitter with @reillyusa this morning about how a “cloud of clouds” could help prevent the kind of single point of failure we saw take down so many sites yesterday due to issues at AWS.  One availability zone or region goes down at AWS?  No problem: as service levels start to degrade, your apps/data/state/etc. are moved to another zone, or to Rackspace or someone else.  The engine would reduce the cost of HA because it would make smart decisions about where, how and why to move workloads, and it could even keep a bunch of hot/warm instances running that are transparently shared across users to reduce the cost of having your own ‘dedicated’ instances at other clouds waiting for you to fail.

As I blogged yesterday, many of the higher-profile sites that were down simply chose to ignore the options available to them today and pointed the finger at Amazon instead of looking in the mirror.  There are lots of reasons to prefer finger to mirror… cost, time, skill, or all of the above.

Let’s assume for a moment that someone very smart was able to build such a ‘cloud of clouds’ engine.  It would need to allow developers to address what appeared to be a single VM instance but was, in fact, a bunch of VMs in some state (hot/warm/cold) across multiple clouds – with all of the replication, load balancing and failover magic necessary to make this work.  Companies like enStratus and RightScale already do some of this within a single cloud platform, so it’s not a stretch to think they could make it work across multiple clouds.  Of course, there is the small (huge) problem of the major inconsistencies between offerings among the clouds – take EBS, for example, but there are plenty of others.
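The core placement decision such an engine would make can be sketched as a toy example.  The provider names, states and scoring rule below are all invented for illustration – this is not how any real product behaves.

```python
# Toy sketch of the "cloud of clouds" placement decision: among healthy
# replicas spread across providers, fail over to the one that is cheapest
# to activate (hot beats warm beats cold).

from dataclasses import dataclass

@dataclass
class Replica:
    provider: str   # e.g. "aws-us-east-1", "rackspace-ord" (illustrative names)
    state: str      # "hot", "warm", or "cold"
    healthy: bool

# An already-running (hot) replica costs nothing to promote; a warm one
# costs a little; a cold one must be booted from scratch.
STARTUP_COST = {"hot": 0, "warm": 1, "cold": 2}

def pick_active(replicas):
    """Return the healthy replica with the lowest startup cost, or None."""
    candidates = [r for r in replicas if r.healthy]
    if not candidates:
        return None
    return min(candidates, key=lambda r: STARTUP_COST[r.state])
```

So if the hot AWS replica fails a health check, the engine transparently promotes a warm replica at another provider – which is exactly the replication/failover magic that would need to hide behind that single apparent VM instance.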

So with our cloud imaginations running wild, let’s assume this system exists, and it is a dream come true for people who need the DIY-ness of IaaS because their apps just aren’t well suited to static PaaS environments.  Any reasonably cloudy person knows this is really just “enhanced IaaS”, but I feel like it would get lumped in the same bucket as many of the PaaS offerings out there.  I can hear people saying now, “yeah, but it’s really a platform”, and me responding, “yes, a platform for using IaaS”.

Certainly, having open standards and consistent capabilities among ‘commodity’ IaaS offerings would make building this type of ‘cloud of clouds’ easier, but that doesn’t mean people aren’t trying as I type this.  It would reduce the “we’re down because of Amazon” finger-pointing that occurs because (for whatever reason) people opted not to deploy in multiple clouds.




All your eggs in one availability zone? Tsk, Tsk…

This morning the big news is that AWS is having issues affecting customers in US-EAST-1.  So far I’ve seen 4sq, reddit, GoDaddy, Quora and many others on the “is down” list.  What always surprises me when this happens is that people point fingers at AWS, and I always shake my head.

If your business relies on a website being up, why do you allow a failure in a single availability zone to shut down your business?  There are so many tools out there at this point to simplify deployment, scaling and resiliency across multiple availability zones, or even across multiple cloud providers – frankly, you have no excuse.  Quora I can maybe excuse at this point… still fairly new, still working on features and functionality (and user retention, but that’s a discussion for another post), but reddit or 4sq?  Really?

Diversify yourself across multiple availability zones, and even better, across multiple providers.  You’ll sleep better at night and will reduce the chance of showing up on the “is down” list with angry users to answer to.
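Even without fancy tooling, the client-side half of that diversification is simple.  Here’s a minimal sketch, assuming you have deployed the same app behind one endpoint per zone or provider; the URLs and the fetch callable are placeholders, not any real provider’s API.

```python
# Minimal sketch of client-side failover across zones/providers: try each
# endpoint in preference order and only give up if every one fails.

def fetch_with_failover(endpoints, fetch):
    """Return the first successful fetch(url); raise only if all zones fail."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except Exception as err:  # real code would catch specific network errors
            last_error = err
    raise RuntimeError("all availability zones failed") from last_error
```

The same try-in-order idea applies one level up via DNS failover or a load balancer spanning zones – the point is that a single-zone outage becomes a retry, not a headline.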




‘Portability’ has jumped the cloud shark

For years now we’ve heard people moan about cloud lock-in and how things should be portable between clouds.  Today, most of the major cloud management platforms and stacks support multiple cloud technologies (some very good ones are even open source), and folks like CloudSwitch and BitNami can wrap your images so they can be deployed where you need them.  But the word we keep hearing is portability.

Last week the portability buzz was about VMware’s Cloud Foundry – a great step in the right direction that looks like it has the potential to make a real impact in the market.  Today there is portability buzz about the “solution pack” for PHP launched by RightScale in partnership with Zend.  Add these two recent additions to the growing list of ‘portability’-focused offerings, and I see a positive trend forming.

A year ago the roar from vendors was “use our cloud technology” – now I can see a clear movement toward “use any cloud technology”, and open code bases and APIs are starting to become part of these solutions.  To make the “use any” real, the messaging has also moved up the stack from pure infrastructure to where many of us have been saying it should have been for the past three years – applications.  The good news is that there are finally enough people with their eyes on actual standards for cloud portability that in another three years we might have something to work with. :)

My prediction is that just like late 2009/early 2010, when almost every tech vendor cloudwashed their product messaging, 2011 will be the year that those same vendors start to portability-wash their offerings.  If you want to future-proof your portability, demand open APIs, open standards and ideally (at least for the pieces of code that connect ‘other’ clouds) open source.

Do you agree or disagree with me?  Would love to hear it in the comments.




Another observational blog post as I try to catch up after not blogging for three months.  This is what happens when I spend a bunch of time on the street helping customers fulfill their cloudy dreams… :)

One of the things I’ve been trying to evangelize is that for the first time since the dawn of IT, enterprise cloud projects give us a chance to realign IT with the business.  Despite all of the claims over the years of business value, I think cloud really is the change opportunity the business has been looking for, although they won’t call it cloud.

For decades IT costs have grown, and IT has enabled those who use it to become more efficient and grow to levels they never could have reached without it.  But if you think way back to the first huge ‘computers’ that some of the biggest companies purchased, it wasn’t so they could have email; it was so they could have an electronic general ledger or customer database that could help them grow to new levels or provide measurably better service to existing customers.  In other words, “corporate IT” was pulled into existence by the business because there was a real, quantifiable need.

Sometime between the business saying “wow, we can now send out 3x as many invoices in a month thanks to this room, and we can really grow the business” and the mid-’90s, something drastically changed.  The business stopped looking at IT as a core enabler and started looking at it as a cost center.  Yes, IT delivered value, and yes, IT was now core to the business – but only because so much of the business was run on and through a computer.

A few years ago I was involved in a project at a Fortune 500 company to really look at IT spend and try to equate it to REAL business value – beyond just being ‘operational’ – places where IT was really enabling the business to grow the top or bottom line.  As you might imagine, it was a very short list.

So where am I going with this?  Most private cloud projects I’ve run into are being driven by IT under the guise of efficiency, agility, cost savings, etc.  Can those actually come true?  Sure, and sometimes they really do.  But unless the business users are telling you what this “private cloud” should do (and guess what, they won’t be calling it ‘private cloud’), and how it will really help them grow the business in ways that are meaningful, you’re not making a difference.

Bottom line: stop designing the vision, scope and capabilities of your private cloud in the IT vacuum.  Put the word ‘cloud’ away, get your walking shoes on, and go spend a couple of weeks talking to key business users in your company about what they REALLY want from IT.  Help them dream a little about what they would LOVE to see because it would blow up their division/project/forecasts/etc.  I guarantee that what you come back with won’t seem like the ‘private cloud’ you originally thought you should build, and instead, maybe you build a room-sized computer that will truly enable the business to take things to the next level.

Do you agree or disagree with me?  Would love to hear it in the comments.




In the past few weeks I’ve run across a number of people building public clouds who plan on using the highest-end hardware possible: the fastest processors, I/O, memory, SSDs, InfiniBand, redundant everything, high-end SAN hardware, etc.  My reaction every time is… “why???”.

There seems to be a growing concern among some people just entering the public IaaS cloud business that they won’t be able to differentiate themselves on price or features from the AWSes of the world.  So rather than looking at other ways to get into the cloud business beyond IaaS, or trying to differentiate themselves in IaaS on something like support, SLAs, transparency, proven and ‘auditable’ regulatory compliance, brand, relationships, value-add, etc., they think the solution is faster/better hardware sold at much higher prices (or much lower margin).

What they don’t seem to get is that the reason most people are moving to public IaaS is that they don’t care about the hardware.  Any developer worth their salt who’s building for the cloud is building to take advantage of the ability to scale and to turn resources on and off as needed.  Any developer building apps to run in a public cloud who relies on super high-end hardware to get the job done is just plain doing it wrong.

Would it surprise you to learn that all of these companies came from offering high-end colo and managed services?  Talk about blind ambition.  Do you agree or disagree with me?  Would love to hear it in the comments.




Improving Cloud Adoption Rates through User Experience

As product manager at ScaleUp, one of my top jobs is to make sure our cloud management platform has as much impact as possible at what we call the cloud “point of purchase”.

This is that magical spot where the consumer and provider meet.  It’s where consumers locate, order and manage the resources they need.  It’s the spot where providers manage their users, offer capacity, manage and monitor those resources, charge for them, enforce and apply automation, governance, security and other business rules and ultimately provide a service.  In other words, there’s a lot going on at the point of purchase.

It doesn’t matter if the provider is an enterprise IT department running a private/hybrid cloud, a tiny MSP offering public cloud, or anything in between.  It also doesn’t matter if the user is a business IT user sitting in a cubicle farm or a developer in a garage somewhere – the issues are the same, often just with different labels.

Here are just a few of the things we consider every time we want to add something to our platform…

The typical consumer:

    • Is not a cloud expert
    • In general, does not know (or care) about how or why things work
    • Does not have the desire or time to learn a complex system/process
    • Wants a single, integrated platform for their IT resources and activities

The typical provider:

    • Needs to support a wide range of use cases and user types
    • Has great technology inside the datacenter; that is their primary focus
    • Wants to offer complex technology in a simple, self-service manner

Since we launched our cloud management platform two months ago, I’ve spent a good amount of time showing people how we can simplify how they provide and consume cloud services.  The response has been fantastic, and the elegant user experience we have created on both sides of the “point of purchase” is accelerating stalled cloud projects and creating new ones for both providers and consumers.  By solving users’ anxiety about consuming cloud resources and how they will manage them, enterprises and MSPs are moving forward with cloud products at an accelerated pace.

The moral of the story is that while everyone is so focused on what’s happening inside the datacenter, perhaps the most important missing link to improving cloud adoption rates in 2011 is what lies outside of the datacenter – the user experience.




For years, companies that had to store or process data about EU citizens only wanted to do it inside the EU.  In some countries like Germany, the laws can be even tighter and harder to understand, so companies kept their data inside the “Bundesrepublik” to avoid any issues.

The “Safe Harbor” program for data management gains popularity

One development in inter-continental data management that is not new, but is gaining popularity with the rise of cloud computing, is “Safe Harbor”, a program developed by the US Department of Commerce in cooperation with the European Union.  Essentially, once a US company is certified under Safe Harbor, it is deemed “adequate” by the EU and member nations with regard to storing and processing EU private data.

How does a US company get certified?  Well, it just sends a letter to the Commerce Department saying it is compliant, has an adequate privacy policy, and meets the program rules.  The department then publishes the company’s name on the web… and voilà, certified.

In some countries “Safe Harbor” is not enough

The problem is that countries like Germany have privacy laws like BDSG that are more restrictive/prescriptive than the umbrella EU laws. There are published legal opinions that Safe Harbor does not adequately meet BDSG and that additional steps must be taken to meet the required levels of data protection- but there is no German equivalent of Safe Harbor to give companies assurance of compliance. Hence, it is a huge risk to store data on German citizens outside of the borders of Germany.

Furthermore, the US and US companies are not known for being champions of data privacy.  For a US company looking to do business in the EU, Safe Harbor sounds like a dream come true… just find a “certified” provider here in the US, start your engines, and ignore the fine print (and the conflicting legal opinions).  For the more risk-inclined, this may be acceptable.  But I haven’t spoken to a single smart CIO who thought accepting this risk was a good idea, and I agree 100%.

How SaaS companies can succeed

Companies building or using Software-as-a-Service offerings will have to address these same concerns and issues around European and country-specific data protection laws.  Consider hosting your application, or storing sensitive data, in a country like Germany, where the data protection laws are some of the strongest in the world.  This will be a differentiation point for your solution, and more customers will be able to use your application for more scenarios.  Another option to consider is building flexibility into your data storage and processing sub-systems to allow the customer to host those components in the location of their choice.  Partnering with preferred vendors in frequently requested geographies will make that deployment and management easier for your customers.
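That “location of their choice” flexibility can be as simple as a routing table keyed on the jurisdiction the customer picked at signup.  A minimal sketch, with invented region codes and endpoint URLs:

```python
# Hypothetical per-customer data-residency routing. Region codes and
# endpoint URLs are placeholders for illustration only.

STORAGE_ENDPOINTS = {
    "de": "https://storage.example.de",   # data never leaves Germany
    "eu": "https://storage.example.eu",   # anywhere inside the EU
    "us": "https://storage.example.com",  # US-hosted (Safe Harbor territory)
}

def storage_endpoint_for(customer_region, default="us"):
    """Route a customer's data to the jurisdiction they chose at signup."""
    return STORAGE_ENDPOINTS.get(customer_region, STORAGE_ENDPOINTS[default])
```

The point is that residency becomes a per-customer configuration decision rather than an architectural rewrite when a German prospect shows up with BDSG requirements.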

Bottom line: Safe Harbor is a nice concept with an implementation that only half addresses the EU problem, and it doesn’t even touch the more restrictive laws in countries like Germany.  My recommendation is for companies and ISVs with EU or localized privacy issues to select a provider inside the borders of the EU, or of the particular member nation, so they and their customers can sleep easier at night.


Follow Scott Sanchez on twitter for more ramblings: http://twitter.com/scottsanchez

Notice: This article was originally posted at http://www.ScaleUpCloud.com by Scott Sanchez and is his personal opinion.  Copyright 2010 Scott Sanchez and ScaleUp Technologies, All Rights Reserved.
