the marvels of open source...
Microsoft should have tested this better, of course, but ultimately the flaw is in the open source plugin provided by a third party (who no doubt had to provide detailed test reports before MS would even consider integrating it into their product).
And of course the author isn't without blame either; he should not have published what amounts to username+password data to a cloud based service. Even if there'd been no bug that exposed it to the world, he'd still have been at risk of a breach at the service itself, where his data now sits on a drive, unencrypted, readable by anyone with shell or physical access to the machine.
should not have published what amounts to username+password data to a cloud based service
Agreed, I would not post any such key to the cloud. I just don't trust it enough.
While I agree with the causes and blame, why does AWS/EC2 allow the racking up of insane bills? I realise it's meant to be dynamic, allowing the system to cope with surges, but it all smells a bit off to me. The author was, in his own words, an experienced webdev. It raises the question of how many other, less experienced users out there have been hit after investing in cloud hosting that seems to have "no limits". I've always been a bit sceptical about cloud hosting and pricing, but I have considered it a few times. Heh heh, I don't think I'll be taking the plunge anytime soon, at least not until I understand it better. Which may be never :)
Amazon will send you warnings when billing limits are exceeded (or being approached), but you do have to enable those alerts yourself and set the trigger amounts.
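For anyone wanting to set that up programmatically rather than through the console: a billing alarm is just a CloudWatch alarm on the `EstimatedCharges` metric. Here's a rough sketch of building the parameters for CloudWatch's `put_metric_alarm` call (the threshold and SNS topic ARN are placeholders you'd fill in; note billing metrics are only published in us-east-1, and you have to tick "Receive Billing Alerts" in the account preferences first):

```python
def billing_alarm_params(threshold_usd, sns_topic_arn):
    """Build put_metric_alarm parameters for an estimated-charges alarm.

    Pass the result to a CloudWatch client in us-east-1, e.g.:
        boto3.client("cloudwatch", region_name="us-east-1") \
             .put_metric_alarm(**billing_alarm_params(50, topic_arn))
    """
    return {
        "AlarmName": "estimated-charges-over-%d-usd" % threshold_usd,
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        # Billing metrics are reported per currency.
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # 6 hours; AWS only updates the metric a few times a day
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        # SNS topic that actually emails/texts you the warning.
        "AlarmActions": [sns_topic_arn],
    }
```

The metric is the month-to-date estimate, so the alarm fires once per billing cycle when you cross the line - it's a warning, not a cap; nothing gets shut off for you.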
What's excessive, after all? I worked for a company using AWS services extensively. We'd rack up gigabytes of traffic a day. If they shut us down, or sent us warnings, at every 50MB, say, we'd have had to contact them every few minutes to unblock those keys.
For someone else that 50MB might be more than they generate in a month...
There's nothing wrong with AWS. Every case I've heard of where people were hit like this involved them, through an oversight of their own, exposing access keys to the world over insecure channels (most commonly by uploading them to a public git repository) without having those keys properly limited beforehand.
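"Properly limited" here means an IAM policy that only allows what that key actually needs, so a leaked key can't spin up a fleet of EC2 instances. As a sketch - the bucket name is a made-up placeholder, and you'd attach the document via the IAM console or `put_user_policy`:

```python
def read_only_bucket_policy(bucket_name):
    """An IAM policy document limiting a key to reading one S3 bucket.

    IAM denies everything not explicitly allowed, so a key carrying
    only this policy can list and fetch objects from the named bucket
    and do nothing else (no EC2, no other buckets).
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    # ListBucket applies to the bucket, GetObject to its keys.
                    "arn:aws:s3:::%s" % bucket_name,
                    "arn:aws:s3:::%s/*" % bucket_name,
                ],
            }
        ],
    }
```

With something like this on the key, the worst a thief can do is read your public assets - a very different outcome from an unrestricted root key.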
Which just goes to show you should be careful uploading anything to a public repository. And that you should be especially careful uploading something to a public repository when you're not totally confident that you know exactly what the consequences can be of people getting access to it.
Certainly in agreement. I'm just used to having support that will help you resolve such issues quickly. If they agree he has been hit, why not help him shut the whole thing down? Ignore this if their ToS says you have to fix it yourself. Perhaps it happens too often ;)
Not saying there's anything "wrong" with AWS/EC2 itself, just that it seems to me "support" didn't protect their customer after being made aware of the issue (he obviously ballsed up) - perhaps I've got the "wrong" end of the stick.
They no doubt get similar stories from a lot of people who ran up the bills themselves and now demand their money back.
If I were them I'd not refund instantly but first thoroughly investigate what's going on: go through the logs, see where that traffic was coming from, and look for recognisable patterns.
That can take a few days.