Virtualization is a means to disconnect the running application from the underlying hardware. It's an enabling technology used by all the cloud computing ...etc
We select servers based on the ratio of CPU to RAM to power and space costs, and that ratio changes over time. Virtualization also gives us the freedom to select hardware at today's prices, without feeling locked into one vendor.
So I buy some of these arguments, but by no means all. If you believe folks like James Hamilton, datacenter buildout at scale means you can reevaluate almost all your hardware assumptions without adding significant cost to the buildout, and with the potential of order-of-magnitude operational benefits. It's hard for me to believe that current enterprise-focused hardware (e.g., blade servers, big Sun or IBM boxes) is what you'd choose.
Virtualization (assumed to mean OS virtualization) can clearly ease operational cost for a legacy application stack, but it's a pretty blunt instrument to apply to things like workload management: you're maximizing the size of your movable state, introducing an incredibly coarse-grained locking infrastructure, and adding considerable management complexity (e.g., blowing out your spanning trees just to preserve the MAC address you had, because the OS doesn't expect it to change in a single tick), all in exchange for preserving your current architecture. That's more than just a 5% runtime penalty.
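To put rough numbers on the size-of-movable-state point, here's a back-of-envelope sketch in Python. Every figure in it (a 16 GiB guest, a 10 GbE link, three pre-copy rounds, 20% of pages re-dirtied per round) is an illustrative assumption, not a measurement:

# Back-of-envelope: moving a whole VM vs. rescheduling a stateless task.
# All figures are illustrative assumptions, not measurements.

GIB = 2**30
LINK_BYTES_PER_SEC = 10e9 / 8        # 10 GbE, ignoring protocol overhead

def precopy_migration_seconds(ram_bytes, rounds=3, dirty_fraction=0.2):
    """Rough model of iterative pre-copy live migration: round 1 sends
    all of guest RAM; each later round resends the pages dirtied in the
    meantime (assumed to shrink geometrically by dirty_fraction)."""
    total, to_send = 0.0, ram_bytes
    for _ in range(rounds):
        total += to_send / LINK_BYTES_PER_SEC
        to_send *= dirty_fraction
    return total

vm_s = precopy_migration_seconds(16 * GIB)    # whole-OS movable state
task_s = (64 * 1024) / LINK_BYTES_PER_SEC     # hypothetical 64 KiB task descriptor

print(f"VM live migration:    ~{vm_s:.1f} s of transfer")   # roughly 17 s
print(f"Stateless reschedule: ~{task_s * 1e6:.0f} us")       # tens of microseconds

Under these assumptions the movable state of a VM costs seconds of wire time where a stateless worker costs microseconds, which is what makes VM-level workload management feel so coarse-grained.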
I'm wondering if at some point "the cloud" bifurcates into a space optimized for serving legacy stacks, and another optimized for more modern designs? If the latter, I'm wondering if the dominant abstraction is lower-level (think context switching, cache-line management, etc), or higher-level (think whatever griddy APIs you prefer).
Or maybe, if the vibe I'm getting here is on target, advances in the lower- and higher-level approaches will have to be complementary: i.e., building an arbitrarily complex software stack is "free" as long as it can then a) be stamped out at massive scale, and b) address the levels of granularity in workload management and the ease-of-consumption issues that hardware advances alone cannot.
Krishna, as I'm totally ignorant on multi-core designs: is the primary power-saving benefit purely a packaging issue (e.g., sharing a power line amongst the cores vs. adding a whole additional "card"), or is it a more integrated thing where there are dynamic runtime benefits derived from shared componentry? I.e., if I have an 8-core CPU in a huge datacenter buildout, is it equivalent to expose that thing as eight separate "computers" vs. one 8-CPU "computer"?
-d