Horizontal Decoupling of Cloud Orchestration for Stabilizing Cloud Operation and Maintenance





In a plain and understandable desire to achieve economies of scale, a cloud orchestration software system should be capable of managing a huge farm of hardware servers. Unfortunately, even with the most advanced software configuration and management tools, the field has learned through trial and error that the distribution scale of a cloud orchestrator must not be too large. For instance, VMware, probably among the most experienced players in the trade, stipulates a rule-of-thumb upper bound for its successful orchestrator vRealize: no more than 1,000 servers per vRealize instance. Scaling beyond that level makes cloud operation and maintenance unstable and incurs a sharp increase in operation and maintenance cost. Recent achievements in highly efficient, lightweight containerization popularized by Docker have ignited additional orders-of-magnitude growth in the number of micro-servicing endpoints, further worsening the scalability problem in cloud orchestration. The current poor scalability of cloud orchestration means that today's clouds exist as small, isolated scatters and thus cannot fully tap the intended cloud potential of economies of scale.

The essential problem behind poor scalability in cloud orchestration is that all cloud orchestrators, whether commercial offerings or open-source projects, unanimously and understandably evolve from a traditional horizontally tightly coupled architecture. A horizontally tightly coupled orchestrator is a collection of software components that are heavily knowledge-interwoven: each component knows the existence, roles, and duties of the others from the moment of system installation and throughout its entire remaining lifecycle. When the scale grows large, some queues of events and messages inevitably become long; write-lock mechanisms for consistency protection and copy-on-write database accesses also gather momentum to slow down responsiveness; and an occasional failure, even a benign timeout, occurring at one point will very likely pull down other knowledge-interwoven parts. As a matter of fact, all cloud operators and service providers heavily depend on human-based 24x7 on-guard operation and maintenance teams playing the role of firefighters!

We present Network Virtualization Infrastructure (NVI) technology to horizontally decouple cloud orchestration. NVI minimizes the size of each cloud orchestration region down to a single hardware server, e.g., in the form of an OpenStack all-in-one installation. An orchestrator installed on one server of course has no knowledge whatsoever of any other orchestrator managing another server. While this obviously maximizes stability for cloud operation and maintenance, the NVI-pooled overlay cloud resources retain unbounded scalability. This is because NVI connects overlay nodes across orchestrators in user mode, and only when one node initiates communication with another (think of an HTTP connection!). Moreover, NVI can connect heterogeneous virtual CPUs and cloud orchestrators, e.g., heavy-duty hypervisor VMs and lightweight micro-servicing Docker containers managed by separate and independent orchestrators such as OpenStack and/or Kubernetes, and can also transparently link different cloud service providers.
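The on-demand, user-mode nature of a trans-orchestrator connection can be illustrated with a minimal sketch. All names, addresses, and methods below are hypothetical illustrations, not DaoliCloud's actual API; the point is only that neither orchestrator holds any knowledge of the other until one node initiates communication, much like an HTTP client first learning of a server at request time.

```python
class Orchestrator:
    """One orchestration region, as small as a single all-in-one server."""

    def __init__(self, name):
        self.name = name
        self.nodes = {}   # overlay node name -> underlay address (local knowledge only)
        self.flows = {}   # (src_overlay, dst_overlay) -> peer underlay address

    def register_node(self, overlay_name, underlay_addr):
        self.nodes[overlay_name] = underlay_addr

    def resolve(self, overlay_name):
        """A user-mode query the peer answers on demand, like serving an HTTP request."""
        return self.nodes.get(overlay_name)

    def connect(self, src_overlay, peer, dst_overlay):
        """Invoked only when src initiates communication; no prior peer knowledge."""
        underlay = peer.resolve(dst_overlay)   # learned now, not at install time
        if underlay is None:
            raise LookupError(f"{dst_overlay} is unknown to {peer.name}")
        self.flows[(src_overlay, dst_overlay)] = underlay
        return underlay


# Two independently installed orchestrators; neither is configured with the other.
a = Orchestrator("openstack-allinone-1")
b = Orchestrator("kubernetes-node-2")
a.register_node("vm1", "10.0.0.1")
b.register_node("pod7", "10.0.1.7")

# Connection state appears only after vm1 initiates traffic toward pod7.
assert a.flows == {}
a.connect("vm1", b, "pod7")
assert a.flows[("vm1", "pod7")] == "10.0.1.7"
```

Because the peer lookup happens per connection rather than at installation, adding a new orchestration region requires no reconfiguration of any existing region.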

The key enabler that lets any two mutually unaware orchestrators connect their respectively managed overlay nodes in user mode is a novel OpenFlow formulation for forwarding trans-orchestrator underlay packets. This new SDN formulation eliminates any need for underlay packet encapsulation, whether VLAN, VXLAN, VPN, MPLS, GRE, NVGRE, LISP, STT, or Geneve: name any packet encapsulation format you like. With packet encapsulation avoided, the involved orchestrators need not know one another in host mode, neither at system installation time nor at any point in their remaining lifecycles. This key enabler achieves complete horizontal decoupling of cloud orchestration. With connections taking place entirely in user mode, cloud deployment, operation, maintenance, system upgrading, etc., can become 100% automated.
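The contrast with encapsulation can be made concrete with a small sketch. The addresses and helper functions below are hypothetical and not DaoliCloud's actual OpenFlow rules; the sketch only illustrates the general header-rewriting idea: instead of wrapping the overlay packet in an outer header (as VXLAN or GRE would), an egress flow rule rewrites the overlay addresses to underlay ones, and an ingress rule at the destination restores them, so the packet never grows and no tunnel endpoint pairing between orchestrators is required.

```python
def egress_rewrite(pkt, underlay_src, underlay_dst):
    """Match on overlay addresses and rewrite to underlay; no outer header is added."""
    out = dict(pkt)
    out["src_ip"], out["dst_ip"] = underlay_src, underlay_dst
    return out

def ingress_restore(pkt, overlay_src, overlay_dst):
    """At the destination's virtual switch, restore the original overlay addresses."""
    out = dict(pkt)
    out["src_ip"], out["dst_ip"] = overlay_src, overlay_dst
    return out

# An overlay packet between two nodes managed by different orchestrators.
overlay_pkt = {"src_ip": "192.168.0.5", "dst_ip": "192.168.0.9", "payload": "hello"}

# Egress: rewritten for underlay transit, same size as the original packet.
wire_pkt = egress_rewrite(overlay_pkt, "10.0.0.1", "10.0.1.7")
assert wire_pkt["dst_ip"] == "10.0.1.7"

# Ingress: the receiving node sees the untouched overlay conversation.
delivered = ingress_restore(wire_pkt, "192.168.0.5", "192.168.0.9")
assert delivered == overlay_pkt
```

Since rewriting is stateless per flow entry, each orchestrator only programs rules for its own nodes, which is what removes the host-mode coupling that tunnel-based overlays require.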

With this problem-solving NVI architecture realizing truly scalable cloud orchestration, DaoliCloud attempts to contribute to the cloud industry a new production line: "Build, ship & low-cost operate any cloud at any scale", a new frontier extending the great inspiration of Docker's "Build, ship & run any app anywhere".

The website www.daolicloud.com exposits, in "for dummies" simplicity, a near-product-quality prototype of the cloud orchestrator. We cordially invite the reader to register an account for trial use. Some trial users may come to appreciate that the new architecture for cloud orchestration provides a number of never-before-seen useful cloud properties, which are probably only possible through this architectural innovation in cloud orchestration and network virtualization.


