Building distributed clouds of all shapes and sizes

Clouds are no longer just amorphous blobs where you can request compute and storage resources for a monthly fee (think AWS). They’re quickly evolving to satisfy a myriad of new business and operational needs. They are becoming more specialized, and it’s no longer a “one size fits all” world. And similar to the changes we’re seeing in scale up/out, the life cycle of clouds is changing dramatically. We’ve become accustomed to spooling up resources for new development projects and then relinquishing them a few weeks or months later. But now clouds may have a life span of a few days, or maybe even hours or minutes. Seconds? Not out of the question.

One thing these clouds all have in common is that they are becoming more distributed. Whether they’re designed for global ERP applications or mobile edge computing, they typically need to support applications that span multiple sites. This new level of hyper-distribution and granularity requires a different approach to orchestrating cloud resources. But before we jump into what that looks like, let’s take a look at three types of distributed clouds that are becoming more common as we speak – global cloud centers, network edge clouds, and mobile edge clouds.

Global Cloud Centers

Service providers and many large enterprises are building large-scale cloud centers at strategic locations around the globe. These centers leverage the power and reach of global networks and allow users and customers to move applications to specific locations to optimize operations and reduce latency to public clouds, local operations (e.g., a distributed manufacturing facility), or other legacy resources. In many cases these clouds are designed to be multi-purpose/multi-user to mimic the nature of public clouds. Typical applications located in Global Cloud Centers include application modernization, DevOps automation, Big Data analytics, demos/POCs, virtual training, hosting services, and enterprise applications (e.g., ERP, CRM, payroll, accounting, logistics).

Network Edge Clouds

Central offices are an ideal location for new infrastructure services to support network edge applications. Central Office Re-architected as a Datacenter (CORD) is an emerging model for building reusable cloud infrastructure to support a wide variety of edge services. A new specification from opencord.org outlines a reference implementation for transforming central offices into clouds that support everything from Access-as-a-Service to Software-as-a-Service. Traditional access services such as Optical Line Termination (OLT) and Broadband Network Gateway (BNG) are being refactored as software running on commodity servers, white-box switches, and merchant silicon I/O blades. In addition to these access services, sophisticated network edge services (NFV, vCPE, EPC, IMS, CDN, virtual mobile core, baseband unit virtualization, SD-WAN, etc.) can be delivered on dynamic clouds in CORD sites.
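
To give a flavor of what “refactored as software” means here, below is a minimal, hypothetical sketch (not drawn from the CORD specification) of a subscriber access chain modeled as virtual functions that an orchestrator could place on commodity servers in a central office. The class names, image references, and sizing fields are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualNetworkFunction:
    """One refactored access function (e.g., vOLT, vBNG) running as software."""
    name: str
    image: str        # container or VM image implementing the function (hypothetical)
    vcpus: int
    memory_gb: int


@dataclass
class ServiceChain:
    """An ordered chain of functions to be placed in a CORD-style site."""
    site: str
    functions: List[VirtualNetworkFunction] = field(default_factory=list)

    def describe(self) -> str:
        path = " -> ".join(f.name for f in self.functions)
        return f"{self.site}: {path}"


# Example: subscriber access refactored as vOLT and vBNG on commodity hardware.
access_chain = ServiceChain(
    site="central-office-042",
    functions=[
        VirtualNetworkFunction("vOLT", "registry.example/volt:1.2", vcpus=4, memory_gb=8),
        VirtualNetworkFunction("vBNG", "registry.example/vbng:3.0", vcpus=8, memory_gb=16),
    ],
)

print(access_chain.describe())
```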

Mobile Edge Clouds

The third, and probably most interesting, cloud is emerging at the mobile edge. A huge number of mobile applications are driving new sub-10 millisecond latency requirements: online gaming, VR/AR, edge analytics, sensor data services, Industry 4.0, health/sports services, and of course an almost limitless number of new automotive-related services. With these applications comes a whole new set of business and operational requirements, ranging from the aforementioned latency to support for new device capabilities, improved battery life, increased privacy, improved availability, and the need to reduce backhaul traffic. All of these requirements call for clouds that live in hyper-distributed locations, from cell towers and parking meters to swarm locations that support hundreds, if not thousands, of interconnected devices forming an integrated application ecosystem.
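
To see why these latency budgets push workloads out of regional data centers and onto sites like towers and street furniture, here is a minimal, hypothetical placement helper that simply rejects any candidate site whose measured round-trip time exceeds the application’s budget. The site names and the 10 ms default are illustrative assumptions.

```python
from typing import Dict, Optional


def pick_edge_site(latency_ms: Dict[str, float], budget_ms: float = 10.0) -> Optional[str]:
    """Return the lowest-latency site that meets the budget, or None if none does.

    latency_ms maps candidate site names (cell tower, parking meter, swarm node)
    to a measured round-trip latency from the device, in milliseconds.
    """
    eligible = {site: ms for site, ms in latency_ms.items() if ms <= budget_ms}
    if not eligible:
        return None  # no nearby cloud can satisfy the application's latency requirement
    return min(eligible, key=eligible.get)


# Example: the regional data center is too far away; only the tower-mounted site qualifies.
print(pick_edge_site({"regional-dc": 42.0, "metro-pop": 18.5, "cell-tower-7": 6.3}))
```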

The New Orchestration Engine

Building the new generation of hyper-distributed clouds requires a new orchestration engine that delivers a complete set of carrier-grade features:

  • Seamless integration with existing OSS/BSS/Cloud Broker/NFV orchestration systems via open APIs (see the sketch after this list)
  • Simultaneous orchestration of cloud services across multiple sites
  • Scale up to support thousands of compute nodes and millions of VMs
  • Scale out to support hundreds to thousands of geographically dispersed locations
  • Integrated connectivity for internal resources, backbone networks and external resources (public clouds, containers, legacy infrastructure)
  • Complete topology mapping and modeling for data analytics
  • Compliance with and support for key industry initiatives to ensure business and operational interoperability (e.g., MANO)
  • Support for a wide variety of physical and virtual infrastructure models and devices
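
As a rough illustration of the open-API and multi-site points above, here is a hypothetical sketch of an OSS/BSS system submitting a single deployment request that spans a global cloud center and an edge site. The endpoint, payload fields, and site names are assumptions made for illustration; this is not CPLANE’s MSM API.

```python
import json
from urllib import request

# Hypothetical endpoint; a real orchestrator would publish its own open API.
ORCHESTRATOR_URL = "https://orchestrator.example/api/v1/deployments"

# One request carries the full multi-site intent: sites, resources, and connectivity.
deployment = {
    "name": "erp-frontend",
    "sites": ["global-dc-frankfurt", "edge-site-munich-03"],
    "resources": {"vcpus": 16, "memory_gb": 64, "storage_gb": 500},
    "connectivity": {"backbone": "mpls-core", "external": ["public-cloud-peering"]},
}


def submit_deployment(spec: dict) -> int:
    """POST a multi-site deployment request and return the HTTP status code."""
    req = request.Request(
        ORCHESTRATOR_URL,
        data=json.dumps(spec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Print the payload; submit_deployment(deployment) would send it to a live orchestrator.
    print(json.dumps(deployment, indent=2))
```

The caller expresses its intent once; working out per-site placement, connectivity, and scale is the orchestrator’s job.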

At CPLANE NETWORKS we’ve delivered that orchestration engine in a single, integrated platform – Multi-Site Manager (MSM). MSM was built from the ground up for cloud orchestration. We’ve leveraged the MSM platform to deliver OpenStack clouds that scale from micro-sites to mega-scale cloud centers. And since we deliver all of the requirements outlined above in a single platform, there’s no need to cobble together a bunch of disparate open source projects and try to get them to work together, much less scale to your needs. MSM allows you to start small and grow quickly, so there’s little risk – and lots of reward.

CPLANE NETWORKS Multi-Site Manager Architecture

Learn How to Build Distributed Clouds with MSM

Download our case study on how a global service provider has leveraged Multi-Site Manager to power their new cloud service. View a short video to see MSM in action.