The worldwide COVID-19 pandemic unfolding this spring has demonstrated the value of cloud services combined with remote access in supporting businesses and workers who can’t operate in on-premises environments. Distributed cloud services in particular stand poised to become the true foundation of efficient, well-managed business operations.

Distributed cloud, according to Gartner, “is the distribution of public cloud services to different physical locations, while the operation, governance, updates and evolution of the services are the responsibility of the originating public cloud provider.”

I spoke with Bindu Sundaresan, director at AT&T Cybersecurity; Ankur Singla, CEO of distributed cloud services platform Volterra; and a few other industry experts to get their takes on what’s happening in this space.

SEE: Top 100+ tips for telecommuters and managers (free PDF) (TechRepublic)

Scott Matteson: How is the distributed cloud changing how infrastructure and applications are managed?

Ankur Singla: Due to the exponential growth of data-driven technologies (think artificial intelligence, the Internet of Things, and 5G), apps and data, along with their supporting infrastructure, are increasingly spread across edge sites and multiple clouds. These distributed workloads introduce several serious operational and security challenges for organizations. Specifically, IT teams are struggling to manage these workloads securely, reliably, and cost-effectively. What’s more, these challenges will only continue to grow: By 2025, up to 90% of enterprise-generated data will be produced and processed outside traditional data centers or a single centralized cloud.

The distributed cloud is an emerging approach that will enable organizations to manage the disparate components of their enterprise IT infrastructure as one unified, logical cloud. Because organizations can deploy apps with a common set of policies and overarching visibility across all locations and heterogeneous infrastructure, using a cloud-native model, the distributed cloud mitigates the operational challenges described above. This is why Gartner named distributed cloud one of its “Top 10 Strategic Technology Trends for 2020.”
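To make the "one unified, logical cloud" idea concrete, here is a minimal, hypothetical sketch of what defining a policy once and pushing it to heterogeneous sites might look like. The `Site` class and `apply_policy` helper are illustrative stand-ins, not any vendor's actual API:

```python
# Hypothetical sketch: one policy definition applied uniformly across
# heterogeneous sites (public clouds, private clouds, edge locations).
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    provider: str  # e.g. "aws", "azure", "edge"
    region: str

# A single, provider-agnostic policy, defined once...
policy = {
    "app": "checkout",
    "replicas": 3,
    "tls": "required",
    "allowed_ingress": ["10.0.0.0/8"],
}

sites = [
    Site("us-east", "aws", "us-east-1"),
    Site("eu-central", "azure", "germanywestcentral"),
    Site("store-042", "edge", "on-prem"),
]

def apply_policy(site: Site, policy: dict) -> None:
    # In a real distributed cloud platform, this step would translate the
    # common policy into each provider's native constructs.
    print(f"[{site.provider}:{site.name}] applying {policy['app']} policy")

# ...then pushed to every location from one control loop.
for site in sites:
    apply_policy(site, policy)
```

The design point is that the policy lives in one place; only the translation to each provider differs, which is what gives operators a single configuration and visibility model.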

Bindu Sundaresan: In today’s digital business environment, businesses have become more decentralized and mobile in order to fulfill business objectives and remain competitive. This has given rise to a distributed cloud model, which ultimately streamlines location, regulatory, and security considerations while reducing latency and large-scale outages. In fact, Gartner predicts that by 2023, most cloud service providers will have a distributed, ATM-like presence to serve a subset of their services for low-latency application requirements. To manage such distributed cloud environments, we’ll see an emergence of micro data centers located in high-traffic areas, in parallel with the ATM-like cloud service points.

The surge in distributed cloud adoption brings challenges for the teams tasked with optimizing, managing, and protecting the infrastructure. Legacy networks, which were designed for a centralized world, are not sufficient to handle the amount of traffic that cloud-based applications create. And although traditional firewalls help to inspect and protect traffic flowing into the data center or other physical locations, they cannot provide visibility or security for remote users who connect directly to the internet or to cloud-based resources.

Scott Matteson: How will the distributed cloud impact hybrid cloud and multi-cloud projects?

Ankur Singla: As noted above, the rise of hybrid cloud deployments, and especially multi-cloud deployments, is a big part of what’s driving the need for the distributed cloud. In fact, a recent study by Propeller Insights found that IT leaders consider secure and reliable connectivity between providers, differing support and consulting processes, and differing platform services to be the biggest challenges in managing workloads across different cloud providers. The distributed cloud helps manage all of these disparate computing environments as a single logical entity, improving the performance, reliability, and manageability of multi-cloud deployments.

SEE: How to trim your cloud infrastructure costs (TechRepublic)

Bindu Sundaresan: The effect of decentralization on all-important cloud access is an issue that can quickly become untenable, with latency that only worsens for users and offices geographically remote from the data center. Because today’s organizations have become reliant on cloud-based applications, the risk of being locked out of the very thing on which their business depends has increased. Many organizations are attempting to address this performance issue, and add resiliency to their network, by connecting their branch offices and remote users directly to the internet using multiple network circuits and SD-WAN, bypassing the data center altogether when accessing cloud-based applications.
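A minimal sketch of the SD-WAN "local breakout" decision Sundaresan describes might look like the following: traffic bound for sanctioned cloud apps exits the branch directly, while everything else is backhauled to the data center. The domain list and function names here are invented for illustration:

```python
# Hypothetical sketch of an SD-WAN path-selection rule. Sanctioned SaaS
# traffic goes straight to the internet; the rest rides the WAN back to
# centralized inspection in the data center.
CLOUD_APPS = {"office365.com", "salesforce.com", "workday.com"}

def next_hop(destination_domain: str) -> str:
    # Direct-to-internet breakout keeps latency low for cloud apps...
    if any(destination_domain.endswith(app) for app in CLOUD_APPS):
        return "local-internet-breakout"
    # ...while unknown or internal traffic is still backhauled.
    return "backhaul-to-datacenter"

for dest in ("mail.office365.com", "legacy-erp.internal"):
    print(dest, "->", next_hop(dest))
```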

Scott Matteson: How does the edge fit in?

Ankur Singla: In addition to the rise of multi-cloud deployments, the sharp increase in edge computing deployments is causing apps and their infrastructure to be more highly distributed than ever before. Propeller Insights’ survey data found that IT execs identified difficulty in managing apps across multiple edge locations, and an inability to accommodate the IT infrastructure needed to host and operate at the edge, as the biggest business concerns about having apps at the edge.

Rather than managing edge deployments disparately across many different sites, using a dated, siloed approach and a bevy of different tools, DevOps and NetOps teams want to be able to operate apps in a more unified, cloud-like fashion regardless of where the workloads are actually located. To meet this need, there’s an ongoing effort to “cloudify the edge.” The cloudified edge will enable organizations to operate and manage apps and data across different locations and infrastructures, including providing consistent and integrated compute, storage, networking, and security resources for distributed edge locations. The effort to cloudify the edge is part of the larger evolution toward the distributed cloud.

Scott Matteson: What are the advantages for users and administrators?

Ankur Singla: Major issues with security, connectivity, reliability, and performance, and inconsistent service offerings across providers make it difficult for users and administrators to efficiently deploy and operate multi-cloud deployments. Meanwhile, edge deployments also face serious challenges, with managing infrastructure and apps across numerous edge sites posing potential barriers to success. Additionally, IT teams are concerned they lack the resources, both workforce and technical, to keep edge applications and infrastructure updated long-term.

SEE: Cloud computing: Microsoft signs new discount Azure deal with UK government (TechRepublic)

To address these concerns, there’s a major market need for enterprises to bring a consistent cloud operational environment to wherever they are running their applications. The distributed cloud bridges those gaps and enables organizations to easily operate apps via a cloud-native environment.

Bindu Sundaresan:

  • Speed and location: Such a model enables faster, more responsive delivery of certain applications, with minimal latency even when transferring bulk data.
  • Regulatory compliance: With emerging regional compliance regulations like GDPR and CCPA, distributed cloud environments can ensure that data does not leave the user’s country.
  • Reduced outages: A distributed cloud model can help mitigate the large-scale outages that can tarnish an organization’s reputation.

Scott Matteson: What are the security implications?

Ankur Singla: While there are many service providers that connect and secure consumers to applications (e.g., Akamai and Cloudflare) and employees to applications (e.g., Zscaler and Netskope), there isn’t a service provider that delivers on the need for application-to-application connectivity and security, which is critical for distributed environments.

While cloud providers offer plenty of tools to deal with these problems, integrating and maintaining those tools is not easy. Securing apps and data in a heterogeneous environment (edge, multiple public clouds, and private clouds) requires organizations to address a multi-layer security problem (identity, authentication and authorization, secrets, and key management) using multiple sets of tools and vendors. Unfortunately, this approach is prohibitively expensive. Moreover, the evolving security landscape and new technologies make it even harder for these teams, as they don’t have the necessary expertise or the bandwidth to keep up with all the changes.

The distributed cloud enables organizations to easily implement consistent networking, reliability, and security services, including API routing, load balancing, security, and network routing, across disparate application clusters and locations, with a consistent configuration and operational model. Using this approach, organizations are able to deploy high-performance, reliable, and secure connectivity between multiple cloud provider locations and in resource-constrained edge locations.
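To illustrate the multi-layer problem Singla lists (identity, authentication and authorization, secrets, and key management), here is a hypothetical sketch of those checks applied as one consistent pipeline to every app-to-app request, regardless of where the workload runs. All of the names here are illustrative, not a real platform's API:

```python
# Hypothetical sketch: layered security checks run in a fixed order for
# every service-to-service request before any secret material is released.
def verify_identity(request):  # does the caller present a workload identity?
    return request.get("service_id") is not None

def authenticate(request):     # is the caller who it claims to be (e.g. mTLS)?
    return request.get("mtls_cert") == "valid"

def authorize(request):        # is this caller allowed to reach this target?
    return request.get("target") in request.get("allowed", [])

def fetch_secret(request):     # stand-in for a central secrets/KMS lookup
    return {"db_password": "<from-central-kms>"}

SECURITY_LAYERS = [verify_identity, authenticate, authorize]

def handle(request: dict):
    # Every layer must pass; a failure at any layer denies the request.
    for layer in SECURITY_LAYERS:
        if not layer(request):
            raise PermissionError(f"denied at {layer.__name__}")
    return fetch_secret(request)

req = {"service_id": "cart", "mtls_cert": "valid",
       "target": "orders-db", "allowed": ["orders-db"]}
print(handle(req))
```

The point of a distributed cloud platform, in this framing, is that the same pipeline and configuration apply at every cluster and location rather than being re-implemented per provider.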

SEE: The state of the cloud in 2020: Public, multicloud dominates but waste spending is high (TechRepublic)

Bindu Sundaresan: In legacy cloud architecture, there was one way in and one way out of the network. But with the distributed model, there are now many network breakouts, sometimes even hundreds or thousands across a wide geographical area. Each of these connections to the internet represents an avenue for attack and must therefore be secured.

Cloud-based Secure Web Gateways (SWGs) offer administrators a way to apply unified security policies across all of their users, virtually anywhere they conduct business, and provide centralized visibility so they can stay informed about the activity taking place on their network. Together with SD-WAN, this overcomes the traditional woe of users suffering lower performance and weaker security simply because they are farther from the network’s core.

SWGs help to protect users against web-based threats by restricting what content can be accessed. They also offer a solution for organizations to perform deep packet inspection of encrypted web traffic with minimal effect on network performance. A cloud-based architecture also scales more easily because capacity can be added without the need to buy expensive security equipment as businesses expand, whether that be adding new offices, integrating company acquisitions, or conducting mergers.
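A minimal sketch of the unified, cloud-delivered policy an SWG enforces might look like the following, where the same rules follow every user wherever they connect. The categories, hostnames, and lookup table are invented for illustration:

```python
# Hypothetical sketch of SWG content filtering: one policy, evaluated
# identically for users at headquarters, at a branch, or at home.
BLOCKED_CATEGORIES = {"malware", "phishing"}
CATEGORY_LOOKUP = {
    "intranet.example.com": "business",
    "bad-download.example.net": "malware",
}

def swg_decision(user: str, url_host: str) -> str:
    category = CATEGORY_LOOKUP.get(url_host, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        return f"BLOCK {user} -> {url_host} ({category})"
    return f"ALLOW {user} -> {url_host} ({category})"

# The same policy applies regardless of where the user connects from.
print(swg_decision("alice@branch", "intranet.example.com"))
print(swg_decision("bob@home", "bad-download.example.net"))
```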

Scott Matteson: What kind of organizations should prioritize distributed cloud initiatives this year?

Ankur Singla: Managing, supporting, and securing the increasing number of applications deployed in multiple clouds and at the edge is a hefty challenge, but the distributed cloud can address those issues. Early adopters of this new approach include organizations in markets like financial services, telecom, e-commerce, retail, healthcare, manufacturing, and more. For example, with the full-scale introduction of 5G, it is expected that the spread of IoT services will connect everything from automobiles to home appliances to industrial machines. As IoT requires large amounts of data with minimal latency, there is a strong need for distributed platforms that can provide cloud-native computing, networking, and security at the original data source.

While the fully distributed cloud won’t arrive for a few years, critical milestones on that journey will start to unfold in 2020. Specifically, any organization that is planning to leverage either multi-cloud or edge deployments, or is already doing so, should start taking initiatives to support the distributed cloud, for example by implementing platforms like Volterra. Within the next two to three years, we will see all the pieces come together, bringing order to chaos and giving birth to the distributed cloud.

Scott Matteson: How is the current COVID-19 pandemic impacting this whole picture? I would imagine distributed cloud would become even more of a prized commodity now that so many workplaces are embracing remote connectivity?

SEE: Coronavirus: What business pros need to know (TechRepublic)

Ankur Singla: That’s exactly right. Corporations have set SLAs with their service providers that guarantee a high standard of performance, reliability, security, and so on for anything IT-related that’s done within their offices or data centers. For example, a typical office worker is guaranteed a certain level of bandwidth by the company’s telecom provider when connecting from corporate Wi-Fi across the wide-area network. Similarly, a developer is guaranteed a level of speed and security when connecting from the corporate data center network to a public cloud. But all those SLAs and guarantees go out the window when people are working from home on consumer networks, which were never built to handle this much traffic.

A distributed cloud approach helps alleviate this issue by allowing remote workers to leverage a global network of disaggregated cloud resources located closer to them, giving them much better performance and, in turn, better productivity.
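The performance gain Singla describes essentially comes down to steering each worker to the nearest point of presence instead of one faraway data center. A minimal sketch, with made-up latency numbers:

```python
# Hypothetical sketch: choose the lowest-latency point of presence for a
# remote worker. Site names and round-trip times are example values only.
POPS_MS = {
    "us-central-dc": 95,   # the lone centralized data center
    "metro-pop-east": 18,  # nearby distributed-cloud locations
    "metro-pop-west": 40,
}

def nearest_pop(latencies_ms: dict) -> str:
    # Pick the site with the smallest measured round-trip time.
    return min(latencies_ms, key=latencies_ms.get)

best = nearest_pop(POPS_MS)
print(f"route worker to {best} ({POPS_MS[best]} ms instead of "
      f"{POPS_MS['us-central-dc']} ms)")
```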

Bindu Sundaresan: As organizations shift to a remote business environment, we can expect an increase in companies adopting a distributed cloud model. Organizations are motivated by the need for digital transformation amid COVID-19 conditions, and the distributed cloud model will help them arrive there faster. However, it’s also important to understand the security implications that such adoption can bring. The future environment will look more dynamic and adaptable with a prioritization of employee productivity.

Through the AT&T Alien Labs Open Threat Exchange, we have seen reports of COVID-19-related malicious activity globally, from established attack groups as well as opportunistic attackers. In the last few weeks, US-targeted attacks have increased significantly, so security will need to remain a priority for organizations utilizing a distributed cloud model.

Ivan Fioravanti, CTO of CoreView, a Microsoft 365 solutions provider, also weighed in:

“Data center infrastructure has shown its limits during the COVID-19 emergency, when the number of remote workers increased in a matter of days and centralized VPN access was not able to scale properly. Usually, companies have direct large-bandwidth connections to their main data centers from their main office locations, but this proved to be useless once the pandemic quarantine went into place. A multitude of users started to access systems from different locations, connection speeds, and devices, disrupting the standard usage that a data center is designed for.

“Cloud usage will increase, hybrid cloud will become the new norm for infrastructure and for services. 100 percent of XaaS companies will succeed, 100 percent of on-premise companies will fail. Federation of services to enable more collaboration in and out of companies is a must to increase productivity and revenues. This can only be achieved at scale with public infrastructure and companies dedicated to ensuring this critical service is available to other organizations. Data centers will be seen like electricity generators: In the past, they were the main source of energy but will serve as the backup in case of failure of the new main source, the public electric company.”

Colin Metcalfe, SOC operations manager for Security as a Service provider Cygilant, had this to add to the topic:

“Like all things in life, there will be winners and losers in all of this. I expect to see an increase in large-scale ‘Hyperscale’ data centers as more companies adopt the cloud for their environments.

“However, this will come at a cost for the smaller-scale offerings, as the reduction in their customer base and revenue makes it less viable for them to continue.

SEE: IT directors plan to protect cloud budgets and consolidate vendors during downturn (TechRepublic)

“This will push the real battle into the middle ground: the medium-sized data centers, located inside city limits, which cannot scale to match their larger competitors. I expect them to stave off their fate for a while, as they can adapt the space they have to accommodate greater cloud infrastructure in place of customer equipment, which will eventually dwindle away.

“This middle-ground battle will be fought hard and won by those who can adapt the quickest and offer their customers the most flexibility in service and support.”

Finally, Dave Mariani, co-founder and chief strategy officer at data virtualization provider AtScale, said:

“When enterprise IT teams deploy new systems within their organizations, those systems are most likely going to be in the cloud. However, cloud transformation is a challenging multi-year process for even the most nimble of enterprises, and there are doubts about whether all data will live in the cloud for the foreseeable future. This can be a nightmare for IT teams, who have to bridge the gap between the legacy systems staying in place and the newer cloud technologies being deployed, with data siloed and fragmented across both.

“The challenge for the enterprise in this very common scenario becomes how to marry a massive amount of siloed data without re-engineering their entire IT architecture–and at the same time, give the right data to the right people–fast, so they can make decisions that drive the business. To solve this challenge, businesses need a solution that intelligently virtualizes all their siloed data into a single, unified data view from which a variety of business intelligence (BI) tools can get fast, consistent answers.

“Data virtualization has existed for many years, but only recently have we developed new capabilities that enable companies to leverage disparate legacy and modern data across the hybrid cloud, bringing it together for BI teams and the greater business.”
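A minimal sketch of the data-virtualization idea Mariani describes: one logical view answers queries across siloed backends (say, a legacy warehouse and a cloud store), so BI tools see a single source without the data being moved. The connectors and data here are stand-ins, not any product's API:

```python
# Hypothetical sketch of data virtualization: a unified view that fans a
# query out to every silo and aggregates the results in place.
LEGACY_WAREHOUSE = [{"region": "east", "sales": 120}]
CLOUD_STORE      = [{"region": "west", "sales": 200}]

class VirtualView:
    def __init__(self, *sources):
        self.sources = sources

    def total(self, field: str) -> int:
        # Query each silo and merge, without re-engineering or copying
        # the underlying data stores.
        return sum(row[field] for source in self.sources for row in source)

view = VirtualView(LEGACY_WAREHOUSE, CLOUD_STORE)
print(view.total("sales"))  # one consistent answer drawn from both silos
```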


Image: gorodenkoff, Getty Images/iStockphoto