Tom Keane’s Comprehensive Guide to Edge Computing Solutions for Businesses


The growing importance of performant edge devices and systems – and the strategies and infrastructures that help them operate – will soon make them central, indispensable components of the operations of businesses, agencies, and industries around the world. Part I of this series laid out the case for business leaders to incorporate edge solutions where it makes sense to do so, based on the guidance and experience of cloud computing pioneer Tom Keane. In the sections below, Part II breaks down what you must consider when formulating your edge strategy and the key considerations to bear in mind before finalizing an edge computing solution for a specific outcome or business use case.

Tom Keane Explains Important Considerations for Your Edge Needs

Developers and IT decision-makers should ask themselves a number of important questions before diving headlong into any edge computing initiative. Ideally, where should workloads be located? Is a move to the edge warranted? What edge computing models fit the business use cases and challenges at hand? Are there any mission-critical factors that must be addressed before deployment? Some of these questions – such as those regarding the business justification and strategic alignment with long-term business goals – must be answered by leaders such as C-level execs and department or regional heads. However, according to Tom Keane, when it comes to where the rubber hits the road, few have the level of insight, understanding, and operational experience with hands-on deployments and project rollouts that development teams handling business-critical infrastructure do.

That being said, Tom Keane recommends that tech and business leaders consider the following factors when developing their edge strategy.

Does your business have a need for a move to the edge?

Without a well-defined business need, there may be no reason to change how current operations and infrastructures are set up and run. Even when a move to the edge does not introduce anything new, such as a new product or service, it can still unlock new capabilities – leaner operations, shorter time to market, and increased efficiency – and these can be formulated as the desired end goals of the move. Desired business outcomes can therefore range from increased customer satisfaction and cost savings to enhanced agility, greater operational resilience, amplified worker productivity, and so on.


The Types of Workloads in Question

The resources that perform specific business functions are called workloads. Common examples include development workloads, testing workloads, application hosting workloads, data sharing workloads, data backup and storage workloads, and business services workloads. Some of these are well suited to being provided and managed on-premises, while others fit well in outsourced, off-premises, and/or cloud models. Workloads that require real-time or near real-time responses and quick decision-making work better as edge workloads; if they do not, it may be more cost-effective to keep them in a centralized data center or the cloud. Furthermore, you do not necessarily have to move an entire workload to the edge. Instead, some workloads can be redesigned or refactored so that certain functions move to the edge while others stay onsite or in the cloud.
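The placement logic described above can be sketched as a simple decision helper. This is purely illustrative – the workload attributes, latency figures, and thresholds are assumptions, not values from any real deployment:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # latency the workload can tolerate
    needs_realtime: bool    # requires real-time or near-real-time decisions

def suggest_placement(w: Workload, cloud_latency_ms: float = 80.0) -> str:
    """Suggest edge vs. cloud placement based on latency tolerance.

    The 80 ms cloud round-trip figure is an illustrative assumption.
    """
    if w.needs_realtime or w.max_latency_ms < cloud_latency_ms:
        return "edge"
    return "cloud"

# Example: a real-time control loop vs. a nightly backup job
control = Workload("sensor-control", max_latency_ms=15, needs_realtime=True)
backup = Workload("nightly-backup", max_latency_ms=60_000, needs_realtime=False)
print(suggest_placement(control))  # edge
print(suggest_placement(backup))   # cloud
```

In practice, a real placement decision would weigh many more factors (cost, data gravity, compliance), but even a simple rule like this can help triage a portfolio of workloads.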

Think about the performance requirements.

Every application has expected or required performance levels. Some KPIs may be hard requirements, while others may be good-to-haves. For example, one application may require very low latency, while another may require significant computing resources and storage capacity. In general, edge computing lowers latency by moving processing closer to the consumption point. However, for applications that require substantial computing resources, it may be more cost-effective to move away from (or remain away from) the consumption point, i.e., in a non-edge location – but this only works if latency is not an issue. These are the kinds of considerations that must be addressed before an edge strategy is finalized and/or the relevant network topology is built out.

What are the capacity constraints?

Different applications have different resource requirements, and they also have upper limits on capacity – such as maximum transactions per second, bits per second, frames per second, operating temperature, distance, etc. – within which they can operate effectively. Tom Keane says it is critical to consider both your current and future needs when it comes to the type and quantity of resources and the limits within which the desired system will be expected to function.
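The point about planning for current as well as future needs can be made concrete with a small headroom calculation. All numbers here are hypothetical, chosen only to illustrate the idea:

```python
def capacity_utilization(max_tps: float, current_tps: float,
                         annual_growth: float, years: int) -> float:
    """Return projected utilization (0..1) after compound load growth.

    Values above 1.0 mean the projected load would exceed the system's
    rated maximum transactions per second (TPS).
    """
    projected_tps = current_tps * (1 + annual_growth) ** years
    return projected_tps / max_tps

# A device rated for 500 TPS, currently at 200 TPS, with load growing 30%/year:
util = capacity_utilization(max_tps=500, current_tps=200,
                            annual_growth=0.30, years=3)
print(f"{util:.2f}")  # ~0.88 -> still within capacity after 3 years
```

The same check run over a five-year horizon would show the device exceeding its limit, which is exactly the kind of forward-looking constraint Keane recommends evaluating before committing to hardware.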

What are your connectivity needs?

Edge computing devices and applications usually connect to multiple systems, so stable and reliable connectivity is essential for unified communications and seamless operations. Integrated connectivity models can provide plug-and-play capabilities without depending on the user's connectivity channels, and customers remain free to use private or public networks as needed. In the same vein, Tom Keane says that wireless options can eliminate additional dependencies (think network ports and legacy interfaces), and if a new connectivity model with shorter paths becomes available, migrating to it is straightforward.

Physical factors matter.

Tom Keane says that many physical factors – space and power needs, cooling requirements, environmental conditions, and others – play an important role in determining which edge devices and strategies will work in a given scenario. As capacity needs grow, the physical requirements of scaling should also be considered. Even factors such as aesthetics and noise, which are sometimes overlooked, should be kept in mind, since they can prevent systems from being deployed in the ideal location (for example, very near the consumption point).

What are the operational and management factors?

In today's agile DevOps environment, effective development and management capabilities – alongside regular updates, application validation, and maintenance – are critical for the long-term success of IT infrastructure. According to Tom Keane, when companies use a distributed application architecture in which certain services live at the edge and others in a cloud region, developing services in the cloud and moving key workloads to the edge is highly favorable. The ability to update and improve services without hindrances such as "truck rolls" (dispatching a technician to a customer site to solve a problem) can vastly improve agility, response times, and overall customer experiences. Tom Keane says that this type of operational and managerial efficiency is often a key differentiator for edge computing services, especially when they are offered as managed services.

Consider the security and compliance requirements.

Edge computing resources send, receive, and process data from many different sensors and backend systems. Security and compliance with applicable data laws and best practices are important for all data and communications, both in transit and at rest. Ensuring the security of edge devices can be challenging, both from the physical-access perspective and in terms of interface security. In regulated industries such as healthcare – for which Tom Keane and Microsoft rolled out the Azure Cloud for Healthcare solution in 2020 – complying with regulations such as HIPAA requires careful and thoughtful consideration of physical and digital security.

Consider the business model.

How will the user maximize their edge investment and drive ROI? There are many choices available when it comes to edge computing solutions: build, buy, or partner. Users can opt for completely managed services or work with specialist vendors for infrastructure, platform, and software development to create an entirely custom solution. Based on his experience with Microsoft Azure, Tom Keane says that a business's new edge initiative should be developed based on the customer's preferences and limitations (i.e., budgets) in terms of capital and operating expenses. There are also many usage-based cost models that make new edge projects possible without substantial upfront investments.

Determine the total cost of ownership.

There are different ways to implement edge solutions and different models from which to choose, each with its own cost structure. This is why calculating the Total Cost of Ownership (TCO) is so important. For example, while an edge gateway device may be expensive upfront, it may still be cheaper than performing repeated truck rolls – especially if the business expands and scaling the old truck-roll approach becomes harder than deploying and maintaining an otherwise expensive edge device.

By calculating the TCO and accounting for all initial investments, operational and management costs, service fees, business transformation costs, and other edge-related costs, not to mention attendant expenses such as power, cooling, and physical security needs, users can build an ROI model that works for them based on the context and use cases at hand.
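A minimal sketch of the gateway-versus-truck-roll comparison might look like the following. The dollar figures and visit counts are entirely hypothetical, included only to show how a simple TCO model is built:

```python
def tco(upfront: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over a planning horizon.

    Kept deliberately simple: no discounting, no transformation costs,
    no power/cooling/security line items - a real model would add these.
    """
    return upfront + annual_opex * years

# Hypothetical figures over a 5-year horizon:
edge_gateway = tco(upfront=5_000, annual_opex=800, years=5)   # device + upkeep
truck_rolls = tco(upfront=0, annual_opex=6 * 350, years=5)    # 6 visits/yr at $350
print(edge_gateway, truck_rolls)  # 9000.0 vs 10500.0 -> gateway wins over 5 years
```

Even this toy model captures the article's point: an expensive device can beat a "cheap" manual process once recurring costs are tallied over the full ownership period.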

Based on Tom Keane’s experience, all of these factors are important when selecting an edge computing model or strategy. All of them also have a direct impact on the user’s cost-benefit analysis. However, by carefully considering each factor, business and tech leaders will be better positioned to choose the edge strategy, model, devices, and capabilities that will help them achieve quantifiable business goals.

Tom Keane on Choosing the Right Edge Solutions

Tom Keane says that once you know your edge needs, where you want to go, and what you expect your solution to do, you can narrow down the choices by looking at the following.

Ease of deployment and management matters.

Not all edge devices with the same capabilities are equally easy to deploy and manage. Tom Keane says that your goal should be to install the ideal edge system quickly and have it integrate as seamlessly as possible with your present systems. The solution should also operate effectively and independently while minimizing maintenance needs. Some solutions provide these capabilities better than others, so assessing the ability of different vendors, applications, devices, and platforms to do what you need is critical.

Are redundancies and self-diagnostics available?

Having redundancies in place will help create a more rugged and robust edge solution, one that will continually support the availability of critical applications. Self-diagnostics help by making the edge solution optimize itself by, for example, shifting resources or loads if needed in response to flags, incidents, or CPU, memory, or disk failures – all without disrupting production. These capabilities ensure the continuity of operations and system uptime and availability.
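The redundancy behavior described above can be sketched as a simple routing helper: work goes to the primary node unless its self-diagnostics have flagged a fault, in which case a standby takes over. The node names and health flags are assumptions for illustration:

```python
def route_request(nodes: list[str], health: dict[str, bool]) -> str:
    """Route work to the first healthy node in priority order.

    `health` stands in for the output of self-diagnostics (CPU, memory,
    and disk checks); redundancy keeps the service available even when
    the primary node is flagged.
    """
    for node in nodes:
        if health.get(node, False):
            return node
    raise RuntimeError("no healthy node available")

# edge-1 has been flagged by its diagnostics, so traffic fails over:
health = {"edge-1": False, "edge-2": True}
print(route_request(["edge-1", "edge-2"], health))  # edge-2
```

Real edge platforms implement this with heartbeats, load balancers, and orchestration layers, but the principle – diagnose, then shift load without disrupting production – is the same.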

Choose an edge solution that is flexible, scalable, and can adapt to many different applications.

Edge systems work as parts of a larger whole, and a lack of common standards can limit the interoperability of systems, devices, applications, and other important infrastructure and resources – which in turn limits the usefulness of the edge solution. For example, using edge devices that support the Open Platform Communications (OPC) standard will give you a great deal of interoperability with other systems when it comes to exchanging data. Tom Keane says that you can potentially future-proof your edge solution – at least for as long as you need to maintain a competitive advantage while the industry continues to evolve and innovate – by baking interoperability into your edge solution from day one.

Choose an edge system that can multitask.

Some edge devices and solutions provide multiple benefits and perform multiple services at once – such as rapid data collection and processing, real-time analysis, and round-the-clock monitoring and reporting – across multiple applications. If an edge application helps you discover insights you hadn't considered, you can only benefit from those insights if you (and your infrastructure) can act on them. If you can reprogram or recalibrate deployed devices to take on new responsibilities or perform new actions as needed, without substantially changing your existing infrastructure, you can save significantly.

Final Thoughts

Tom Keane says it is no wonder there are so many edge devices, models, service providers, and infrastructure partners to choose from: the digital future that matters most today increasingly takes place at the edge – where people and devices interact, where business occurs, where actions and processes take place, and where interesting events and important changes need to be recorded, measured, and analyzed. It is an exciting time for everyone in the IoT and edge spaces, and tomorrow's edge leaders will be those who correctly assess and evaluate the present to build solutions, competitive advantages, and value for the future. By following Tom Keane's playbook above for honing the ideal edge strategy and then choosing the solution that works best for a given business use case, business and tech leaders can effectively fill that critical gap.