
AlphaGo's landmark victory over world champion Lee Se-dol at the ancient Chinese board game Go fascinated the entire world. It was a watershed moment for Artificial Intelligence (AI), because Go was considered the "holy grail of AI gaming," according to BBC News. Go is far more complex than chess, which meant AlphaGo could not rely on the "brute force" math that IBM's Deep Blue supercomputer used to defeat chess grandmaster Garry Kasparov in 1997.
DeepMind, the British artificial intelligence company now owned by Google that developed the AlphaGo program, says there are more possible positions in Go than atoms in the universe. A Go player typically has a choice of about 200 moves, while a chess player has about 20. How, then, did AlphaGo beat Lee Se-dol?
The answer is self-learning. Deep Blue prevailed at chess by brute-force calculation of vast numbers of possible moves and outcomes. With Go's near-infinite space of possible moves, AlphaGo had to win through predictive analysis built on iterative learning.
DeepMind head Demis Hassabis calls this reinforcement learning: learning by trial and error, improving from mistakes to make better decisions. AlphaGo was trained on huge datasets of past Go matches, which helped it predict strong moves. It also played itself millions of times, improving after every win and loss.
This self-learning is critical to SDN as well. The SDN controller is the brain of the network. Like the human brain, controllers need powerful computational and predictive capabilities to adapt to ever-changing conditions. The network should not only react to failures but also adapt to unexpected demands, making decisions based on current and historical data.
Thus, the controller should "think," enabling networks that are self-healing and self-optimizing. If controllers are given reinforcement learning capabilities, they can teach themselves from the history of network behavior. With each decision, the controller trains and improves itself, eventually approaching something analogous to human thinking. With that capability, it becomes possible to anticipate most network conditions and deliver truly proactive network management.
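To make the idea concrete, here is a minimal, hypothetical sketch of trial-and-error learning applied to path selection: an epsilon-greedy scheme that picks among candidate paths and updates its value estimates from observed latency. The path names, latency figures, and reward model are illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical sketch: a controller "learns by trial and error" which path to
# prefer, improving its estimate after every outcome.
import random

CANDIDATE_PATHS = ["path_a", "path_b", "path_c"]    # illustrative path names
value = {p: 0.0 for p in CANDIDATE_PATHS}           # learned value per path
count = {p: 0 for p in CANDIDATE_PATHS}
EPSILON = 0.1                                       # exploration rate

def observe_latency(path):
    """Stand-in for real telemetry: returns a latency sample in ms."""
    base = {"path_a": 30, "path_b": 20, "path_c": 45}[path]
    return base + random.uniform(-5, 5)

for step in range(10_000):
    # Explore occasionally, otherwise exploit the best-known path.
    if random.random() < EPSILON:
        path = random.choice(CANDIDATE_PATHS)
    else:
        path = max(CANDIDATE_PATHS, key=lambda p: value[p])

    reward = -observe_latency(path)                 # lower latency = higher reward
    count[path] += 1
    # Incremental average: improve the estimate after every outcome,
    # the same "improve from every win and loss" idea described above.
    value[path] += (reward - value[path]) / count[path]

print("Learned preference:", max(value, key=value.get))
```

Over many iterations the estimates converge on the lowest-latency path, without the controller ever being told in advance which path is best.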
Self-learning networks can sound like a networking fairy tale, and like most fairy tales, there are hurdles along the way. Physical hardware failures are always a concern, and this kind of automation demands heavy computation, more CPUs, and more memory. More fundamentally, just as in Go, brute-force math is not enough for controllers to make our networks self-healing and self-optimizing: controllers also lack the right inputs for self-learning. As Brian Boyko wrote in a blog post last year:
“Real-time SDN analytics are critical to enabling engineers to make good decisions. They are also vital to allowing the network software itself to make good ‘decisions.’ If a link performs poorly, an SDN network can route around it – if it knows that the link is indeed performing poorly and what the next best route is. But if the information is incorrect or misleading, the computer will blithely go through its programming, making the ‘right’ decisions for the wrong scenario. Truth be told, a human being could also make the same mistake, given the same data, but computers have the ability to make billions of mistakes per second.”
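The point in the quote can be illustrated with a small, hypothetical sketch (the topology and latency figures are made up): a shortest-path computation whose link weights come from measured latency. If telemetry correctly reports a degraded link, the computation finds the next best route; if the telemetry is wrong, the same code confidently picks the wrong path.

```python
# Hypothetical illustration: link weights are measured latencies; a degraded
# link gets a higher weight and Dijkstra finds the next best route.
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over a dict-of-dicts adjacency map {node: {nbr: cost}}."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Measured latencies (ms) on each link -- the telemetry the decision rests on.
topology = {
    "A": {"B": 10, "C": 25},
    "B": {"A": 10, "D": 10},
    "C": {"A": 25, "D": 20},
    "D": {"B": 10, "C": 20},
}
print(shortest_path(topology, "A", "D"))   # prefers A-B-D while B-D is healthy

# If telemetry shows the B-D link degrading, the weight rises and the computed
# route changes; if that telemetry were wrong or misleading, the "right"
# decision would be made for the wrong scenario.
topology["B"]["D"] = topology["D"]["B"] = 200
print(shortest_path(topology, "A", "D"))   # now routes A-C-D around the bad link
```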
This is where Packet Design’s SDN Platform comes into play, providing the critical analytics-based SDN orchestration layer between the physical and virtual network infrastructure and the applications that need network resources. This layer uses Packet Design’s unique real-time telemetry, analytics, path computation and optimization, and policy to enable intelligent provisioning of network services and accelerate service activation.
By capturing all IGP/BGP routing events, traffic flows, and the performance of key services, the platform builds historical models. These are used to calculate future states under various conditions and business policies. Predictive analytics give operators accurate impact assessments of application requests for network resources, along with the best way to provision them. If approved, the changes can be automated via an SDN controller with a single mouse click. Real-time telemetry provides immediate, closed-loop feedback with no gaps in visibility.
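As a rough, hypothetical sketch of that workflow (the names, the naive forecast, and the 80 percent utilization policy are illustrative assumptions, not the platform's actual API), the loop might look like this: collected telemetry feeds a historical model, a provisioning request is assessed against policy, and an approved change is handed to a controller.

```python
# Hypothetical closed-loop sketch: telemetry history -> naive forecast ->
# policy check -> automated provisioning via a controller stand-in.
from statistics import mean

HISTORY_WINDOW = 12          # last 12 utilization samples per link
POLICY_MAX_UTIL = 0.8        # business policy: keep links under 80% utilized

link_history = {             # per-link utilization samples from telemetry
    "edge1-core1": [0.42, 0.45, 0.44, 0.47, 0.50, 0.52],
    "edge1-core2": [0.71, 0.73, 0.74, 0.76, 0.78, 0.79],
}

def forecast_utilization(link):
    """Naive forecast: recent average plus the recent per-sample trend."""
    samples = link_history[link][-HISTORY_WINDOW:]
    trend = (samples[-1] - samples[0]) / max(len(samples) - 1, 1)
    return mean(samples) + trend * 3      # look three samples ahead

def assess_request(link, added_util):
    """Impact assessment: would the request breach policy on this link?"""
    predicted = forecast_utilization(link) + added_util
    return predicted, predicted <= POLICY_MAX_UTIL

def provision(link, added_util, controller):
    predicted, ok = assess_request(link, added_util)
    if not ok:
        print(f"Reject: {link} predicted at {predicted:.0%}, over policy")
        return False
    controller.push_route(link, added_util)   # hypothetical controller call
    print(f"Provisioned {added_util:.0%} more traffic on {link} "
          f"(predicted {predicted:.0%})")
    return True

class FakeController:
    """Stand-in for the SDN controller the platform would drive."""
    def push_route(self, link, util):
        pass

ctl = FakeController()
provision("edge1-core1", 0.10, ctl)   # fits under the 80% ceiling
provision("edge1-core2", 0.10, ctl)   # predicted to breach policy, rejected
```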
Machine learning is key to both SDN and AI. The true potential of self-healing, self-optimizing networks can only be realized with a closed-loop management lifecycle of real-time network telemetry collection, analytics and optimization, and automated provisioning of network services. The industry is not there yet, but we at Packet Design are doing our part to make sure SDN controllers have the intelligence needed to make SDN viable (and valuable) for network operators worldwide.