Chapter 9: Deployment Considerations
Automation
While the use of deep learning techniques can help automate feature engineering, a separate line of research has examined how to perform automated machine learning (AutoML). This work uses optimization techniques to automate not only feature engineering and selection, but also model architecture search and hyperparameter optimization. These tools have recently been applied to model compression, image classification, and even bank failure prediction. To our knowledge, we are the first to explore the combination of AutoML and network traffic classification.
One implementation of AutoML is AutoGluon-Tabular, which performs feature selection, model selection, and hyperparameter optimization by searching through a set of base models. These models include deep neural networks, tree-based methods such as random forests, non-parametric methods such as k-nearest neighbors, and gradient-boosted trees. Beyond searching the individual models, AutoGluon-Tabular builds weighted ensembles of the base models, achieving higher performance than other AutoML tools in less overall training time.
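To make the ensembling idea concrete, the sketch below builds a weighted ensemble over heterogeneous base models using scikit-learn's VotingClassifier. This illustrates the general technique only, not AutoGluon's internal stacking implementation; the dataset, base models, and weights are synthetic placeholders.

```python
# Sketch of weighted model ensembling over heterogeneous base models.
# This is NOT AutoGluon's actual algorithm; it only illustrates the idea of
# combining base models of different families with learned or chosen weights.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base models from different families, echoing AutoGluon's search space.
base_models = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
]

# A weighted soft vote over base-model probabilities. Here the weights are
# fixed by hand; AutoGluon chooses its ensemble weights on validation data.
ensemble = VotingClassifier(base_models, voting="soft", weights=[2, 1, 1])
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```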
AutoGluon-Tabular can perform feature selection, model search, and hyperparameter optimization for all eight problems we evaluate. AutoGluon has been shown to outperform many other public AutoML tools given the same data, and it is open source. While many AutoML tools search a set of models and corresponding hyperparameters, AutoGluon achieves higher performance by ensembling multiple single models that perform well. AutoGluon exposes a presets parameter that trades training speed and model size against the overall predictive quality of the models trained. One such setting, high_quality_fast_inference_only_refit, produces models with high predictive accuracy and fast inference. The best_quality preset can produce models with slightly higher predictive accuracy, but at the cost of 10x-200x slower inference and 10x-200x higher disk usage. The former setting therefore makes sense in many networking scenarios where inference time is an important metric. The presets parameter does not configure the training of a single model, but rather the optimization of a set of models for a given task.
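As a minimal sketch of how this looks in practice, the snippet below trains a TabularPredictor on a hypothetical CSV of labeled flow features. The file names and label column are placeholders, and preset names have varied across AutoGluon versions, so treat the string below as illustrative rather than canonical.

```python
# Minimal sketch of training an AutoGluon-Tabular predictor on flow records.
# The CSV paths and the "application" label column are hypothetical.
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("flows_train.csv")  # hypothetical flow features

# The presets argument trades predictive quality against inference speed and
# disk usage; preset names differ between AutoGluon releases.
predictor = TabularPredictor(label="application").fit(
    train_data,
    presets="high_quality_fast_inference_only_refit",
)

test_data = TabularDataset("flows_test.csv")
print(predictor.evaluate(test_data))      # aggregate metrics on held-out flows
print(predictor.leaderboard(test_data))   # per-model scores within the ensemble
```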
Model Drift
Taurus
…
Explainability
While deep learning models have achieved substantial improvements in performance for a variety of tasks, it has proven challenging to explain why these models behave as they do, especially to laypersons.
Improvements in explainability are essential to the continued development of machine learning. Accountability, fairness, trust, and reliability are all easier to achieve if models are more easily interrogated and interpreted by human developers and users.
Since explainability is a complex idea (what should be explainable? To whom?), it helps to break it down into several possible goals for model development, each illustrated here with a networking example:
First, we might prefer models that are decomposable, i.e., readily divided into smaller, easier-to-understand components. If each of these components can be explained in a straightforward manner, the behavior of the entire model may become more readily understood. A traffic classifier built from per-feature rules (e.g., thresholds on packet sizes and inter-arrival times) is decomposable in this sense.
Second, we might prefer models that are simulatable, i.e., models that allow a human to step through the processes employed by the model (in a feasible amount of time) and achieve a process-based understanding of the model's inner workings. A shallow decision tree over flow features, for example, can be traced by hand from root to leaf for any given flow.
Third, we might prefer models that are transparent, i.e., models that provide guarantees about their behavior and how variations in inputs and parameters will affect outputs. A linear model predicting, say, flow throughput is transparent in this sense: each coefficient states exactly how a change in one input feature shifts the prediction.
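As a hedged illustration of the first two criteria in a networking setting, the sketch below fits a shallow decision tree to synthetic flow features and prints its rules: a human can decompose the tree into individual splits and simulate its decision path by hand. The feature names and labeling rule are invented for the example.

```python
# Illustration of a decomposable, simulatable model for flow classification:
# a shallow decision tree whose rules can be read and stepped through by hand.
# All data, feature names, and labels here are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic flow features: mean packet size (bytes), mean inter-arrival (ms).
X = rng.uniform([0, 0], [1500, 100], size=(500, 2))
# Hypothetical labeling rule: large packets arriving in quick succession are
# "bulk transfer" (1); everything else is "interactive" (0).
y = ((X[:, 0] > 1000) & (X[:, 1] < 20)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Print the learned rules as human-readable if/else splits.
print(export_text(tree, feature_names=["mean_pkt_size", "mean_iat_ms"]))
```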
Unfortunately, the deep learning models that currently perform best on many complex tasks fare poorly on all three criteria; indeed, they are among the least interpretable computer programs ever developed. Whether this will change in the near or distant future is a matter for the ML research community.
Fortunately, many ML models that can be applied effectively to network data are substantially more explainable. Shallow ML methods (especially decision trees/forests and support vector machines) are quite explainable and can rival the performance of neural networks on many networking tasks with lower-dimensional data (Chapter SUPERVISED). Even neural networks of modest size can often be visualized and instrumented such that the effects of input and parameter changes are more easily understood.
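As one small example of such interrogation, the sketch below reads the feature importances off a trained random forest. The flow-feature names are hypothetical, and importances are only a coarse explanation, but they show how readily shallow models can be inspected.

```python
# Sketch: inspecting a random forest's feature importances as a simple form
# of model interrogation. Feature names are hypothetical flow statistics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
names = ["pkt_count", "bytes", "duration_s", "mean_iat_ms"]  # hypothetical

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Rank features by their learned importance, highest first.
for name, importance in sorted(
    zip(names, forest.feature_importances_), key=lambda p: -p[1]
):
    print(f"{name}: {importance:.3f}")
```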