Transcription

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

Tien-Ju Yang1 [0000-0003-4728-0321], Andrew Howard2, Bo Chen2, Xiao Zhang2, Alec Go2, Mark Sandler2, Vivienne Sze1, and Hartwig Adam2

1 Massachusetts Institute of Technology
2 Google Inc.
{tjy,sze}@mit.edu

(This work was done while Tien-Ju Yang was an intern at Google.)

Abstract. This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7x speedup in measured inference latency with equal or higher accuracy on MobileNets (V1 & V2).

1 Introduction

Deep neural networks (DNNs or networks) have become an indispensable component of artificial intelligence, delivering near or super-human accuracy on common vision tasks such as image classification and object detection. However, DNN-based AI applications are typically too computationally intensive to be deployed on resource-constrained platforms, such as mobile phones. This hinders the enrichment of a large set of user experiences.

A significant amount of recent work on DNN design has focused on improving the efficiency of networks. However, the majority of works are based on optimizing the "indirect metrics", such as the number of multiply-accumulate operations (MACs) or the number of weights, as proxies for the resource consumption of a network. Although these indirect metrics are convenient to compute and integrate into the optimization framework, they may not be good approximations to the "direct metrics" that matter for real applications, such as latency and energy consumption.

Fig. 1. NetAdapt automatically adapts a pretrained network to a mobile platform given a resource budget. This algorithm is guided by the direct metrics for resource consumption. NetAdapt eliminates the requirement of platform-specific knowledge by using empirical measurements to evaluate the direct metrics. At each iteration, NetAdapt generates many network proposals and measures the proposals on the target platform. The measurements are used to guide NetAdapt to generate the next set of network proposals at the next iteration.

The relationship between an indirect metric and the corresponding direct metric can be highly non-linear and platform-dependent, as observed by [15, 25, 26]. In this work, we will also demonstrate empirically that a network with a smaller number of MACs can be slower when actually running on mobile devices; specifically, we will show that a network with 19% fewer MACs incurs 29% longer latency in practice (see Table 1).

There are two common approaches to designing efficient network architectures. The first is designing a single architecture with no regard to the underlying platform. It is hard for a single architecture to run optimally on all platforms due to the different platform characteristics. For example, the fastest architecture on a desktop GPU may not be the fastest one on a mobile CPU with the same accuracy. Moreover, there is little guarantee that the architecture could meet the resource budget (e.g., latency) on all platforms of interest. The second approach is manually crafting architectures for a given target platform based on the platform's characteristics. However, this approach requires deep knowledge about the implementation details of the platform, including the toolchains, the configuration and the hardware architecture, which are generally unavailable given the proprietary nature of hardware and the high complexity of modern systems. Furthermore, manually designing a different architecture for each platform can be taxing for researchers and engineers.

In this work, we propose a platform-aware algorithm, called NetAdapt, to address the aforementioned issues and facilitate platform-specific DNN deployment.

NetAdapt (Fig. 1) incorporates direct metrics in the optimization loop, so it does not suffer from the discrepancy between the indirect and direct metrics. The direct metrics are evaluated by the empirical measurements taken from the target platform. This enables the algorithm to support any platform without detailed knowledge of the platform itself, although such knowledge could still be incorporated into the algorithm to further improve results. In this paper, we use latency as the running example of a direct metric and resource to target, even though our algorithm is generalizable to other metrics or a combination of them (Sec. 4.3).

The network optimization of NetAdapt is carried out in an automatic way to gradually reduce the resource consumption of a pretrained network while maximizing the accuracy. The optimization runs iteratively until the resource budget is met. Through this design, NetAdapt can generate not only a network that meets the budget, but also a family of simplified networks with different trade-offs, which allows dynamic network selection and further study. Finally, instead of being a black box, NetAdapt is designed to be easy to interpret. For example, by studying the proposed network architectures and the corresponding empirical measurements, we can understand why a proposal is chosen, and this sheds light on how to improve the platform and network design.

The main contributions of this paper are:

– A framework that uses direct metrics when optimizing a pretrained network to meet a given resource budget. Empirical measurements are used to evaluate the direct metrics such that no platform-specific knowledge is required.
– An automated constrained network optimization algorithm that maximizes accuracy while satisfying the constraints (i.e., the predefined resource budget). The algorithm outperforms the state-of-the-art automatic network simplification algorithms by up to 1.7x in terms of reduction in measured inference latency while delivering equal or higher accuracy. Moreover, a family of simplified networks with different trade-offs will be generated to allow dynamic network selection and further study.
– Experiments that demonstrate the effectiveness of NetAdapt on different platforms and on real-time-class networks, such as the small MobileNetV1, which is more difficult to simplify than larger networks.

2 Related Work

There is a large body of work that aims to simplify DNNs. We refer the readers to [21] for a comprehensive survey, and summarize the main approaches below.

The most related works are pruning-based methods. [6, 14, 16] aim to remove individual redundant weights from DNNs. However, most platforms cannot fully take advantage of unstructured sparse filters [26]. Hu et al. [10] and Srinivas et al. [20] focus on removing entire filters instead of individual weights. The drawback of these methods is the requirement of manually choosing the compression rate for each layer. MorphNet [5] leverages sparsifying regularizers to automatically determine the layerwise compression rate. ADC [8] uses reinforcement learning to learn a policy for choosing the compression rates.

The crucial difference between all the aforementioned methods and ours is that they are not guided by the direct metrics, and thus may lead to sub-optimal performance, as we see in Sec. 4.3.

Energy-aware pruning [25] uses an energy model [24] and incorporates the estimated energy numbers into the pruning algorithm. However, this requires designing models to estimate the direct metrics of each target platform, which requires detailed knowledge of the platform including its hardware architecture [3] and the network-to-array mapping used in the toolchain [2]. NetAdapt does not have this requirement since it can directly use empirical measurements.

DNNs can also be simplified by approaches that involve directly designing efficient network architectures, decomposition, or quantization. MobileNets [9, 18] and ShuffleNets [27] provide efficient layer operations and reference architecture designs. Layer-decomposition-based algorithms [13, 23] exploit matrix decomposition to reduce the number of operations. Quantization [11, 12, 17] reduces the complexity by decreasing the computation accuracy. The proposed algorithm, NetAdapt, is complementary to these methods. For example, NetAdapt can adapt MobileNets to further push the frontier of efficient networks, as shown in Sec. 4, even though MobileNets are more compact and much harder to simplify than other larger networks, such as VGG [19].

3 Methodology: NetAdapt

We propose an algorithm, called NetAdapt, that will allow a user to automatically simplify a pretrained network to meet the resource budget of a platform while maximizing the accuracy. NetAdapt is guided by direct metrics for resource consumption, and the direct metrics are evaluated by using empirical measurements, thus eliminating the requirement of detailed platform-specific knowledge.

3.1 Problem Formulation

NetAdapt aims to solve the following non-convex constrained problem:

\[
\begin{aligned}
\underset{Net}{\text{maximize}} \quad & Acc(Net) \\
\text{subject to} \quad & Res_j(Net) \le Bud_j, \quad j = 1, \ldots, m,
\end{aligned}
\qquad (1)
\]

where Net is a simplified network from the initial pretrained network, Acc(.) computes the accuracy, Res_j(.) evaluates the direct metric for resource consumption of the j-th resource, and Bud_j is the budget of the j-th resource and the constraint on the optimization. The resource can be latency, energy, memory footprint, etc., or a combination of these metrics.

Based on an idea similar to progressive barrier methods [1], NetAdapt breaks this problem into the following series of easier problems and solves it iteratively:

\[
\begin{aligned}
\underset{Net_i}{\text{maximize}} \quad & Acc(Net_i) \\
\text{subject to} \quad & Res_j(Net_i) \le Res_j(Net_{i-1}) - \Delta R_{i,j}, \quad j = 1, \ldots, m,
\end{aligned}
\qquad (2)
\]

where Net_i is the network generated by the i-th iteration, and Net_0 is the initial pretrained network. As the number of iterations increases, the constraints (i.e., the current resource budget Res_j(Net_{i-1}) - ΔR_{i,j}) gradually become tighter. ΔR_{i,j}, which is larger than zero, indicates how much the constraint tightens for the j-th resource in the i-th iteration and can vary from iteration to iteration. This is referred to as the "resource reduction schedule", which is similar to the concept of a learning rate schedule. The algorithm terminates when Res_j(Net_{i-1}) - ΔR_{i,j} is equal to or smaller than Bud_j for every resource type. It outputs the final adapted network and can also generate a sequence of simplified networks (i.e., the highest-accuracy network from each iteration Net_1, ..., Net_i) to provide the efficient frontier of accuracy and resource consumption trade-offs.
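For illustration only (this snippet is not from the paper), the sequence of per-iteration latency constraints in Eq. 2 can be generated with a geometric-decay reduction schedule similar to the one used later in Sec. 4.1. The function name, the default values, and the assumption that each iteration exactly meets its constraint are all illustrative choices.

def constraint_schedule(initial_latency_ms, budget_ms, delta_r_0=0.5, decay=0.96):
    """Yield (iteration, constraint) pairs, where the constraint is the
    right-hand side of Eq. 2: Res(Net_{i-1}) - delta_R_i."""
    latency = initial_latency_ms
    delta_r = delta_r_0
    i = 0
    # Terminates only if the decayed reductions sum past the required reduction.
    while latency > budget_ms:
        i += 1
        constraint = latency - delta_r      # current resource budget for iteration i
        yield i, constraint
        # For illustration we assume the chosen proposal meets the constraint
        # exactly; in NetAdapt, Res(Net_i) is measured on the target platform.
        latency = constraint
        delta_r *= decay                    # the reduction schedule decays over time

# Example: adapt a 12.0 ms network to an 8.0 ms budget.
for i, con in constraint_schedule(12.0, 8.0):
    print(f"iteration {i}: latency constraint = {con:.2f} ms")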

Algorithm 1: NetAdapt

Input: Pretrained Network: Net_0 (with K CONV and FC layers), Resource Budget: Bud, Resource Reduction Schedule: ΔR_i
Output: Adapted Network Meeting the Resource Budget: N̂et

i = 0;
Res_i = TakeEmpiricalMeasurement(Net_i);
while Res_i > Bud do
    Con = Res_i - ΔR_i;
    for k from 1 to K do
        /* TakeEmpiricalMeasurement is also called inside ChooseNumFilters for
           choosing the correct number of filters that satisfies the constraint
           (i.e., the current budget). */
        N_Filt_k, Res_Simp_k = ChooseNumFilters(Net_i, k, Con);
        Net_Simp_k = ChooseWhichFilters(Net_i, k, N_Filt_k);
        Net_Simp_k = ShortTermFineTune(Net_Simp_k);
    Net_{i+1}, Res_{i+1} = PickHighestAccuracy(Net_Simp_{1..K}, Res_Simp_{1..K});
    i = i + 1;
N̂et = LongTermFineTune(Net_i);
return N̂et;

3.2 Algorithm Overview

For simplicity, we assume that we only need to meet the budget of one resource, specifically latency. One method to reduce the latency is to remove filters from the convolutional (CONV) or fully-connected (FC) layers. While there are other ways to reduce latency, we will use this approach to demonstrate NetAdapt.

The NetAdapt algorithm is detailed in pseudo code in Algorithm 1 and in Fig. 2. Each iteration solves Eq. 2 by reducing the number of filters in a single CONV or FC layer (the Choose # of Filters and Choose Which Filters blocks in Fig. 2). The number of filters to remove from a layer is guided by empirical measurements. NetAdapt removes entire filters instead of individual weights because most platforms can take advantage of removing entire filters, and this strategy allows reducing both filters and feature maps, which play an important role in resource consumption [25]. The simplified network is then fine-tuned for a short length of time in order to restore some accuracy (the Short-Term Fine-Tune block).
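To make the control flow of Algorithm 1 concrete, the following is a minimal Python sketch of the outer loop (an illustration, not the authors' implementation). Every callable it receives, including choose_num_filters, choose_which_filters, the fine-tuning steps, measure, and accuracy, is a placeholder for the corresponding block in Fig. 2, and reduction_schedule plays the role of ΔR_i.

def netadapt(net_0, budget, reduction_schedule, num_layers,
             choose_num_filters, choose_which_filters,
             short_term_fine_tune, long_term_fine_tune,
             measure, accuracy):
    """Sketch of Algorithm 1: iteratively simplify one layer at a time."""
    net, res = net_0, measure(net_0)
    i = 0
    while res > budget:
        con = res - reduction_schedule(i)        # current resource constraint
        proposals = []
        for k in range(num_layers):              # one proposal per CONV/FC layer
            n_filt, res_k = choose_num_filters(net, k, con)
            proposal = choose_which_filters(net, k, n_filt)
            proposal = short_term_fine_tune(proposal)
            proposals.append((proposal, res_k))
        # carry the highest-accuracy proposal over to the next iteration
        net, res = max(proposals, key=lambda p: accuracy(p[0]))
        i += 1
    return long_term_fine_tune(net)              # final fine-tuning until convergence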

Fig. 2. This figure visualizes the algorithm flow of NetAdapt. At each iteration, NetAdapt decreases the resource consumption by simplifying (i.e., removing filters from) one layer. In order to maximize accuracy, it tries to simplify each layer individually and picks the simplified network that has the highest accuracy. Once the target budget is met, the chosen network is then fine-tuned again until convergence.

In each iteration, the previous three steps (highlighted in bold) are applied to each of the CONV or FC layers individually³. As a result, NetAdapt generates K (i.e., the number of CONV and FC layers) network proposals in one iteration, each of which has a single layer modified from the previous iteration. The network proposal with the highest accuracy is carried over to the next iteration (the Pick Highest Accuracy block). Finally, once the target budget is met, the chosen network is fine-tuned again until convergence (the Long-Term Fine-Tune block).

³ The algorithm can also be applied to a group of multiple layers as a single unit (instead of a single layer). For example, in ResNet [7], we can treat a residual block as a single unit to speed up the adaptation process.

3.3 Algorithm Details

This section describes the key blocks in the NetAdapt algorithm (Fig. 2).

Choose Number of Filters This step focuses on determining how many filters to preserve in a specific layer based on empirical measurements. NetAdapt gradually reduces the number of filters in the target layer and measures the resource consumption of each of the simplified networks. The maximum number of filters that can satisfy the current resource constraint will be chosen. Note that when some filters are removed from a layer, the associated channels in the following layers should also be removed. Therefore, the change in the resource consumption of other layers needs to be factored in.
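A rough sketch of this step follows (illustrative only; the candidate_counts parameter and the simplify/measure interface are assumptions, not the paper's API). It scans the candidate filter counts for the target layer and keeps the largest one whose network-wise resource consumption, measured or estimated as in Sec. 3.4, satisfies the current constraint.

def choose_num_filters(net, layer_k, constraint, candidate_counts, simplify, measure):
    """Return the largest filter count for layer_k that meets the constraint.

    `simplify(net, layer_k, n)` is assumed to return the network with layer_k
    shrunk to n filters (and the matching input channels of the following layer
    removed); `measure` returns the network-wise resource consumption.
    """
    best_n, best_res = None, None
    for n in sorted(candidate_counts):
        res = measure(simplify(net, layer_k, n))
        if res <= constraint:
            best_n, best_res = n, res     # keep the largest count that still fits
    return best_n, best_res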

Fig. 3. This figure illustrates how layer-wise look-up tables are used for fast resource consumption estimation.

Choose Which Filters This step chooses which filters to preserve based on the architecture from the previous step. There are many methods proposed in the literature, and we choose the magnitude-based method to keep the algorithm simple. In this work, the N filters that have the largest ℓ2-norm magnitude will be kept, where N is the number of filters determined by the previous step. More complex methods can be adopted to increase the accuracy, such as removing the filters based on their joint influence on the feature maps [25].

Short-/Long-Term Fine-Tune Both the short-term fine-tune and long-term fine-tune steps in NetAdapt involve network-wise end-to-end fine-tuning. Short-term fine-tune has fewer iterations than long-term fine-tune.

At each iteration of the algorithm, we fine-tune the simplified networks with a relatively smaller number of iterations (i.e., short-term) to regain accuracy, in parallel or in sequence. This step is especially important while adapting small networks with a large resource reduction, because otherwise the accuracy will drop to zero, which can cause the algorithm to choose the wrong network proposal.

As the algorithm proceeds, the network is continuously trained but does not converge. Once the final adapted network is obtained, we fine-tune the network with more iterations until convergence (i.e., long-term) as the final step.
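The magnitude-based "Choose Which Filters" step above can be sketched in a few lines of NumPy (a hypothetical illustration; the weight layout of (num_filters, in_channels, kh, kw) is an assumption).

import numpy as np

def choose_which_filters(weights, n_keep):
    """Keep the n_keep filters with the largest L2-norm magnitude."""
    flat = weights.reshape(weights.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1)          # one L2 norm per filter
    keep = np.argsort(norms)[-n_keep:]            # indices of the largest filters
    return np.sort(keep)

# Example: keep 5 of the 8 filters of a 3x3 conv layer with 16 input channels.
w = np.random.randn(8, 16, 3, 3)
print(choose_which_filters(w, n_keep=5))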

3.4 Fast Resource Consumption Estimation

As mentioned in Sec. 3.3, NetAdapt uses empirical measurements to determine the number of filters to keep in a layer given the resource constraint. In theory, we can measure the resource consumption of each of the simplified networks on the fly during adaptation. However, taking measurements can be slow and difficult to parallelize due to the limited number of available devices. Therefore, it may be prohibitively expensive and become the computation bottleneck.

Fig. 4. The comparison between the estimated latency (using layer-wise look-up tables) and the real latency on a single large core of a Google Pixel 1 CPU while adapting the 100% MobileNetV1 with the input resolution of 224 [9].

We solve this problem by building layer-wise look-up tables with pre-measured resource consumption of each layer. When executing the algorithm, we look up the table of each layer, and sum up the layer-wise measurements to estimate the network-wise resource consumption, as illustrated in Fig. 3. The reason for not using a network-wise table is that the size of the table would grow exponentially with the number of layers, which makes it intractable for deep networks. Moreover, layers with the same shape and feature map size only need to be measured once, which is common for modern deep networks.

Fig. 4 compares the estimated latency (the sum of layer-wise latencies from the layer-wise look-up tables) and the real latency on a single large core of a Google Pixel 1 CPU while adapting the 100% MobileNetV1 with the input resolution of 224 [9]. The real and estimated latency numbers are highly correlated, and the difference between them is sufficiently small to be used by NetAdapt.
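A minimal sketch of the look-up-table estimation described above (illustrative; the table keys and the toy numbers are assumptions): each layer's table maps an (input channels, number of filters) configuration to a pre-measured latency, and the network latency is estimated as the sum of the per-layer entries.

def estimate_latency(luts, configs):
    """luts[k] maps (in_channels, num_filters) of layer k to a measured latency (ms)."""
    return sum(luts[k][cfg] for k, cfg in enumerate(configs))

# Toy two-layer example (numbers are made up for illustration).
luts = [
    {(3, 8): 1.0, (3, 16): 1.8},              # layer 1
    {(8, 32): 2.0, (16, 32): 3.5},            # layer 2
]
# Removing filters from layer 1 also changes layer 2's input channels,
# which is why both layers' entries change between the two configurations.
print(estimate_latency(luts, [(3, 16), (16, 32)]))   # 1.8 + 3.5 = 5.3 ms
print(estimate_latency(luts, [(3, 8), (8, 32)]))     # 1.0 + 2.0 = 3.0 ms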

4 Experiment Results

In this section, we apply the proposed NetAdapt algorithm to MobileNets [9, 18], which are designed for mobile applications, and experiment on the ImageNet dataset [4]. We did not apply NetAdapt to larger networks like ResNet [7] and VGG [19] because networks become more difficult to simplify as they become smaller; these networks are also seldom deployed on mobile platforms. We benchmark NetAdapt against three state-of-the-art network simplification methods:

– Multipliers [9] are simple but effective methods for simplifying networks. Two commonly used multipliers are the width multiplier and the resolution multiplier; they can also be used together. The width multiplier scales the number of filters by a percentage across all convolutional (CONV) and fully-connected (FC) layers, and the resolution multiplier scales the resolution of the input image. We use the notation "50% MobileNetV1 (128)" to denote applying a width multiplier of 50% on MobileNetV1 with an input image resolution of 128.
– MorphNet [5] is an automatic network simplification algorithm based on sparsifying regularization.
– ADC [8] is an automatic network simplification algorithm based on reinforcement learning.

We will show the performance of NetAdapt on the small MobileNetV1 (50% MobileNetV1 (128)) to demonstrate the effectiveness of NetAdapt on real-time-class networks, which are much more difficult to simplify than larger networks. To show the generality of NetAdapt, we will also measure its performance on the large MobileNetV1 (100% MobileNetV1 (224)) across different platforms. Lastly, we adapt the large MobileNetV2 (100% MobileNetV2 (224)) to push the frontier of efficient networks.

4.1 Detailed Settings for MobileNetV1 Experiments

We perform most of the experiments and studies on MobileNetV1 and detail the settings in this section.

NetAdapt Configuration MobileNetV1 [9] is based on depthwise separable convolutions, which factorize an m x m standard convolution layer into an m x m depthwise layer and a 1 x 1 standard convolution layer called a pointwise layer. In the experiments, we adapt each depthwise layer with the corresponding pointwise layer and choose the filters to keep based on the pointwise layer. When adapting the small MobileNetV1 (50% MobileNetV1 (128)), the latency reduction (ΔR_{i,j} in Eq. 2) starts at 0.5 and decays at the rate of 0.96 per iteration. When adapting other networks, we use the same decay rate but scale the initial latency reduction proportionally to the latency of the initial pretrained network.

Network Training We preserve ten thousand images from the training set, ten images per class, as the holdout set. The new training set without the holdout images is used to perform short-term fine-tuning, and the holdout set is used to pick the highest-accuracy network out of the simplified networks at each iteration. The whole training set is used for the long-term fine-tuning, which is performed once in the last step of NetAdapt.

Because the training configuration can have a large impact on the accuracy, we apply the same training configuration to all the networks unless otherwise stated to have a fairer comparison. We adopt the same training configuration as MorphNet [5] (except that the batch size is 128 instead of 96). The learning rate for the long-term fine-tuning is 0.045 and that for the short-term fine-tuning is 0.0045. This configuration improves the ADC network's top-1 accuracy by 0.3% and almost all multiplier networks' top-1 accuracy by up to 3.8%, except for one data point, whose accuracy is reduced by 0.2%. We use these numbers in the following analysis. Moreover, all accuracy numbers are reported on the validation set to show the true performance.

Mobile Inference and Latency Measurement We use Google's TensorFlow Lite engine [22] for inference on a mobile CPU and Qualcomm's Snapdragon Neural Processing Engine (SNPE) for inference on a mobile GPU. For experiments on mobile CPUs, the latency is measured on a single large core of a Google Pixel 1 phone. For experiments on mobile GPUs, the latency is measured on the mobile GPU of a Samsung Galaxy S8 with SNPE's benchmarking tool. For each latency number, we report the median of 11 latency measurements.
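The measurement protocol above (the median of 11 runs) can be sketched as follows. run_inference is a placeholder for a single inference call through whichever engine is used (e.g., TensorFlow Lite or SNPE), and the warm-up runs are an added assumption rather than something specified in the paper.

import statistics
import time

def median_latency_ms(run_inference, num_runs=11, warmup_runs=3):
    """Report the median latency of `num_runs` timed inference calls."""
    for _ in range(warmup_runs):          # warm up caches/clocks (assumption)
        run_inference()
    samples_ms = []
    for _ in range(num_runs):
        start = time.monotonic()
        run_inference()
        samples_ms.append((time.monotonic() - start) * 1000.0)
    return statistics.median(samples_ms)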

Fig. 5. The figure compares NetAdapt (adapting the small MobileNetV1) with the multipliers [9] and MorphNet [5] on a mobile CPU of Google Pixel 1.

4.2 Comparison with Benchmark Algorithms

Adapting Small MobileNetV1 on a Mobile CPU In this experiment, we apply NetAdapt to adapt the small MobileNetV1 (50% MobileNetV1 (128)) to a mobile CPU. It is one of the most compact networks and achieves real-time performance. It is more challenging to simplify than other larger networks (including the large MobileNetV1). The results are summarized and compared with the multipliers [9] and MorphNet [5] in Fig. 5. We observe that NetAdapt outperforms the multipliers by being up to 1.7x faster with the same or higher accuracy. Compared with MorphNet, NetAdapt's result is 1.6x faster with 0.3% higher accuracy.

Adapting Large MobileNetV1 on a Mobile CPU In this experiment, we apply NetAdapt to adapt the large MobileNetV1 (100% MobileNetV1 (224)) on a mobile CPU. It is the largest MobileNetV1 and achieves the highest accuracy. Because its latency is approximately 8x higher than that of the small MobileNetV1, we scale the initial latency reduction by 8x. The results are shown and compared with the multipliers [9] and ADC [8] in Fig. 6. NetAdapt achieves higher accuracy than the multipliers and ADC while increasing the speed by 1.4x and 1.2x, respectively.

While the training configuration is kept the same when comparing to the benchmark algorithms discussed above, we also show in Fig. 6 that the accuracy of the networks adapted using NetAdapt can be further improved with a better training configuration. After simply adding dropout and label smoothing, the accuracy can be increased by 1.3%. Further tuning the training configuration for each adapted network can give higher accuracy numbers, but it is not the focus of this paper.

Fig. 6. The figure compares NetAdapt (adapting the large MobileNetV1) with the multipliers [9] and ADC [8] on a mobile CPU of Google Pixel 1. Moreover, the accuracy of the adapted networks can be further increased by up to 1.3% through using a better training configuration (simply adding dropout and label smoothing).

Fig. 7. This figure compares NetAdapt (adapting the large MobileNetV1) with the multipliers [9] and ADC [8] on a mobile GPU of Samsung Galaxy S8. Moreover, the accuracy of the adapted networks can be further increased by up to 1.3% through using a better training configuration (simply adding dropout and label smoothing).

Adapting Large MobileNetV1 on a Mobile GPU In this experiment, we apply NetAdapt to adapt the large MobileNetV1 on a mobile GPU to show the generality of NetAdapt. Fig. 7 shows that NetAdapt outperforms the other benchmark algorithms by up to a 1.2x speed-up with higher accuracy. Due to the limitation of the SNPE tool, the layer-wise latency breakdown only considers the computation time and does not include the latency of other operations, such as feature map movement, which can be expensive [25]. This affects the precision of the look-up tables used for this experiment. Moreover, we observe that there is an approximately 6.2 ms (38% of the latency of the network before applying NetAdapt) non-reducible latency. These factors cause a smaller improvement on the mobile GPU compared with the experiments on the mobile CPU. Moreover, when the better training configuration is applied as previously described, the accuracy can be further increased by 1.3%.

Network                   | Top-1 Accuracy (%) | # of MACs (x10^6) | Latency (ms)
25% MobileNetV1 (128) [9] | 45.1 (+0)          | 13.6 (100%)       |
MorphNet [5]              | 46.0 (+0.9)        | 15.0 (110%)       |
NetAdapt                  | 46.3 (+1.2)        | 11.0 (81%)        |
75% MobileNetV1 (224) [9] | 68.8 (+0)          | 325.4 (100%)      |
ADC [8]                   | 69.1 (+0.3)        | 304.2 (93%)       |
NetAdapt                  | 69.1 (+0.3)        | 284.3 (87%)       | 74.9 (108%)

Table 1. The comparison between NetAdapt (adapting the small or large MobileNetV1) and the three benchmark algorithms on image classification when targeting the number of MACs. The latency numbers are measured on a mobile CPU of Google Pixel 1. We roughly match their accuracy and compare their latency.

Fig. 8. The accuracy of different short-term fine-tuning iterations when adapting the small MobileNetV1 (without long-term fine-tuning) on a mobile CPU of Google Pixel 1. Zero iterations means no short-term fine-tuning.

Fig. 9. The comparison between before and after long-term fine-tuning when adapting the small MobileNetV1 on a mobile CPU of Google Pixel 1. Although the short-term fine-tuning preserves the accuracy well, the long-term fine-tuning gives an extra 3.4% on average (from 1.8% to 4.5%).

4.3 Ablation Studies

Impact of Direct Metrics In this experiment, we use the indirect metric (i.e., the number of MACs) instead of the direct metric (i.e., the latency) to guide NetAdapt, in order to investigate the importance of using direct metrics. When computing the number of MACs, we only consider the CONV and FC layers because batch normalization layers can be folded into the corresponding CONV layers, and the other layers are negligibly small. Table 1 shows that NetAdapt outperforms the benchmark algorithms with lower numbers of MACs and higher accuracy. This demonstrates the effectiveness of NetAdapt. However, we also observe that the network with lower numbers of MACs may not necessarily be faster. This shows the necessity of incorporating direct measurements into the optimization flow.
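For reference, a small sketch of the MAC-counting convention used in this ablation (only CONV and FC layers are counted; the layer description format and the example numbers are assumptions made for illustration):

def count_macs(layers):
    """Count MACs of CONV and FC layers only; batch-norm and other layers are
    ignored, since batch-norm can be folded into the preceding CONV layer.
    Note: depthwise conv layers would divide by the group count; omitted here."""
    total = 0
    for layer in layers:
        if layer["type"] == "conv":
            h, w = layer["out_h"], layer["out_w"]
            total += h * w * layer["in_ch"] * layer["out_ch"] * layer["k"] ** 2
        elif layer["type"] == "fc":
            total += layer["in_ch"] * layer["out_ch"]
    return total

# Example: a 3x3 conv producing a 14x14x256 output from 256 input channels,
# followed by a 1024 -> 1000 classifier.
layers = [
    {"type": "conv", "out_h": 14, "out_w": 14, "in_ch": 256, "out_ch": 256, "k": 3},
    {"type": "fc", "in_ch": 1024, "out_ch": 1000},
]
print(count_macs(layers))   # 115,605,504 + 1,024,000 MACs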

Table 2. The influence of resource reduction schedules (columns: Initialization (ms), Decay Rate, # of Total Iterations, Top-1 Accuracy (%), Latency (ms)).

Fig. 10. NetAdapt and the multipliers generate different simplified networks when adapting the small MobileNetV1 to match the latency of 25% MobileNetV1 (128).

Impact of Short-Term Fine-Tuning Fig. 8 shows the accuracy of adapting the small MobileNetV1 with different numbers of short-term fine-tuning iterations (without long-term fine-tuning). The accuracy rapidly drops to nearly zero if no short-term fine-tuning is performed (i.e., zero iterations). In this low-accuracy region, the algorithm picks the best network proposal solely based on noise and hence gives poor performance. After fine-tuning a network for a short amount of time (ten thousand iterations), the accuracy is always kept above 20%, which allows the algorithm to make better decisions.
