Increase speed and reduce memory consumption by pruning layers of a model.¶
In this tutorial, we show how you can prune and train a model using AutoNLU on a custom dataset. More precisely, we first prune a third of the layers of the model (4 of the 12 BERT layers) and then train it to predict reviews from the Google Play store, similar to tutorial 02.
Note: We recommend using a machine with an Nvidia GPU for this tutorial.
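You can quickly check whether a GPU is visible from the notebook. This is a minimal sketch assuming PyTorch is installed in your environment (AutoNLU's backbone models run on top of it):

import torch

# True if an Nvidia GPU with a working CUDA setup is available to this process
print("CUDA available:", torch.cuda.is_available())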
[1]:
!pip install pandas gdown -q
[2]:
import autonlu
from autonlu import Model
import pandas as pd
import numpy as np
import gdown
from autonlu.utils import split_dataset
[3]:
autonlu.login()
Download and prepare dataset¶
First, we automatically download and prepare the Google Play app reviews dataset using gdown (installed into your pip environment above).
[4]:
gdown.download("https://drive.google.com/uc?id=1S6qMioqPJjyBLpLVz4gmRTnJHnjitnuV", ".cache/data/googleplay/")
gdown.download("https://drive.google.com/uc?id=1zdmewp7ayS4js4VtrJEHzAheSW-5NBZv", ".cache/data/googleplay/")
df = pd.read_csv(".cache/data/googleplay/reviews.csv")
df.head()[["content", "score"]]
Downloading...
From: https://drive.google.com/uc?id=1S6qMioqPJjyBLpLVz4gmRTnJHnjitnuV
To: /home/david/Dev/deepopinion/autonlu/tutorials/.cache/data/googleplay/apps.csv
100%|██████████| 134k/134k [00:00<00:00, 1.99MB/s]
Downloading...
From: https://drive.google.com/uc?id=1zdmewp7ayS4js4VtrJEHzAheSW-5NBZv
To: /home/david/Dev/deepopinion/autonlu/tutorials/.cache/data/googleplay/reviews.csv
7.17MB [00:00, 8.34MB/s]
[4]:
|   | content | score |
|---|---------|-------|
| 0 | Update: After getting a response from the deve... | 1 |
| 1 | Used it for a fair amount of time without any ... | 1 |
| 2 | Your app sucks now!!!!! Used to be good but no... | 1 |
| 3 | It seems OK, but very basic. Recurring tasks n... | 1 |
| 4 | Absolutely worthless. This app runs a prohibit... | 1 |
Great, we have now downloaded the Google Play reviews dataset and displayed the first entries. For this tutorial, we are interested in predicting whether a review is positive or negative (score) based on its content. So let's first convert the dataset into classes. More precisely, let's map the 1-5 star ratings to negative, neutral, and positive.
[5]:
def to_label(score):
    # Map 1-2 stars to negative, 3 stars to neutral, 4-5 stars to positive
    return "negative" if score <= 2 else \
           "neutral" if score == 3 else "positive"
X = [x for x in df.content]
Y = [to_label(score) for score in df.score]
assert len(X) == len(Y)
print(f"The dataset contains {len(X)} samples")
# Split off a validation set (split_at=0.1)
X, Y, valX, valY = split_dataset(X, Y, split_at=0.1)
print(f"{Y[-1]}: {X[-1][0:100]}...")
print(f"{valY[0]}: {valX[0][0:100]} ...")
The dataset contains 15746 samples
positive: Really amazing and helped me sooo much just i hope that it can be sharable by more than one person f...
negative: A subscription for an App that simple? Are you insane? It should be a one time payment no more, arou ...
Training and babysitting of the model¶
Before we start training our network, we plot the training progress in TensorBoard, which is supported out-of-the-box by the AutoNLU engine. Unfortunately, the output of TensorBoard is not preserved in the static version of the notebook, so you will have to execute it yourself to see the visualization. The train/validation split, hyperparameter selection etc. are handled internally. Because of this, the training, including visualization, can easily be started with the following four lines of code:
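If you run the notebook yourself, you can start TensorBoard inline before training. A minimal sketch, assuming the tensorboard Jupyter extension is installed and that the event files land in ./runs; the log directory AutoNLU actually writes to may differ on your installation:

%load_ext tensorboard
%tensorboard --logdir runs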
[7]:
model = Model("bert-base-cased")
# Iteratively remove 4 of the 12 encoder layers; each round prunes the layer
# whose removal hurts validation accuracy the least (see the log below)
model.auto_prune(X=X, Y=Y, valX=valX, valY=valY, num_layers_to_prune=4)
model.train(X, Y, valX=valX, valY=valY)
model.save("pruned_model")
Model bert-base-cased loaded from Huggingface successfully.
LearningRateReporterAndChanger: Changing learning rate to 0.00035
Running authorization for token for functionality Analysis/Aspect-Sentiments and language None
{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619, 8: 0.8431746031746031, 9: 0.8317460317460318, 10: 0.8546031746031746, 11: 0.8450793650793651}
Exception catched by logging_function:
Traceback (most recent call last):
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py", line 100, in wrapper_debug
value = func(*args, **kwargs)
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/classifier.py", line 152, in __init__
self.model, self.device, self.fp16 = self.ch.on_model_load(
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py", line 25, in wrapper_handlecallback
func(self, *args, **kwargs)
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py", line 136, in on_model_load
self("on_model_load")
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py", line 119, in __call__
raise e
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py", line 96, in __call__
new = getattr(callback, funcname)(**self.state)
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py", line 113, in wrapper_debug
raise error
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py", line 100, in wrapper_debug
value = func(*args, **kwargs)
File "/home/david/Dev/deepopinion/autonlu/autonlu/core/modules/pruner.py", line 66, in on_model_load
raise PruningRangeException(f"Given layers to prune do not exist. LayerId \\in [0, {num_hidden_layers-1}]")
autonlu.core.exceptions.PruningRangeException: Given layers to prune do not exist. LayerId \in [0, 11]
PRUNE 10
LearningRateReporterAndChanger: Changing learning rate to 0.00035
{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762, 7: 0.8507936507936508, 8: 0.8603174603174604, 9: 0.8457142857142858, 11: 0.8431746031746031}
PRUNE 8
LearningRateReporterAndChanger: Changing learning rate to 0.00035
{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968, 6: 0.8266666666666667, 7: 0.8406349206349206, 9: 0.8361904761904762, 11: 0.8444444444444444}
PRUNE 11
LearningRateReporterAndChanger: Changing learning rate to 0.00035
{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286, 5: 0.8304761904761905, 6: 0.8323809523809523, 7: 0.8368253968253968, 9: 0.8419047619047619}
PRUNE 1
LearningRateReporterAndChanger: Changing learning rate to 0.00035
LearningRateReporterAndChanger: Changing learning rate to 0.00010499999999999999
LearningRateReporterAndChanger: Changing learning rate to 3.149999999999999e-05
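A short note on how to read the log above: each printed dictionary maps a candidate layer id to the validation accuracy the model reaches when that layer is removed, and each PRUNE line shows the layer auto_prune then actually drops (here 10, 8, 11, and 1) before repeating the search on the smaller model. As an illustration only (this is not AutoNLU's actual implementation), the greedy search can be sketched like this, with evaluate_without_layers as a hypothetical callback that scores the model after removing the given layers:

def greedy_prune(layer_ids, num_layers_to_prune, evaluate_without_layers):
    # Greedy layer search, mirroring the log output above: each round
    # scores every remaining layer and drops the one whose removal
    # yields the highest validation accuracy.
    pruned, remaining = [], list(layer_ids)
    for _ in range(num_layers_to_prune):
        scores = {i: evaluate_without_layers(pruned + [i]) for i in remaining}
        print(scores)          # e.g. {0: 0.8285..., ..., 10: 0.8546...}
        best = max(scores, key=scores.get)
        print("PRUNE", best)
        pruned.append(best)
        remaining.remove(best)
    return pruned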
Predict new data with your trained model¶
That’s it, we pruned and trained our model without any manual hyperparameter tuning. Now we can use this model to predict new sentences:
[8]:
prediction_model = Model("pruned_model")
ret = prediction_model.predict([
"I really love this app.",
"The app is ok but needs improvements.",
"Its crashing all the time I can't use it."])
print(ret)
Running authorization for token for functionality Analysis/Aspect-Sentiments and language None
Model model/pruned_model loaded from local path successfully.
['positive', 'neutral', 'negative']
Ok, now let’s evaluate the accuracy and compare it against our unpruned version from tutorial 02, where we achieved 86.46%:
[11]:
result = prediction_model.evaluate(valX, valY)
print(f"Accuracy: {result['accuracy']}")
Accuracy: 0.8793650793650793
So we have now pruned 33% of the layers, i.e. decreased the memory consumption and increased the speed of the BERT model, and we still achieve a very high accuracy of 88%, with only one additional line of code compared to tutorial 02 :)
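If you want to quantify the speedup on your own machine, a rough wall-clock comparison like the one below can help. It is only a sketch: baseline_model is a hypothetical name for the unpruned model saved in tutorial 02, and the measured ratio depends on your device and batch size:

import time

def time_predict(model, samples, repeats=3):
    # Crude wall-clock timing of Model.predict (one warm-up run first)
    model.predict(samples)
    start = time.perf_counter()
    for _ in range(repeats):
        model.predict(samples)
    return (time.perf_counter() - start) / repeats

pruned_time = time_predict(prediction_model, valX[:256])
print(f"Pruned model: {pruned_time:.2f}s for 256 reviews")
# baseline = Model("baseline_model")  # hypothetical: the unpruned model from tutorial 02
# print(f"Speedup: {time_predict(baseline, valX[:256]) / pruned_time:.2f}x")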