{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Increase the speed and reduce the memory consumption by pruning layers of models.\n", "\n", "In this tutorial, we will show you how you can prune and train a model using AutoNLU on a custom dataset. More precisely, we at first prune 50% of the layers of a model and train it afterwards to to predict reviews of the Google Play store similar to tutorial 02.\n", "\n", "Note: We recommend using a machine with an Nvidia GPU for this tutorial." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "!pip install pandas -q" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import autonlu\n", "from autonlu import Model\n", "import pandas as pd\n", "import numpy as np\n", "import gdown\n", "from autonlu.utils import split_dataset" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "autonlu.login()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download and prepare dataset\n", "At first, we automatically download and prepare the google play app reviews dataset. Note that this installs gdown in your pip environment." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Downloading...\n", "From: https://drive.google.com/uc?id=1S6qMioqPJjyBLpLVz4gmRTnJHnjitnuV\n", "To: /home/david/Dev/deepopinion/autonlu/tutorials/.cache/data/googleplay/apps.csv\n", "100%|██████████| 134k/134k [00:00<00:00, 1.99MB/s]\n", "Downloading...\n", "From: https://drive.google.com/uc?id=1zdmewp7ayS4js4VtrJEHzAheSW-5NBZv\n", "To: /home/david/Dev/deepopinion/autonlu/tutorials/.cache/data/googleplay/reviews.csv\n", "7.17MB [00:00, 8.34MB/s]\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ " content score\n", "0 Update: After getting a response from the deve... 1\n", "1 Used it for a fair amount of time without any ... 1\n", "2 Your app sucks now!!!!! Used to be good but no... 1\n", "3 It seems OK, but very basic. Recurring tasks n... 1\n", "4 Absolutely worthless. This app runs a prohibit... 1" ], "text/html": "
<div>\n<table border=\"1\" class=\"dataframe\">\n  <thead>\n    <tr style=\"text-align: right;\">\n      <th></th>\n      <th>content</th>\n      <th>score</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>Update: After getting a response from the deve...</td>\n      <td>1</td>\n    </tr>\n    <tr>\n      <th>1</th>\n      <td>Used it for a fair amount of time without any ...</td>\n      <td>1</td>\n    </tr>\n    <tr>\n      <th>2</th>\n      <td>Your app sucks now!!!!! Used to be good but no...</td>\n      <td>1</td>\n    </tr>\n    <tr>\n      <th>3</th>\n      <td>It seems OK, but very basic. Recurring tasks n...</td>\n      <td>1</td>\n    </tr>\n    <tr>\n      <th>4</th>\n      <td>Absolutely worthless. This app runs a prohibit...</td>\n      <td>1</td>\n    </tr>\n  </tbody>\n</table>\n</div>
" }, "metadata": {}, "execution_count": 4 } ], "source": [ "gdown.download(\"https://drive.google.com/uc?id=1S6qMioqPJjyBLpLVz4gmRTnJHnjitnuV\", \".cache/data/googleplay/\")\n", "gdown.download(\"https://drive.google.com/uc?id=1zdmewp7ayS4js4VtrJEHzAheSW-5NBZv\", \".cache/data/googleplay/\")\n", "\n", "df = pd.read_csv(\".cache/data/googleplay/reviews.csv\")\n", "df.head()[[\"content\", \"score\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great, we now downloaded the GooglePlay reviews dataset and displayed the first entries. For this tutorial, we are interested in predicting whether a review was positive or negative (score) based on the content. So lets at first convert the dataset into classes. More precisely, let's convert our 5-star review into negative, neutral, and positive." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "The dataset contains 15746 samples\npositive: Really amazing and helped me sooo much just i hope that it can be sharable by more than one person f...\nnegative: A subscription for an App that simple? Are you insane? It should be a one time payment no more, arou ...\n" ] } ], "source": [ "def to_label(score):\n", " return \"negative\" if score <= 2 else \\\n", " \"neutral\" if score == 3 else \"positive\"\n", "\n", "X = [x for x in df.content]\n", "Y = [to_label(score) for score in df.score]\n", "\n", "assert len(X) == len(Y)\n", "print(f\"The dataset contains {len(X)} samples\")\n", "X, Y, valX, valY = split_dataset(X, Y, split_at=0.1)\n", "\n", "print(f\"{Y[-1]}: {X[-1][0:100]}...\")\n", "print(f\"{valY[0]}: {valX[0][0:100]} ...\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training and babysitting of the model\n", "Before we start to train our network, we plot the training progress within TensorBoard which is supported out-of-the-box in our AutoNLU engine. Unfortunately, the output of TensorBoard is not preserved with the static versions of the notebook, so you will have to execute it yourself to see the visualization. The train/validation split, hyperparameter selection etc. is done internally. 
Because of this, the training, including visualization, can easily be started with the following four lines of code:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Model bert-base-cased loaded from Huggingface successfully.\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "Running authorization for token for functionality Analysis/Aspect-Sentiments and language None\n", "{0: 0.8285714285714286}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619, 8: 0.8431746031746031}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619, 8: 0.8431746031746031, 9: 0.8317460317460318}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619, 8: 0.8431746031746031, 9: 0.8317460317460318, 10: 0.8546031746031746}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8285714285714286, 1: 0.8361904761904762, 2: 0.84, 3: 0.8393650793650793, 4: 0.8380952380952381, 5: 0.8342857142857143, 6: 0.8361904761904762, 7: 0.8419047619047619, 8: 0.8431746031746031, 9: 0.8317460317460318, 10: 0.8546031746031746, 11: 0.8450793650793651}\n", "Exception catched by logging_function: \n", "Traceback (most recent call last):\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/classifier.py\", line 152, in __init__\n", " self.model, self.device, self.fp16 = self.ch.on_model_load(\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 25, in wrapper_handlecallback\n", " func(self, *args, **kwargs)\n", " File 
\"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 136, in on_model_load\n", " self(\"on_model_load\")\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 119, in __call__\n", " raise e\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 96, in __call__\n", " new = getattr(callback, funcname)(**self.state)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 113, in wrapper_debug\n", " raise error\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/modules/pruner.py\", line 66, in on_model_load\n", " raise PruningRangeException(f\"Given layers to prune do not exist. LayerId \\\\in [0, {num_hidden_layers-1}]\")\n", "autonlu.core.exceptions.PruningRangeException: Given layers to prune do not exist. LayerId \\in [0, 11]\n", "PRUNE 10\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762, 7: 0.8507936507936508}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762, 7: 0.8507936507936508, 8: 0.8603174603174604}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762, 7: 0.8507936507936508, 8: 0.8603174603174604, 9: 0.8457142857142858}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8241269841269842, 1: 0.8425396825396826, 2: 0.8387301587301588, 3: 0.8438095238095238, 4: 0.8457142857142858, 5: 0.8412698412698413, 6: 0.8361904761904762, 7: 0.8507936507936508, 8: 0.8603174603174604, 9: 0.8457142857142858, 11: 0.8431746031746031}\n", "Exception catched by logging_function: \n", "Traceback (most recent call last):\n", " File 
\"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/classifier.py\", line 152, in __init__\n", " self.model, self.device, self.fp16 = self.ch.on_model_load(\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 25, in wrapper_handlecallback\n", " func(self, *args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 136, in on_model_load\n", " self(\"on_model_load\")\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 119, in __call__\n", " raise e\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 96, in __call__\n", " new = getattr(callback, funcname)(**self.state)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 113, in wrapper_debug\n", " raise error\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/modules/pruner.py\", line 66, in on_model_load\n", " raise PruningRangeException(f\"Given layers to prune do not exist. LayerId \\\\in [0, {num_hidden_layers-1}]\")\n", "autonlu.core.exceptions.PruningRangeException: Given layers to prune do not exist. LayerId \\in [0, 11]\n", "PRUNE 8\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968, 6: 0.8266666666666667}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968, 6: 0.8266666666666667, 7: 0.8406349206349206}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968, 6: 0.8266666666666667, 7: 0.8406349206349206, 9: 0.8361904761904762}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.8323809523809523, 2: 0.834920634920635, 3: 0.8355555555555556, 4: 0.8406349206349206, 5: 0.8368253968253968, 6: 0.8266666666666667, 7: 0.8406349206349206, 9: 0.8361904761904762, 11: 0.8444444444444444}\n", "Exception catched by 
logging_function: \n", "Traceback (most recent call last):\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/classifier.py\", line 152, in __init__\n", " self.model, self.device, self.fp16 = self.ch.on_model_load(\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 25, in wrapper_handlecallback\n", " func(self, *args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 136, in on_model_load\n", " self(\"on_model_load\")\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 119, in __call__\n", " raise e\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 96, in __call__\n", " new = getattr(callback, funcname)(**self.state)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 113, in wrapper_debug\n", " raise error\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/modules/pruner.py\", line 66, in on_model_load\n", " raise PruningRangeException(f\"Given layers to prune do not exist. LayerId \\\\in [0, {num_hidden_layers-1}]\")\n", "autonlu.core.exceptions.PruningRangeException: Given layers to prune do not exist. LayerId \\in [0, 11]\n", "PRUNE 11\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286, 5: 0.8304761904761905}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286, 5: 0.8304761904761905, 6: 0.8323809523809523}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286, 5: 0.8304761904761905, 6: 0.8323809523809523, 7: 0.8368253968253968}\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "{0: 0.8304761904761905, 1: 0.846984126984127, 2: 0.8438095238095238, 3: 0.8317460317460318, 4: 0.8285714285714286, 5: 0.8304761904761905, 6: 0.8323809523809523, 7: 0.8368253968253968, 9: 0.8419047619047619}\n", "Exception catched by logging_function: \n", "Traceback (most recent call last):\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File 
\"/home/david/Dev/deepopinion/autonlu/autonlu/core/classifier.py\", line 152, in __init__\n", " self.model, self.device, self.fp16 = self.ch.on_model_load(\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 25, in wrapper_handlecallback\n", " func(self, *args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 136, in on_model_load\n", " self(\"on_model_load\")\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 119, in __call__\n", " raise e\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/callbackhandler.py\", line 96, in __call__\n", " new = getattr(callback, funcname)(**self.state)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 113, in wrapper_debug\n", " raise error\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/logging_tools.py\", line 100, in wrapper_debug\n", " value = func(*args, **kwargs)\n", " File \"/home/david/Dev/deepopinion/autonlu/autonlu/core/modules/pruner.py\", line 66, in on_model_load\n", " raise PruningRangeException(f\"Given layers to prune do not exist. LayerId \\\\in [0, {num_hidden_layers-1}]\")\n", "autonlu.core.exceptions.PruningRangeException: Given layers to prune do not exist. LayerId \\in [0, 11]\n", "PRUNE 1\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00035\n", "LearningRateReporterAndChanger: Changing learning rate to 0.00010499999999999999\n", "LearningRateReporterAndChanger: Changing learning rate to 3.149999999999999e-05\n" ] } ], "source": [ "model = Model(\"bert-base-cased\")\n", "model.auto_prune(X=X, Y=Y, valX=valX, valY=valY, num_layers_to_prune=4)\n", "model.train(X, Y, valX=valX, valY=valY)\n", "model.save(\"pruned_model\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict new data with your trained model\n", "That's it, we trained our model without any manual hyperparameter tuning and still got an accuracy of 86.46% on this dataset. Now we can use this model to predict new sentences:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Running authorization for token for functionality Analysis/Aspect-Sentiments and language None\n", "Model model/pruned_model loaded from local path successfully.\n", "['positive', 'neutral', 'negative']\n" ] } ], "source": [ "prediction_model = Model(\"pruned_model\")\n", "ret = prediction_model.predict([\n", " \"I really love this app.\", \n", " \"The app is ok but neets improvements.\", \n", " \"Its crashing all the time I can't use it.\"])\n", "print(ret)" ] }, { "source": [ "Ok now lets evaluate the accuracy and compare it against our unpruned version where we achieved 86.46%" ], "cell_type": "markdown", "metadata": {} }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Accuracy: 0.8793650793650793\n" ] } ], "source": [ "result = prediction_model.evaluate(valX, valY)\n", "print(f\"Accuracy: {result['accuracy']}\")" ] }, { "source": [ "So we now pruned 33% i.e. 
decreased the memory consumption and increased the speed of the BERT model, while still achieving a very high accuracy of roughly 88%, with only one additional line of code compared to tutorial 02 :)" ], "cell_type": "markdown", "metadata": {} } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8-final" } }, "nbformat": 4, "nbformat_minor": 2 }