
Perform an Automatic Hyperparameter Search

In this tutorial, we use AutoNLU to detect all categories and the corresponding opinions in laptop reviews, similar to tutorial 04. This time, however, we show how AutoNLU can automatically search for the best hyperparameters (automatic HPO) to produce the highest-performing model.

[1]:
%load_ext tensorboard

!pip install xmltodict -q
[2]:
import autonlu
from autonlu import AutoMl
from autonlu.automl import Categorical, FloatRange, IntRange
import pandas as pd
import numpy as np

import requests
import xmltodict
[3]:
autonlu.login()
User name/Email: admin
Password: ········

Download data and prepare a dataset

We start by downloading the SemEval laptop dataset and converting the XML file into the lists AutoNLU expects.

[4]:
def download_data():
    url = "https://raw.githubusercontent.com/davidsbatista/Aspect-Based-Sentiment-Analysis/master/datasets/ABSA-SemEval2015/ABSA-15_Laptops_Train_Data.xml"
    response = requests.get(url)
    data = xmltodict.parse(response.content)
    return data
[5]:
def parse_data(data):
    X, Y = [], []

    for review in data["Reviews"]["Review"]:
        sentences = review["sentences"]["sentence"]
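        # xmltodict returns a single dict (instead of a list) for reviews with only one sentence; normalize to a list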
        sentences = [sentences] if "text" in sentences else sentences

        for entry in sentences:
            text = entry["text"]

            aspects = []
            if "Opinions" in entry:
                aspect_list = entry["Opinions"]["Opinion"]
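                # likewise, a single opinion is parsed as a dict rather than a list; normalize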
                aspect_list = [aspect_list] if "@category" in aspect_list else aspect_list
                for aspect in aspect_list:
                    category = aspect["@category"]
                    sentiment = aspect["@polarity"]
                    aspects.append([category, sentiment])

            X.append(text)
            Y.append(aspects)
    return X, Y
[9]:
data = download_data()
X, Y = parse_data(data)

print(X[5])
print(Y[5])
This computer is really fast and I'm shocked as to how easy it is to get used to...
[['LAPTOP#OPERATION_PERFORMANCE', 'positive'], ['LAPTOP#USABILITY', 'positive']]

Searching for the best model

Now we want to find the best model over a certain set of options.

We specify that the model to be used should be one of roberta-base, bert-base-uncased, or albert-base-v2 in their OMI variants. In addition, we specify that we want to search for a good learning rate in the range 1e-6 to 1e-3, sampled on a logarithmic scale.
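
Besides Categorical and FloatRange, the imported IntRange class can be used for integer-valued search dimensions. A minimal sketch, assuming IntRange mirrors FloatRange's low/high signature; batch_size here is a hypothetical training argument used purely for illustration, not something this tutorial configures:

IntRange("batch_size", low=8, high=64)  # would sample integer values between 8 and 64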

We use model_arguments to set the standard_label to "NONE" (an option we would usually specify in the constructor of the Model class).

To speed up the search for this tutorial, we will see how far we can get with only 1000 optimization steps. We therefore deactivate early stopping and set nb_opti_steps to 1000 (both options that we would usually pass as arguments to the .train method of Model).

Then we load the dataset and start a hyperparameter optimization that runs for 10 minutes.

[12]:
automl = AutoMl("tutorial_09",
               hyperparameters=[
                   Categorical("model_folder", choices=["roberta-base#omi", "bert-base-uncased#omi", "albert-base-v2#omi"]),
                   FloatRange("learning_rate", low=1e-6, high=1e-3, log=True)
               ],
               model_arguments={"standard_label": "NONE"},
               train_arguments={
                   "do_early_stopping": False,
                   "nb_opti_steps": 1000
               })
automl.load_dataset(X, Y)
model = automl.create(timeout=10*60, verbose=True)
[I 2022-02-14 14:08:46,424] A new study created in memory with name: tutorial_09
/home/paethon/git/py39env/lib/python3.9/site-packages/optuna/progress_bar.py:47: ExperimentalWarning: Progress bar is experimental (supported from v1.2.0). The interface can change in the future.
  self._init_valid()
/home/paethon/git/py39env/lib/python3.9/site-packages/sklearn/metrics/_classification.py:1248: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
[I 2022-02-14 14:10:20,007] Trial 0 finished with value: 0.9897828863346104 and parameters: {'model_folder': 'bert-base-uncased#omi', 'learning_rate': 6.251373574521755e-05}. Best is trial 0 with value: 0.9897828863346104.
[I 2022-02-14 14:11:56,406] Trial 1 finished with value: 0.9928338299985809 and parameters: {'model_folder': 'roberta-base#omi', 'learning_rate': 0.0003967605077052988}. Best is trial 1 with value: 0.9928338299985809.
[I 2022-02-14 14:13:31,678] Trial 2 finished with value: 0.9908471690080886 and parameters: {'model_folder': 'bert-base-uncased#omi', 'learning_rate': 0.0008123245085588687}. Best is trial 1 with value: 0.9928338299985809.
[I 2022-02-14 14:13:49,994] Trial 3 pruned.
[I 2022-02-14 14:14:08,367] Trial 4 pruned.
[I 2022-02-14 14:14:26,246] Trial 5 pruned.
/home/paethon/git/py39env/lib/python3.9/site-packages/sklearn/metrics/_classification.py:1248: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
[I 2022-02-14 14:16:00,167] Trial 6 finished with value: 0.9891443167305236 and parameters: {'model_folder': 'bert-base-uncased#omi', 'learning_rate': 3.489018845491386e-05}. Best is trial 1 with value: 0.9928338299985809.
[I 2022-02-14 14:16:18,216] Trial 7 pruned.
[I 2022-02-14 14:17:47,187] Trial 8 finished with value: 0.9907762168298567 and parameters: {'model_folder': 'albert-base-v2#omi', 'learning_rate': 0.0002661901888489054}. Best is trial 1 with value: 0.9928338299985809.
[I 2022-02-14 14:18:01,725] Trial 9 pruned.
[I 2022-02-14 14:18:16,152] Trial 10 pruned.
[I 2022-02-14 14:18:35,808] Trial 11 pruned.
[I 2022-02-14 14:18:51,469] Trial 12 pruned.
Study statistics:
  Number of finished trials:  13
  Number of pruned trials:  8
  Number of complete trials:  5
Best trial:
  Value:  0.9928338299985809
  Params:
    model_folder: roberta-base#omi
    learning_rate: 0.0003967605077052988
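
The same numbers are also available programmatically: the .study attribute holds a standard Optuna study (see the Optuna section below), so its best_value and best_params attributes expose the winning trial:

print(automl.study.best_value)   # objective value of the best trial
print(automl.study.best_params)  # e.g. {'model_folder': 'roberta-base#omi', 'learning_rate': ...}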

model now contains the best model found during the search, and the log above shows the hyperparameters that were used to train it. So let’s test the model qualitatively with a few samples:

[15]:
questions = [
    "The device is great!",
    "The battery live is bad, but they wont replace it!",
    "Wow, this laptop has an incredible long battery life and blazing speed."
]

tags = model.predict(questions)

for i, q in enumerate(questions):
    print(f"{q} - {tags[i]}\n")

The device is great! - [['LAPTOP#GENERAL', 'positive']]

The battery life is bad, but they won't replace it! - [['BATTERY#OPERATION_PERFORMANCE', 'negative'], ['BATTERY#QUALITY', 'negative'], ['SUPPORT#QUALITY', 'negative']]

Wow, this laptop has an incredibly long battery life and blazing speed. - [['BATTERY#OPERATION_PERFORMANCE', 'positive'], ['LAPTOP#OPERATION_PERFORMANCE', 'positive']]

Seems like even with only 1000 optimization steps, the trained model is doing really well!

Optuna

Internally, AutoNLU uses Optuna (https://optuna.org/) with custom-designed pruners. The .study attribute contains the Optuna study and can, for example, be used to visualize the hyperparameter search (Optuna's documentation includes a tutorial on these visualizations).

[16]:
import optuna
[17]:
optuna.visualization.plot_optimization_history(automl.study)
[19]:
optuna.visualization.plot_intermediate_values(automl.study)
[21]:
optuna.visualization.plot_contour(automl.study)
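
The plot_* functions return plotly figures, so when running outside a notebook you can save the plots as standalone HTML files (a sketch, relying on plotly's standard write_html method):

fig = optuna.visualization.plot_optimization_history(automl.study)
fig.write_html("optimization_history.html")  # open in any browser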