Multi-Class Classification
The goal of multi-class classification is to classify a text sequence into one of n classes. A transformer-based multi-class text classification model typically consists of a transformer model with a classification layer on top of it. The classification layer has n output neurons, one for each class.
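Conceptually, the classification head is just a linear layer that maps the transformer's pooled sequence representation to n logits, one per class. The PyTorch-style sketch below illustrates this idea only; it is not Simple Transformers' actual implementation, and the names (ClassificationHead, hidden_size) are purely illustrative.

import torch.nn as nn

class ClassificationHead(nn.Module):
    """Illustrative only: maps a pooled transformer output to n class logits."""
    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)  # num_labels output neurons

    def forward(self, pooled_output):
        # pooled_output: (batch_size, hidden_size) -> logits: (batch_size, num_labels)
        return self.classifier(pooled_output)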
The minimal start given below uses an n value of 3. You can change n by changing the num_labels parameter.
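For example, to build a five-class classifier you could pass num_labels=5 (assuming your labels are the integers 0 through 4):

model = ClassificationModel("bert", "bert-base-cased", num_labels=5)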
Tip: Refer to ClassificationModel for details on configuring a classification model.
Minimal Start
from simpletransformers.classification import ClassificationModel, ClassificationArgs
import pandas as pd
import logging
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
# Preparing train data
train_data = [
    ["Aragorn was the heir of Isildur", 1],
    ["Frodo was the heir of Isildur", 0],
    ["Pippin is stronger than Merry", 2],
]
train_df = pd.DataFrame(train_data)
train_df.columns = ["text", "labels"]
# Preparing eval data
eval_data = [
    ["Aragorn was the heir of Elendil", 1],
    ["Sam was the heir of Isildur", 0],
    ["Merry is stronger than Pippin", 2],
]
eval_df = pd.DataFrame(eval_data)
eval_df.columns = ["text", "labels"]
# Optional model configuration
model_args = ClassificationArgs(num_train_epochs=1)
# Create a ClassificationModel
model = ClassificationModel(
    "bert",
    "bert-base-cased",
    num_labels=3,
    args=model_args
)
# Train the model
model.train_model(train_df)
# Evaluate the model
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
# Make predictions with the model
predictions, raw_outputs = model.predict(["Sam was a Wizard"])
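Here, predictions contains the predicted class index for each input, and raw_outputs contains the raw per-class scores. To turn these into probabilities, one option is to apply a softmax over the class dimension; the sketch below assumes raw_outputs is a (num_examples, num_labels) array of logits, which is the usual case for multi-class models without a sliding window.

from scipy.special import softmax
probabilities = softmax(raw_outputs, axis=1)  # each row sums to 1
predicted_classes = probabilities.argmax(axis=1)  # should match predictions above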