# mix nasty.fine_tune.pos (Nasty v0.3.0)
Fine-tunes a pre-trained transformer model for POS tagging.
## Usage

```shell
mix nasty.fine_tune.pos \
  --model roberta_base \
  --train data/en_ewt-ud-train.conllu \
  --validation data/en_ewt-ud-dev.conllu \
  --output models/pos_finetuned \
  --epochs 3 \
  --batch-size 16
```

## Options
- `--model` - Base transformer model to fine-tune (required). Options: `bert_base_cased`, `roberta_base`, `xlm_roberta_base`
- `--train` - Path to training data in CoNLL-U format (required)
- `--validation` - Path to validation data (optional)
- `--output` - Output directory for the fine-tuned model (default: `priv/models/finetuned`)
- `--epochs` - Number of training epochs (default: 3)
- `--batch-size` - Training batch size (default: 16)
- `--learning-rate` - Learning rate (default: 3e-5)
- `--max-length` - Maximum sequence length (default: 512)
- `--eval-steps` - Evaluate every N steps (default: 500)
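For reference, CoNLL-U training data is tab-separated with ten columns per token; a POS tagging task consumes the FORM (2nd) and UPOS (4th) columns, and lines starting with `#` are comments. A minimal sentence in this format looks like:

```text
# text = The dog barks.
1	The	the	DET	DT	Definite=Def|PronType=Art	2	det	_	_
2	dog	dog	NOUN	NN	Number=Sing	3	nsubj	_	_
3	barks	bark	VERB	VBZ	Number=Sing|Person=3|Tense=Pres	0	root	_	_
4	.	.	PUNCT	.	_	3	punct	_	_
```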
## Examples

```shell
# Quick fine-tuning with defaults
mix nasty.fine_tune.pos --model roberta_base --train data/train.conllu

# Full configuration
mix nasty.fine_tune.pos \
  --model bert_base_cased \
  --train data/train.conllu \
  --validation data/dev.conllu \
  --epochs 5 \
  --batch-size 32 \
  --learning-rate 0.00002 \
  --output models/pos_bert_finetuned
```
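As a rough illustration of the preprocessing such a task performs, the sketch below extracts `{form, upos}` training pairs from CoNLL-U text. This is illustrative only: `ConlluSketch` and `token_pairs/1` are hypothetical names, not part of Nasty's API.

```elixir
defmodule ConlluSketch do
  # Extract {form, upos} pairs from CoNLL-U text: skip comment lines,
  # split token lines on tabs, and keep columns 2 (FORM) and 4 (UPOS).
  # Multiword-token ranges (IDs like "3-4") are dropped for simplicity.
  # NOTE: a sketch, not Nasty's actual implementation.
  def token_pairs(conllu) do
    conllu
    |> String.split("\n", trim: true)
    |> Enum.reject(&String.starts_with?(&1, "#"))
    |> Enum.map(&String.split(&1, "\t"))
    |> Enum.filter(fn cols ->
      length(cols) == 10 and not String.contains?(hd(cols), "-")
    end)
    |> Enum.map(fn [_id, form, _lemma, upos | _rest] -> {form, upos} end)
  end
end
```

Feeding each sentence through a function like this yields the token/tag pairs that the fine-tuning loop aligns with the transformer's subword tokenization.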