BERT Language Model Inference System
A BERT Language Model Inference System is a transformer-based language model inference system that performs inference with a BERT-based LM (a language model built on the BERT architecture).
- Context:
- …
- Example(s):
- …
- Counter-Example(s):
- See: BERT Model Training System.
References
2018
- https://github.com/google-research/bert#prediction-from-classifier
- QUOTE: Once you have trained your classifier you can use it in inference mode by using the --do_predict=true command. You need to have a file named test.tsv in the input folder. Output will be created in file called test_results.tsv in the output folder. Each line will contain output for each sample, columns are the class probabilities.
- QUOTE:
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
export GLUE_DIR=/path/to/glue
export TRAINED_CLASSIFIER=/path/to/fine/tuned/classifier

python run_classifier.py \
  --task_name=MRPC \
  --do_predict=true \
  --data_dir=$GLUE_DIR/MRPC \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$TRAINED_CLASSIFIER \
  --max_seq_length=128 \
  --output_dir=/tmp/mrpc_output/
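The quoted README states that each line of test_results.tsv holds tab-separated class probabilities for one input sample. As a minimal illustrative sketch (not part of the BERT repository), one way to consume that output file and recover a predicted class per sample is an argmax over each row; the function name and return format here are assumptions for illustration:

```python
import csv

def read_predictions(path):
    """Parse a BERT test_results.tsv file.

    Each line contains tab-separated class probabilities for one sample.
    Returns a list of (probabilities, predicted_class_index) tuples,
    where predicted_class_index is the argmax of the probabilities.
    """
    results = []
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            probs = [float(p) for p in row]
            pred = max(range(len(probs)), key=probs.__getitem__)
            results.append((probs, pred))
    return results
```

For a binary task such as MRPC, each row would have two columns, so the predicted class index is 0 or 1.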