{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "tX9nDQnr8AzT" }, "source": [ "\n", "\n", "# BERT Question Answering in TensorFlow with Mixed Precision" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "kL-6-WT78AzR" }, "source": [ "Copyright 2021 NVIDIA Corporation. All Rights Reserved.\n", "\n", "Licensed under the Apache License, Version 2.0 (the \"License\");\n", "you may not use this file except in compliance with the License.\n", "You may obtain a copy of the License at\n", "\n", " http://www.apache.org/licenses/LICENSE-2.0\n", "\n", "Unless required by applicable law or agreed to in writing, software\n", "distributed under the License is distributed on an \"AS IS\" BASIS,\n", "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "See the License for the specific language governing permissions and\n", "limitations under the License." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 306 }, "colab_type": "code", "id": "FOa47jxd80bS", "outputId": "4c1db5fb-0dd8-45b3-d00b-bccfb1941e78" }, "outputs": [], "source": [ "!nvidia-smi" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Loy_jvmr8AzT" }, "source": [ "## 1. Overview\n", "\n", "Bidirectional Embedding Representations from Transformers (BERT), is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. \n", "\n", "The original paper can be found here: https://arxiv.org/abs/1810.04805.\n", "\n", "NVIDIA's BERT is an optimized version of Google's official implementation, leveraging mixed precision arithmetic and tensor cores on V100 GPUS for faster training times while maintaining target accuracy." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "BXp2mMCx8AzU" }, "source": [ "### Learning objectives\n", "\n", "This notebook demonstrates:\n", "- Inference on Question Answering (QA) task with BERT Large model\n", "- The use/download of fine-tuned NVIDIA BERT models from [NGC](https://ngc.nvidia.com)\n", "- Use of Mixed Precision models for Inference" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "xLlJiTQN8AzV" }, "source": [ "## 2. Setup" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "oiQ5qvJD8Azm" }, "source": [ "### Pre-Trained NVIDIA BERT TensorFlow Models on NGC\n", "\n", "\n", "\n", "We will be using the following configuration of BERT in this example:\n", "\n", "| **Model** | **Hidden layers** | **Hidden unit size** | **Attention heads** | **Feedforward filter size** | **Max sequence length** | **Parameters** |\n", "|:---------:|:----------:|:----:|:---:|:--------:|:---:|:----:|\n", "|BERTLARGE|24 encoder|1024| 16|4 x 1024|512|330M|\n", "\n", "**To do so, we will take advantage of the pre-trained models available on the [NGC Model Registry](https://ngc.nvidia.com/catalog/models).**\n", "\n", "Among the many configurations available we will download one of these two:\n", "\n", " - **bert_tf_ckpt_large_qa_squad2_amp_384**\n", "\n", "which are trained on the [SQuaD 2.0 Dataset](https://rajpurkar.github.io/SQuAD-explorer/)." 
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "5iJR47XD8Azg" }, "source": [ "We can choose the mixed precision model (which takes much less time to train than the fp32 version) without losing accuracy, with the following flag: " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "wT8mFmG51eUt" }, "outputs": [], "source": [ "use_mixed_precision_model = True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "root_dir=\"/scratch/ws/1//bert/\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# bert_tf_ckpt_large_qa_squad2_amp_384\n", "DATA_DIR_FT = root_dir+'data/finetuned_large_model_SQUAD2.0'\n", "!mkdir -p $DATA_DIR_FT\n", " \n", "!wget --content-disposition -O $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip \\\n", "https://api.ngc.nvidia.com/v2/models/nvidia/bert_tf_ckpt_large_qa_squad2_amp_384/versions/19.03.1/zip \\\n", "&& unzip -n -d $DATA_DIR_FT/ $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip \\\n", "&& rm -rf $DATA_DIR_FT/bert_tf_ckpt_large_qa_squad2_amp_384_19.03.1.zip" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "BuE_gCBUp6uD" }, "source": [ "### NGC Model Scripts\n", "\n", "While we're at it, we'll also pull down some BERT helper scripts from the [NGC Model Scripts Registry](https://ngc.nvidia.com/catalog/model-scripts/nvidia:bert_for_tensorflow)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "kavDaBXpqd7T" }, "outputs": [], "source": [ "# Download BERT helper scripts\n", "!wget -nc --show-progress -O bert_scripts.zip \\\n", " https://api.ngc.nvidia.com/v2/recipes/nvidia/bert_for_tensorflow/versions/1/zip\n", "!mkdir -p $root_dir\n", "!unzip -n -d $root_dir bert_scripts.zip" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### BERT Config" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "aEs0P1C_RPIi" }, "outputs": [], "source": [ "# Download BERT vocab file\n", "!mkdir -p $root_dir/config.qa\n", "!wget -nc https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt \\\n", " -O $root_dir/config.qa/vocab.txt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "MO2tAJ5TRRUB" }, "outputs": [], "source": [ "%%writefile $root_dir/config.qa/bert_config.json\n", "{\n", " \"attention_probs_dropout_prob\": 0.1,\n", " \"hidden_act\": \"gelu\",\n", " \"hidden_dropout_prob\": 0.1,\n", " \"hidden_size\": 1024,\n", " \"initializer_range\": 0.02,\n", " \"intermediate_size\": 4096,\n", " \"max_position_embeddings\": 512,\n", " \"num_attention_heads\": 16,\n", " \"num_hidden_layers\": 24,\n", " \"type_vocab_size\": 2,\n", " \"vocab_size\": 30522\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Helper Functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create dynamic JSON files based on user inputs\n", "def write_input_file(context, qinputs, predict_file):\n", " # Remove quotes and new lines from text for valid JSON\n", " context = context.replace('\"', '').replace('\\n', '')\n", " # Create JSON dict to write\n", " json_dict = {\n", " \"data\": [\n", " {\n", " \"title\": \"BERT QA\",\n", " \"paragraphs\": [\n", " {\n", " \"context\": context,\n", " \"qas\": qinputs\n", " 
}\n", " ]\n", " }\n", " ]\n", " }\n", " # Write JSON to input file\n", " with open(predict_file, 'w') as json_file:\n", " import json\n", " json.dump(json_dict, json_file, indent=2)\n", " \n", "# Display Inference Results as HTML Table\n", "def display_results(predict_file, output_prediction_file):\n", " import json\n", " from IPython.display import display, HTML\n", "\n", " # Here we show only the prediction results, nbest prediction is also available in the output directory\n", " results = \"\"\n", " with open(predict_file, 'r') as query_file:\n", " queries = json.load(query_file)\n", " input_data = queries[\"data\"]\n", " with open(output_prediction_file, 'r') as result_file:\n", " data = json.load(result_file)\n", " for entry in input_data:\n", " for paragraph in entry[\"paragraphs\"]:\n", " for qa in paragraph[\"qas\"]:\n", " results += \"{}{}{}\".format(qa[\"id\"], qa[\"question\"], data[qa[\"id\"]])\n", "\n", " display(HTML(\"{}
IdQuestionAnswer
\".format(results)))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pLdBPppf8AzV" }, "source": [ "## 3. BERT Inference: Question Answering\n", "\n", "We can run inference on a fine-tuned BERT model for tasks like Question Answering.\n", "\n", "Here we use a BERT model fine-tuned on a [SQuaD 2.0 Dataset](https://rajpurkar.github.io/SQuAD-explorer/) which contains 100,000+ question-answer pairs on 500+ articles combined with over 50,000 new, unanswerable questions." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "J-jHuLNk8AzW" }, "source": [ "### Paragraph and Queries\n", "\n", "In this example we will ask our BERT model questions related to the following paragraph:\n", "\n", "**The Apollo Program**\n", "_\"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975.\"_\n", "\n", " \n", "---\n", "\n", "The paragraph and the questions can be easily customized by changing the code below:\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "dr_eMAtfSN5R" }, "outputs": [], "source": [ "# Create BERT input file with (1) context and (2) questions to be answered based on that context\n", "predict_file = root_dir+'config.qa/input.json'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "colab_type": "code", "id": "LcOfv3dn8AzX", "outputId": "ae803153-071a-4d5e-82df-8f684c9d0ab6" }, "outputs": [], "source": [ "%%writefile $predict_file\n", "{\"data\": \n", " [\n", " {\"title\": \"Project Apollo\",\n", " \"paragraphs\": [\n", " {\"context\":\"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's administration as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was later dedicated to President John F. Kennedy's national goal of landing a man on the Moon and returning him safely to the Earth by the end of the 1960s, which he proposed in a May 25, 1961, address to Congress. 
Project Mercury was followed by the two-man Project Gemini. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, and was supported by the two-man Gemini program which ran concurrently with it from 1962 to 1966. Gemini missions developed some of the space travel techniques that were necessary for the success of the Apollo missions. Apollo used Saturn family rockets as launch vehicles. Apollo/Saturn vehicles were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three manned missions in 1973-74, and the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975.\",\n", "     \"qas\": [\n", "      {\"question\": \"What project put the first Americans into space?\",\n", "       \"id\": \"Q1\"\n", "      },\n", "      {\"question\": \"What program was created to carry out these projects and missions?\",\n", "       \"id\": \"Q2\"\n", "      },\n", "      {\"question\": \"What year did the first manned Apollo flight occur?\",\n", "       \"id\": \"Q3\"\n", "      },\n", "      {\"question\": \"What President is credited with the notion of putting Americans on the moon?\",\n", "       \"id\": \"Q4\"\n", "      },\n", "      {\"question\": \"Who did the U.S. collaborate with on an Earth orbit mission in 1975?\",\n", "       \"id\": \"Q5\"\n", "      },\n", "      {\"question\": \"How long did Project Apollo run?\",\n", "       \"id\": \"Q6\"\n", "      },\n", "      {\"question\": \"What program helped develop space travel techniques that Project Apollo used?\",\n", "       \"id\": \"Q7\"\n", "      },\n", "      {\"question\": \"What space station supported three manned missions in 1973-1974?\",\n", "       \"id\": \"Q8\"\n", "      }\n", "]}]}]}" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "VNPDdF_f8Azq" }, "source": [ "## 4. Running Question/Answer Inference\n", "\n", "To run QA inference, we will launch the script `run_squad.py` with the following parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "jNA4ezvR8Azr" }, "outputs": [], "source": [ "import os\n", "\n", "# This specifies the model architecture.\n", "bert_config_file = root_dir + 'config.qa/bert_config.json'\n", "\n", "# The vocabulary file that the BERT model was trained on.\n", "vocab_file = root_dir + 'config.qa/vocab.txt'\n", "\n", "# Initial checkpoint from the fine-tuned BERT Large model\n", "init_checkpoint = os.path.join(root_dir, 'data/finetuned_large_model_SQUAD2.0/model.ckpt')\n", "\n", "# The output directory where all the results are saved.\n", "output_dir = root_dir + 'results'\n", "output_prediction_file = os.path.join(output_dir, 'predictions.json')\n", "\n", "# Whether to lowercase the input - True for uncased models, False for cased models.\n", "do_lower_case = True\n", "\n", "# Total batch size for predictions\n", "predict_batch_size = 8\n", "\n", "# Whether to run eval on the dev set.\n", "do_predict = True\n", "\n", "# When splitting up a long document into chunks, how much stride to take between chunks.\n", "doc_stride = 128\n", "\n", "# The maximum total input sequence length after WordPiece tokenization.\n", "# Sequences longer than this will be truncated, and sequences shorter than this will be padded.\n", "max_seq_length = 384" ] }, 
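{ "cell_type": "markdown", "metadata": {}, "source": [ "Before launching the script, the optional cell below (a sketch, not part of the original workflow) confirms that every file passed to `run_squad.py` is in place. Note that `init_checkpoint` is a TensorFlow checkpoint *prefix*, so the test looks for its `.index` file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sanity check (sketch): confirm the files run_squad.py needs actually exist.\n", "# init_checkpoint is a checkpoint prefix, so its .index file is what is on disk.\n", "import os\n", "\n", "for path in [bert_config_file, vocab_file, init_checkpoint + '.index', predict_file]:\n", "    print('{:10s} {}'.format('OK' if os.path.exists(path) else 'MISSING', path))\n", "os.makedirs(output_dir, exist_ok=True)  # make sure the results directory exists" ] }, 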
Run Inference" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "python_file=root_dir+\"run_squad.py\"\n", "python_file" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "colab_type": "code", "id": "No3_W3fd8Azt", "outputId": "552205fd-4e3e-48a8-898b-836412062d1d" }, "outputs": [], "source": [ "# Ask BERT questions\n", "!python $python_file \\\n", " --bert_config_file=$bert_config_file \\\n", " --vocab_file=$vocab_file \\\n", " --init_checkpoint=$init_checkpoint \\\n", " --output_dir=$output_dir \\\n", " --do_predict=$do_predict \\\n", " --predict_file=$predict_file \\\n", " --predict_batch_size=$predict_batch_size \\\n", " --doc_stride=$doc_stride \\\n", " --max_seq_length=$max_seq_length" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "ELf0wtQ08Azw" }, "source": [ "### 4b. Display Results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "lZ0OZclQ8Azw" }, "outputs": [], "source": [ "display_results(predict_file, output_prediction_file)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "NH0Umn_e6Jsz" }, "source": [ "
\n", " Click to reveal expected answers to the questions above\n", " \n", "| Id | Question | Answer |\n", "|----|----------|--------|\n", "| Q1 | What project put the first Americans into space? | Project Mercury |\n", "| Q2 | What program was created to carry out these projects and missions? | The Apollo program |\n", "| Q3 | What year did the first manned Apollo flight occur? | 1968 |\n", "| Q4 | What President is credited with the notion of putting Americans on the moon?\t | John F. Kennedy |\n", "| Q5 | Who did the U.S. collaborate with on an Earth orbit mission in 1975? | Soviet Union |\n", "| Q6 | How long did Project Apollo run? | 1961 to 1972 |\n", "| Q7 | What program helped develop space travel techniques that Project Apollo used? | Gemini missions |\n", "| Q8 | What space station supported three manned missions in 1973-1974? | Skylab |\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "sQ8EfbPm8Azz" }, "source": [ "## 5. Custom Inputs" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "OHWl7yus8Azz" }, "source": [ "Now that you are familiar with running QA Inference on BERT, you may want to try\n", "your own paragraphs and queries.\n", "\n", "\n", "1. Copy and paste your context from Wikipedia, news articles, etc. when prompted below\n", "2. Enter questions based on the context when prompted below.\n", "3. Run the inference script\n", "4. Display the inference results" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "mvnB1JUpWV_a" }, "outputs": [], "source": [ "predict_file = root_dir+'config.qa/custom_input.json'\n", "num_questions = 4 # You can configure this number" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "ryd1akIpBaKz" }, "outputs": [], "source": [ "# Create your own context to ask questions about.\n", "context = input(\"Paste your context here: \")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "fEalqfQXnZDT" }, "outputs": [], "source": [ "# Get questions from user input\n", "questions = [input(\"Question {}/{}: \".format(i+1, num_questions)) for i in range(num_questions)]\n", "# Format questions and write to JSON input file\n", "qinputs = [{ \"question\":q, \"id\":\"Q{}\".format(i+1)} for i,q in enumerate(questions)]\n", "write_input_file(context, qinputs, predict_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "X_RbzPEGWeWE" }, "outputs": [], "source": [ "# Ask BERT questions\n", "!python $python_file \\\n", " --bert_config_file=$bert_config_file \\\n", " --vocab_file=$vocab_file \\\n", " --init_checkpoint=$init_checkpoint \\\n", " --output_dir=$output_dir \\\n", " --do_predict=$do_predict \\\n", " --predict_file=$predict_file \\\n", " --predict_batch_size=$predict_batch_size \\\n", " --doc_stride=$doc_stride \\\n", " --max_seq_length=$max_seq_length" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "aMnxQZb_WiUN" }, "outputs": [], "source": [ "display_results(predict_file, output_prediction_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "BERT_colab_demo.ipynb", "provenance": [], "toc_visible": true, "version": "0.3.2" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }