1{ 2 "cells": [ 3 { 4 "cell_type": "markdown", 5 "metadata": { 6 "id": "c8Cx-rUMVX25" 7 }, 8 "source": [ 9 "##### Copyright 2020 The TensorFlow Authors." 10 ] 11 }, 12 { 13 "cell_type": "code", 14 "execution_count": null, 15 "metadata": { 16 "cellView": "form", 17 "id": "I9sUhVL_VZNO" 18 }, 19 "outputs": [], 20 "source": [ 21 "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", 22 "# you may not use this file except in compliance with the License.\n", 23 "# You may obtain a copy of the License at\n", 24 "#\n", 25 "# https://www.apache.org/licenses/LICENSE-2.0\n", 26 "#\n", 27 "# Unless required by applicable law or agreed to in writing, software\n", 28 "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", 29 "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", 30 "# See the License for the specific language governing permissions and\n", 31 "# limitations under the License." 32 ] 33 }, 34 { 35 "cell_type": "markdown", 36 "metadata": { 37 "id": "6Y8E0lw5eYWm" 38 }, 39 "source": [ 40 "# Post-training integer quantization with int16 activations" 41 ] 42 }, 43 { 44 "cell_type": "markdown", 45 "metadata": { 46 "id": "CGuqeuPSVNo-" 47 }, 48 "source": [ 49 "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n", 50 " \u003ctd\u003e\n", 51 " \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n", 52 " \u003c/td\u003e\n", 53 " \u003ctd\u003e\n", 54 " \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n", 55 " \u003c/td\u003e\n", 56 " \u003ctd\u003e\n", 57 " \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n", 58 " \u003c/td\u003e\n", 59 " \u003ctd\u003e\n", 60 " \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n", 61 " \u003c/td\u003e\n", 62 "\u003c/table\u003e" 63 ] 64 }, 65 { 66 "cell_type": "markdown", 67 "metadata": { 68 "id": "BTC1rDAuei_1" 69 }, 70 "source": [ 71 "## Overview\n", 72 "\n", 73 "[TensorFlow Lite](https://www.tensorflow.org/lite/) now supports\n", 74 "converting activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. We refer to this mode as the \"16x8 quantization mode\". This mode can improve accuracy of the quantized model significantly, when activations are sensitive to the quantization, while still achieving almost 3-4x reduction in model size. Moreover, this fully quantized model can be consumed by integer-only hardware accelerators. \n", 75 "\n", 76 "Some examples of models that benefit from this mode of the post-training quantization include: \n", 77 "* super-resolution, \n", 78 "* audio signal processing such\n", 79 "as noise cancelling and beamforming, \n", 80 "* image de-noising, \n", 81 "* HDR reconstruction\n", 82 "from a single image\n", 83 "\n", 84 "In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of this mode and doesn't show benefits over other available quantization techniques in TensorFlow Lite." 85 ] 86 }, 87 { 88 "cell_type": "markdown", 89 "metadata": { 90 "id": "2XsEP17Zelz9" 91 }, 92 "source": [ 93 "## Build an MNIST model" 94 ] 95 }, 96 { 97 "cell_type": "markdown", 98 "metadata": { 99 "id": "dDqqUIZjZjac" 100 }, 101 "source": [ 102 "### Setup" 103 ] 104 }, 105 { 106 "cell_type": "code", 107 "execution_count": null, 108 "metadata": { 109 "id": "gyqAw1M9lyab" 110 }, 111 "outputs": [], 112 "source": [ 113 "import logging\n", 114 "logging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n", 115 "\n", 116 "import tensorflow as tf\n", 117 "from tensorflow import keras\n", 118 "import numpy as np\n", 119 "import pathlib" 120 ] 121 }, 122 { 123 "cell_type": "markdown", 124 "metadata": { 125 "id": "srTSFKjn1tMp" 126 }, 127 "source": [ 128 "Check that the 16x8 quantization mode is available " 129 ] 130 }, 131 { 132 "cell_type": "code", 133 "execution_count": null, 134 "metadata": { 135 "id": "c6nb7OPlXs_3" 136 }, 137 "outputs": [], 138 "source": [ 139 "tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8" 140 ] 141 }, 142 { 143 "cell_type": "markdown", 144 "metadata": { 145 "id": "eQ6Q0qqKZogR" 146 }, 147 "source": [ 148 "### Train and export the model" 149 ] 150 }, 151 { 152 "cell_type": "code", 153 "execution_count": null, 154 "metadata": { 155 "id": "hWSAjQWagIHl" 156 }, 157 "outputs": [], 158 "source": [ 159 "# Load MNIST dataset\n", 160 "mnist = keras.datasets.mnist\n", 161 "(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n", 162 "\n", 163 "# Normalize the input image so that each pixel value is between 0 to 1.\n", 164 "train_images = train_images / 255.0\n", 165 "test_images = test_images / 255.0\n", 166 "\n", 167 "# Define the model architecture\n", 168 "model = keras.Sequential([\n", 169 " keras.layers.InputLayer(input_shape=(28, 28)),\n", 170 " keras.layers.Reshape(target_shape=(28, 28, 1)),\n", 171 " keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n", 172 " keras.layers.MaxPooling2D(pool_size=(2, 2)),\n", 173 " keras.layers.Flatten(),\n", 174 " keras.layers.Dense(10)\n", 175 "])\n", 176 "\n", 177 "# Train the digit classification model\n", 178 "model.compile(optimizer='adam',\n", 179 " loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", 180 " metrics=['accuracy'])\n", 181 "model.fit(\n", 182 " train_images,\n", 183 " train_labels,\n", 184 " epochs=1,\n", 185 " validation_data=(test_images, test_labels)\n", 186 ")" 187 ] 188 }, 189 { 190 "cell_type": "markdown", 191 "metadata": { 192 "id": "5NMaNZQCkW9X" 193 }, 194 "source": [ 195 "For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy." 196 ] 197 }, 198 { 199 "cell_type": "markdown", 200 "metadata": { 201 "id": "xl8_fzVAZwOh" 202 }, 203 "source": [ 204 "### Convert to a TensorFlow Lite model\n", 205 "\n", 206 "Using the Python [TFLiteConverter](https://www.tensorflow.org/lite/convert/python_api), you can now convert the trained model into a TensorFlow Lite model.\n", 207 "\n", 208 "Now, convert the model using `TFliteConverter` into default float32 format:" 209 ] 210 }, 211 { 212 "cell_type": "code", 213 "execution_count": null, 214 "metadata": { 215 "id": "_i8B2nDZmAgQ" 216 }, 217 "outputs": [], 218 "source": [ 219 "converter = tf.lite.TFLiteConverter.from_keras_model(model)\n", 220 "tflite_model = converter.convert()" 221 ] 222 }, 223 { 224 "cell_type": "markdown", 225 "metadata": { 226 "id": "F2o2ZfF0aiCx" 227 }, 228 "source": [ 229 "Write it out to a `.tflite` file:" 230 ] 231 }, 232 { 233 "cell_type": "code", 234 "execution_count": null, 235 "metadata": { 236 "id": "vptWZq2xnclo" 237 }, 238 "outputs": [], 239 "source": [ 240 "tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\n", 241 "tflite_models_dir.mkdir(exist_ok=True, parents=True)" 242 ] 243 }, 244 { 245 "cell_type": "code", 246 "execution_count": null, 247 "metadata": { 248 "id": "Ie9pQaQrn5ue" 249 }, 250 "outputs": [], 251 "source": [ 252 "tflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\n", 253 "tflite_model_file.write_bytes(tflite_model)" 254 ] 255 }, 256 { 257 "cell_type": "markdown", 258 "metadata": { 259 "id": "7BONhYtYocQY" 260 }, 261 "source": [ 262 "To instead quantize the model to 16x8 quantization mode, first set the `optimizations` flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification:" 263 ] 264 }, 265 { 266 "cell_type": "code", 267 "execution_count": null, 268 "metadata": { 269 "id": "HEZ6ET1AHAS3" 270 }, 271 "outputs": [], 272 "source": [ 273 "converter.optimizations = [tf.lite.Optimize.DEFAULT]\n", 274 "converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]" 275 ] 276 }, 277 { 278 "cell_type": "markdown", 279 "metadata": { 280 "id": "zLxQwZq9CpN7" 281 }, 282 "source": [ 283 "As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting converter options `inference_input(output)_type` to tf.int16." 284 ] 285 }, 286 { 287 "cell_type": "markdown", 288 "metadata": { 289 "id": "yZekFJC5-fOG" 290 }, 291 "source": [ 292 "Set the calibration data:" 293 ] 294 }, 295 { 296 "cell_type": "code", 297 "execution_count": null, 298 "metadata": { 299 "id": "Y3a6XFqvHbYM" 300 }, 301 "outputs": [], 302 "source": [ 303 "mnist_train, _ = tf.keras.datasets.mnist.load_data()\n", 304 "images = tf.cast(mnist_train[0], tf.float32) / 255.0\n", 305 "mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)\n", 306 "def representative_data_gen():\n", 307 " for input_value in mnist_ds.take(100):\n", 308 " # Model has only one input so each data point has one element.\n", 309 " yield [input_value]\n", 310 "converter.representative_dataset = representative_data_gen" 311 ] 312 }, 313 { 314 "cell_type": "markdown", 315 "metadata": { 316 "id": "xW84iMYjHd9t" 317 }, 318 "source": [ 319 "Finally, convert the model as usual. Note, by default the converted model will still use float input and outputs for invocation convenience." 320 ] 321 }, 322 { 323 "cell_type": "code", 324 "execution_count": null, 325 "metadata": { 326 "id": "yuNfl3CoHNK3" 327 }, 328 "outputs": [], 329 "source": [ 330 "tflite_16x8_model = converter.convert()\n", 331 "tflite_model_16x8_file = tflite_models_dir/\"mnist_model_quant_16x8.tflite\"\n", 332 "tflite_model_16x8_file.write_bytes(tflite_16x8_model)" 333 ] 334 }, 335 { 336 "cell_type": "markdown", 337 "metadata": { 338 "id": "PhMmUTl4sbkz" 339 }, 340 "source": [ 341 "Note how the resulting file is approximately `1/3` the size." 342 ] 343 }, 344 { 345 "cell_type": "code", 346 "execution_count": null, 347 "metadata": { 348 "id": "JExfcfLDscu4" 349 }, 350 "outputs": [], 351 "source": [ 352 "!ls -lh {tflite_models_dir}" 353 ] 354 }, 355 { 356 "cell_type": "markdown", 357 "metadata": { 358 "id": "L8lQHMp_asCq" 359 }, 360 "source": [ 361 "## Run the TensorFlow Lite models" 362 ] 363 }, 364 { 365 "cell_type": "markdown", 366 "metadata": { 367 "id": "-5l6-ciItvX6" 368 }, 369 "source": [ 370 "Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter." 371 ] 372 }, 373 { 374 "cell_type": "markdown", 375 "metadata": { 376 "id": "Ap_jE7QRvhPf" 377 }, 378 "source": [ 379 "### Load the model into the interpreters" 380 ] 381 }, 382 { 383 "cell_type": "code", 384 "execution_count": null, 385 "metadata": { 386 "id": "Jn16Rc23zTss" 387 }, 388 "outputs": [], 389 "source": [ 390 "interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\n", 391 "interpreter.allocate_tensors()" 392 ] 393 }, 394 { 395 "cell_type": "code", 396 "execution_count": null, 397 "metadata": { 398 "id": "J8Pztk1mvNVL" 399 }, 400 "outputs": [], 401 "source": [ 402 "interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))\n", 403 "interpreter_16x8.allocate_tensors()" 404 ] 405 }, 406 { 407 "cell_type": "markdown", 408 "metadata": { 409 "id": "2opUt_JTdyEu" 410 }, 411 "source": [ 412 "### Test the models on one image" 413 ] 414 }, 415 { 416 "cell_type": "code", 417 "execution_count": null, 418 "metadata": { 419 "id": "AKslvo2kwWac" 420 }, 421 "outputs": [], 422 "source": [ 423 "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n", 424 "\n", 425 "input_index = interpreter.get_input_details()[0][\"index\"]\n", 426 "output_index = interpreter.get_output_details()[0][\"index\"]\n", 427 "\n", 428 "interpreter.set_tensor(input_index, test_image)\n", 429 "interpreter.invoke()\n", 430 "predictions = interpreter.get_tensor(output_index)" 431 ] 432 }, 433 { 434 "cell_type": "code", 435 "execution_count": null, 436 "metadata": { 437 "id": "XZClM2vo3_bm" 438 }, 439 "outputs": [], 440 "source": [ 441 "import matplotlib.pylab as plt\n", 442 "\n", 443 "plt.imshow(test_images[0])\n", 444 "template = \"True:{true}, predicted:{predict}\"\n", 445 "_ = plt.title(template.format(true= str(test_labels[0]),\n", 446 " predict=str(np.argmax(predictions[0]))))\n", 447 "plt.grid(False)" 448 ] 449 }, 450 { 451 "cell_type": "code", 452 "execution_count": null, 453 "metadata": { 454 "id": "3gwhv4lKbYZ4" 455 }, 456 "outputs": [], 457 "source": [ 458 "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n", 459 "\n", 460 "input_index = interpreter_16x8.get_input_details()[0][\"index\"]\n", 461 "output_index = interpreter_16x8.get_output_details()[0][\"index\"]\n", 462 "\n", 463 "interpreter_16x8.set_tensor(input_index, test_image)\n", 464 "interpreter_16x8.invoke()\n", 465 "predictions = interpreter_16x8.get_tensor(output_index)" 466 ] 467 }, 468 { 469 "cell_type": "code", 470 "execution_count": null, 471 "metadata": { 472 "id": "CIH7G_MwbY2x" 473 }, 474 "outputs": [], 475 "source": [ 476 "plt.imshow(test_images[0])\n", 477 "template = \"True:{true}, predicted:{predict}\"\n", 478 "_ = plt.title(template.format(true= str(test_labels[0]),\n", 479 " predict=str(np.argmax(predictions[0]))))\n", 480 "plt.grid(False)" 481 ] 482 }, 483 { 484 "cell_type": "markdown", 485 "metadata": { 486 "id": "LwN7uIdCd8Gw" 487 }, 488 "source": [ 489 "### Evaluate the models" 490 ] 491 }, 492 { 493 "cell_type": "code", 494 "execution_count": null, 495 "metadata": { 496 "id": "05aeAuWjvjPx" 497 }, 498 "outputs": [], 499 "source": [ 500 "# A helper function to evaluate the TF Lite model using \"test\" dataset.\n", 501 "def evaluate_model(interpreter):\n", 502 " input_index = interpreter.get_input_details()[0][\"index\"]\n", 503 " output_index = interpreter.get_output_details()[0][\"index\"]\n", 504 "\n", 505 " # Run predictions on every image in the \"test\" dataset.\n", 506 " prediction_digits = []\n", 507 " for test_image in test_images:\n", 508 " # Pre-processing: add batch dimension and convert to float32 to match with\n", 509 " # the model's input data format.\n", 510 " test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n", 511 " interpreter.set_tensor(input_index, test_image)\n", 512 "\n", 513 " # Run inference.\n", 514 " interpreter.invoke()\n", 515 "\n", 516 " # Post-processing: remove batch dimension and find the digit with highest\n", 517 " # probability.\n", 518 " output = interpreter.tensor(output_index)\n", 519 " digit = np.argmax(output()[0])\n", 520 " prediction_digits.append(digit)\n", 521 "\n", 522 " # Compare prediction results with ground truth labels to calculate accuracy.\n", 523 " accurate_count = 0\n", 524 " for index in range(len(prediction_digits)):\n", 525 " if prediction_digits[index] == test_labels[index]:\n", 526 " accurate_count += 1\n", 527 " accuracy = accurate_count * 1.0 / len(prediction_digits)\n", 528 "\n", 529 " return accuracy" 530 ] 531 }, 532 { 533 "cell_type": "code", 534 "execution_count": null, 535 "metadata": { 536 "id": "T5mWkSbMcU5z" 537 }, 538 "outputs": [], 539 "source": [ 540 "print(evaluate_model(interpreter))" 541 ] 542 }, 543 { 544 "cell_type": "markdown", 545 "metadata": { 546 "id": "Km3cY9ry8ZlG" 547 }, 548 "source": [ 549 "Repeat the evaluation on the 16x8 quantized model:" 550 ] 551 }, 552 { 553 "cell_type": "code", 554 "execution_count": null, 555 "metadata": { 556 "id": "-9cnwiPp6EGm" 557 }, 558 "outputs": [], 559 "source": [ 560 "# NOTE: This quantization mode is an experimental post-training mode,\n", 561 "# it does not have any optimized kernels implementations or\n", 562 "# specialized machine learning hardware accelerators. Therefore,\n", 563 "# it could be slower than the float interpreter.\n", 564 "print(evaluate_model(interpreter_16x8))" 565 ] 566 }, 567 { 568 "cell_type": "markdown", 569 "metadata": { 570 "id": "L7lfxkor8pgv" 571 }, 572 "source": [ 573 "In this example, you have quantized a model to 16x8 with no difference in the accuracy, but with the 3x reduced size.\n" 574 ] 575 } 576 ], 577 "metadata": { 578 "colab": { 579 "collapsed_sections": [], 580 "name": "post_training_integer_quant_16x8.ipynb", 581 "toc_visible": true 582 }, 583 "kernelspec": { 584 "display_name": "Python 3", 585 "name": "python3" 586 } 587 }, 588 "nbformat": 4, 589 "nbformat_minor": 0 590} 591